NSM4D: Neural Scene Model Based Online 4D Point Cloud Sequence Understanding
- URL: http://arxiv.org/abs/2310.08326v1
- Date: Thu, 12 Oct 2023 13:42:49 GMT
- Title: NSM4D: Neural Scene Model Based Online 4D Point Cloud Sequence Understanding
- Authors: Yuhao Dong, Zhuoyang Zhang, Yunze Liu, Li Yi
- Abstract summary: We introduce a generic online 4D perception paradigm called NSM4D.
NSM4D serves as a plug-and-play strategy that can be adapted to existing 4D backbones.
We demonstrate significant improvements on various online perception benchmarks in indoor and outdoor settings.
- Score: 20.79861588128133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding 4D point cloud sequences online is of significant practical
value in various scenarios such as VR/AR, robotics, and autonomous driving. The
key goal is to continuously analyze the geometry and dynamics of a 3D scene as
unstructured and redundant point cloud sequences arrive. The main challenge
is to model the long-term history effectively while keeping computational costs
manageable. To tackle this challenge, we introduce a generic online 4D
perception paradigm called NSM4D. NSM4D serves as a plug-and-play strategy that
can be adapted to existing 4D backbones, significantly enhancing their online
perception capabilities for both indoor and outdoor scenarios. To efficiently
capture the redundant 4D history, we propose a neural scene model that
factorizes geometry and motion information into tokens that separately store
geometry and motion features. Exploiting the history then becomes
as straightforward as querying the neural scene model. As the sequence
progresses, the neural scene model dynamically deforms to align with each new
observation, providing historical context while updating itself accordingly.
By employing a token representation, NSM4D also
exhibits robustness to low-level sensor noise and maintains a compact size
through a geometric sampling scheme. We integrate NSM4D with state-of-the-art
4D perception backbones, demonstrating significant improvements on various
online perception benchmarks in indoor and outdoor settings. Notably, we
achieve a 9.6% accuracy improvement for HOI4D online action segmentation and a
3.4% mIoU improvement for SemanticKITTI online semantic segmentation.
Furthermore, we show that NSM4D inherently offers excellent scalability to
longer sequences beyond the training set, which is crucial for real-world
applications.
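As a concrete illustration of the paradigm described above, below is a minimal PyTorch sketch of how such a geometry/motion token store could be queried and updated online. Everything here is an assumption for illustration: the class and method names (NeuralSceneModel, query, update), the k-nearest-neighbor query, the flow-based deformation, and the voxel-based subsampling stand in for the paper's actual token construction, deformation, and geometric sampling scheme.

```python
# Hypothetical sketch of an NSM4D-style neural scene model; the real token
# design, deformation, and sampling are defined in the paper, not here.
import torch


class NeuralSceneModel:
    """Token store that factorizes 4D history into geometry and motion features."""

    def __init__(self, feat_dim: int, voxel_size: float = 0.1):
        self.voxel_size = voxel_size                  # cell size for geometric sampling
        self.positions = torch.empty(0, 3)            # token anchor points, (N, 3)
        self.geo_feats = torch.empty(0, feat_dim)     # per-token geometry features
        self.motion_feats = torch.empty(0, feat_dim)  # per-token motion features

    def query(self, points: torch.Tensor, k: int = 4) -> torch.Tensor:
        """Return historical context for new points from their nearest tokens."""
        if self.positions.shape[0] == 0:              # no history yet
            return torch.zeros(points.shape[0], 2 * self.geo_feats.shape[1])
        k = min(k, self.positions.shape[0])
        dist = torch.cdist(points, self.positions)           # (M, N) pairwise distances
        idx = dist.topk(k, dim=-1, largest=False).indices    # (M, k) nearest tokens
        return torch.cat([self.geo_feats[idx].mean(dim=1),   # pooled geometry history
                          self.motion_feats[idx].mean(dim=1)], dim=-1)

    def update(self, points, geo_feats, motion_feats, flow=None):
        """Deform existing tokens toward the new frame, then absorb new tokens."""
        if flow is not None and self.positions.shape[0] > 0:
            self.positions = self.positions + flow    # per-token scene flow alignment
        self.positions = torch.cat([self.positions, points])
        self.geo_feats = torch.cat([self.geo_feats, geo_feats])
        self.motion_feats = torch.cat([self.motion_feats, motion_feats])
        self._geometric_subsample()                   # keep the model compact

    def _geometric_subsample(self):
        """Keep only the first token in each occupied voxel cell."""
        cells = torch.floor(self.positions / self.voxel_size).long()
        _, inverse = torch.unique(cells, dim=0, return_inverse=True)
        seen, keep = set(), []
        for i, cell in enumerate(inverse.tolist()):
            if cell not in seen:
                seen.add(cell)
                keep.append(i)
        keep = torch.tensor(keep)
        self.positions = self.positions[keep]
        self.geo_feats = self.geo_feats[keep]
        self.motion_feats = self.motion_feats[keep]
```

In an online loop, each incoming frame would first call query() to fetch historical context for the 4D backbone and then call update() with the backbone's per-point features; the voxel subsampling above is one plausible reading of the "geometric sampling scheme" the abstract mentions.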
Related papers
- Dynamics-Aware Gaussian Splatting Streaming Towards Fast On-the-Fly Training for 4D Reconstruction [12.111389926333592]
Current 3DGS-based streaming methods treat the Gaussian primitives uniformly and constantly renew the densified Gaussians.
We propose a novel three-stage pipeline for iterative streamable 4D dynamic spatial reconstruction.
Our method achieves state-of-the-art performance in online 4D reconstruction, demonstrating a 20% improvement in on-the-fly training speed, superior representation quality, and real-time rendering capability.
arXiv Detail & Related papers (2024-11-22T10:47:47Z)
- Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models [116.31344506738816]
We present a novel framework, Diffusion4D, for efficient and scalable 4D content generation.
We develop a 4D-aware video diffusion model capable of synthesizing orbital views of dynamic 3D assets.
Our method surpasses prior state-of-the-art techniques in terms of generation efficiency and 4D geometry consistency.
arXiv Detail & Related papers (2024-05-26T17:47:34Z)
- 4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency [118.15258850780417]
This work introduces 4DGen, a novel framework for grounded 4D content creation.
We identify static 3D assets and monocular video sequences as key components in constructing the 4D content.
Our pipeline facilitates conditional 4D generation, enabling users to specify geometry (3D assets) and motion (monocular videos).
arXiv Detail & Related papers (2023-12-28T18:53:39Z)
- Complete-to-Partial 4D Distillation for Self-Supervised Point Cloud Sequence Representation Learning [14.033085586047799]
This paper proposes a new 4D self-supervised pre-training method called Complete-to-Partial 4D Distillation.
Our key idea is to formulate 4D self-supervised representation learning as a teacher-student knowledge distillation framework; a generic sketch of this setup appears after this list.
Experiments show that this approach significantly outperforms previous pre-training approaches on a wide range of 4D point cloud sequence understanding tasks.
arXiv Detail & Related papers (2022-12-10T16:26:19Z)
- LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling [69.56581851211841]
We propose LoRD, a novel local 4D implicit representation for dynamic clothed humans.
Our key insight is to encourage the network to learn the latent codes of local part-level representation.
LoRD has a strong capability for representing 4D humans and outperforms state-of-the-art methods in practical applications.
arXiv Detail & Related papers (2022-08-18T03:49:44Z)
- H4D: Human 4D Modeling by Learning Neural Compositional Representation [75.34798886466311]
This work presents a novel framework that can effectively learn a compact and compositional representation for dynamic humans.
A simple yet effective linear motion model is proposed to provide a rough and regularized motion estimation.
Experiments demonstrate that our method not only is effective in recovering dynamic humans with accurate motion and detailed geometry, but is also amenable to various 4D human-related tasks.
arXiv Detail & Related papers (2022-03-02T17:10:49Z)
- 4D-Net for Learned Multi-Modal Alignment [87.58354992455891]
We present 4D-Net, a 3D object detection approach that utilizes both 3D point cloud and RGB sensing information over time.
We are able to incorporate the 4D information by performing a novel connection learning across various feature representations and levels of abstraction, as well as by observing geometric constraints.
arXiv Detail & Related papers (2021-09-02T16:35:00Z)
- Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction [43.60322886598972]
This paper focuses on the task of 4D shape reconstruction from a sequence of point clouds.
We present a novel pipeline to learn a temporal evolution of the 3D human shape through capturing continuous transformation functions among cross-frame occupancy fields.
arXiv Detail & Related papers (2021-03-30T13:36:03Z)
- V4D: 4D Convolutional Neural Networks for Video-level Representation Learning [58.548331848942865]
Most 3D CNNs for video representation learning are clip-based and thus do not consider the video-level temporal evolution of features.
We propose Video-level 4D Convolutional Neural Networks, or V4D, to model long-range representations with 4D convolutions.
V4D achieves excellent results, surpassing recent 3D CNNs by a large margin.
arXiv Detail & Related papers (2020-02-18T09:27:41Z)
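As referenced in the Complete-to-Partial 4D Distillation entry above, the following is a generic teacher-student training step sketching that idea. The encoder interfaces, the random point dropping that forms the partial view, and the MSE feature-matching loss are illustrative assumptions, not the paper's actual method.

```python
# Generic complete-to-partial distillation step; all interfaces are assumed.
import torch
import torch.nn.functional as F


def c2p_distillation_step(student, teacher, complete_seq, optimizer, drop_ratio=0.5):
    """One self-supervised step: the student sees a partial 4D point cloud
    sequence and learns to match the teacher's features from the complete one."""
    with torch.no_grad():
        target = teacher(complete_seq)            # (B, D) features from the full view

    # Build the partial view by randomly dropping points in every frame.
    B, T, N, _ = complete_seq.shape               # (batch, frames, points, xyz)
    keep = int(N * (1.0 - drop_ratio))
    idx = torch.rand(B, T, N, device=complete_seq.device).argsort(dim=-1)[..., :keep]
    partial_seq = torch.gather(
        complete_seq, 2, idx.unsqueeze(-1).expand(-1, -1, -1, 3))

    pred = student(partial_seq)                   # student features from the partial view
    loss = F.mse_loss(pred, target)               # distill: match the teacher's features

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the teacher is often a frozen or momentum-averaged copy of the student; under the assumptions above, both take a (B, T, N, 3) point cloud sequence and return (B, D) features.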