MovingParts: Motion-based 3D Part Discovery in Dynamic Radiance Field
- URL: http://arxiv.org/abs/2303.05703v2
- Date: Fri, 7 Apr 2023 06:57:06 GMT
- Title: MovingParts: Motion-based 3D Part Discovery in Dynamic Radiance Field
- Authors: Kaizhi Yang, Xiaoshuai Zhang, Zhiao Huang, Xuejin Chen, Zexiang Xu,
Hao Su
- Abstract summary: We present MovingParts, a NeRF-based method for dynamic scene reconstruction and part discovery.
Under the Lagrangian view, we parameterize the scene motion by tracking the trajectory of particles on objects.
The Lagrangian view makes it convenient to discover parts by factorizing the scene motion as a composition of part-level rigid motions.
- Score: 42.236015785792965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present MovingParts, a NeRF-based method for dynamic scene reconstruction
and part discovery. We consider motion an important cue for identifying
parts, as all particles on the same part share a common motion pattern.
From the perspective of fluid simulation, existing deformation-based methods
for dynamic NeRF can be seen as parameterizing the scene motion under the
Eulerian view, i.e., focusing on specific locations in space through which the
fluid flows as time passes. However, it is intractable to extract the motion of
constituent objects or parts using the Eulerian view representation. In this
work, we introduce the dual Lagrangian view and enforce representations under
the Eulerian/Lagrangian views to be cycle-consistent. Under the Lagrangian
view, we parameterize the scene motion by tracking the trajectory of particles
on objects. The Lagrangian view makes it convenient to discover parts by
factorizing the scene motion as a composition of part-level rigid motions.
Experimentally, our method can achieve fast and high-quality dynamic scene
reconstruction from even a single moving camera, and the induced part-based
representation allows direct applications of part tracking, animation, 3D scene
editing, etc.
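The abstract names two concrete mechanisms: a Lagrangian motion map trained to be cycle-consistent with the Eulerian one, and part discovery by factorizing scene motion into a composition of part-level rigid motions. The NumPy sketch below illustrates those two ideas only and is not the authors' implementation: in the paper both maps are learned networks and the part grouping is learned, whereas the two-part toy motion, the softmax grouping heuristic, and all names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rot_z(theta):
    """3x3 rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Toy scene with two rigid parts: part p rotates at rate omega[p] and
# translates at velocity vel[p]. All values are invented for the sketch.
omega = np.array([0.3, -0.5])
vel = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
centers = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # part centroids

def part_weights(x):
    """Soft assignment of a point to the two parts (softmax over negative
    distance). In the paper this grouping is learned, not hand-crafted."""
    d = np.linalg.norm(x - centers, axis=1)
    w = np.exp(-4.0 * d)
    return w / w.sum()

def lagrangian_map(x0, t):
    """Lagrangian view: track canonical particle x0 to its position at time t
    as a weighted composition of part-level rigid motions."""
    w = part_weights(x0)
    return sum(w[p] * (rot_z(omega[p] * t) @ x0 + vel[p] * t) for p in range(2))

def eulerian_map(xt, t):
    """Eulerian view: for spatial location xt at time t, recover the canonical
    particle occupying it (a stand-in for a learned backward-warp network)."""
    w = part_weights(xt)  # approximation: weights evaluated at the deformed point
    return sum(w[p] * (rot_z(omega[p] * t).T @ (xt - vel[p] * t)) for p in range(2))

# Cycle consistency: canonical -> time t -> canonical should round-trip.
pts = rng.normal(size=(256, 3))
t = 0.7
loss = np.mean([np.sum((eulerian_map(lagrangian_map(x, t), t) - x) ** 2)
                for x in pts])
print(f"cycle-consistency loss: {loss:.4f}")
```

Because a blend of per-part inverse motions is not the exact inverse of the blended forward motion, the printed loss is small but nonzero; that residual is what a cycle-consistency objective of this kind drives toward zero when both maps are learned.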
Related papers
- Shape of Motion: 4D Reconstruction from a Single Video [51.04575075620677]
We introduce a method capable of reconstructing generic dynamic scenes, featuring explicit, full-sequence-long 3D motion.
We exploit the low-dimensional structure of 3D motion by representing scene motion with a compact set of SE3 motion bases.
Our method achieves state-of-the-art performance for both long-range 3D/2D motion estimation and novel view synthesis on dynamic scenes.
arXiv Detail & Related papers (2024-07-18T17:59:08Z)
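The compact set of SE3 motion bases in the entry above amounts to a linear-blend-skinning-style computation: each scene point's motion is a convex combination of a few shared rigid transforms, so many trajectories are explained by a handful of bases plus per-point weights. A minimal NumPy sketch for a single time step, with random stand-in bases and weights (in the actual method both are optimized; all names here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def small_rotation(scale=0.1):
    """Small random 3D rotation via Rodrigues' formula."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = scale * rng.normal()
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

B, N = 8, 1000  # a handful of SE(3) bases vs. many scene points

# One rigid transform (R, t) per basis, for one target time step.
bases_R = np.stack([small_rotation() for _ in range(B)])   # (B, 3, 3)
bases_t = 0.05 * rng.normal(size=(B, 3))                   # (B, 3)

points = rng.normal(size=(N, 3))                           # canonical points
logits = rng.normal(size=(N, B))                           # learned in practice
weights = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Each point's new position is a convex combination of the B rigid motions:
# N trajectories, but only B transforms plus (N, B) blend weights to store.
moved = np.einsum("nb,bij,nj->ni", weights, bases_R, points) + weights @ bases_t
print(moved.shape)  # (1000, 3)
```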
- DEMOS: Dynamic Environment Motion Synthesis in 3D Scenes via Local Spherical-BEV Perception [54.02566476357383]
We propose the first Dynamic Environment MOtion Synthesis framework (DEMOS), which predicts future motion instantly according to the current scene.
The perceived scene is then used to dynamically update the latent motion for final motion synthesis.
The results show our method significantly outperforms previous works and handles dynamic environments well.
arXiv Detail & Related papers (2024-03-04T05:38:16Z)
- Motion Segmentation from a Moving Monocular Camera [3.115818438802931]
We take advantage of two popular branches of monocular motion segmentation approaches: point trajectory based and optical flow based methods.
We are able to model various complex object motions in different scene structures at once.
Our method shows state-of-the-art performance on the KT3DMoSeg dataset.
arXiv Detail & Related papers (2023-09-24T22:59:05Z)
- 3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Radiance Fields [58.6780687018956]
We present a 3D motion magnification method that can magnify subtle motions from scenes captured by a moving camera.
We represent the scene with time-varying radiance fields and leverage the Eulerian principle for motion magnification.
We evaluate the effectiveness of our method on both synthetic and real-world scenes captured under various camera setups.
arXiv Detail & Related papers (2023-08-07T17:59:59Z)
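The Eulerian principle in the entry above is the one behind classic Eulerian video magnification: observe a fixed location over time, isolate its small temporal variation, and scale it up. A toy one-dimensional sketch follows; the paper applies the idea to a time-varying radiance field rather than raw intensities, and a real implementation would use a temporal bandpass filter where this sketch uses a crude mean baseline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Signal observed at one FIXED location over time (Eulerian viewpoint):
# a constant baseline plus a tiny 5 Hz oscillation and a little noise.
T = 200
t = np.linspace(0.0, 2.0, T)
subtle = 0.01 * np.sin(2.0 * np.pi * 5.0 * t)
signal = 1.0 + subtle + 0.001 * rng.normal(size=T)

alpha = 20.0                     # magnification factor
baseline = signal.mean()         # crude stand-in for a temporal low-pass
magnified = baseline + alpha * (signal - baseline)

print(f"original amplitude ~{np.ptp(subtle) / 2:.3f}, "
      f"magnified ~{np.ptp(magnified - baseline) / 2:.3f}")
```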
- NeuralDiff: Segmenting 3D objects that move in egocentric videos [92.95176458079047]
We study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground.
This task is reminiscent of the classic background subtraction problem, but is significantly harder because all parts of the scene, static and dynamic, generate a large apparent motion.
In particular, we consider egocentric videos and further separate the dynamic component into objects and the actor that observes and moves them.
arXiv Detail & Related papers (2021-10-19T12:51:35Z)
- Motion Representations for Articulated Animation [34.54825980226596]
We propose novel motion representations for animating articulated objects consisting of distinct parts.
In a completely unsupervised manner, our method identifies object parts, tracks them in a driving video, and infers their motions by considering their principal axes.
Our model can animate a variety of objects, surpassing previous methods by a large margin on existing benchmarks.
arXiv Detail & Related papers (2021-04-22T18:53:56Z)
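The principal-axes idea in the entry above can be made concrete on a weighted point set: a part's translation comes from the shift of its weighted mean, and its rotation from aligning the principal axes of its weighted covariance across frames. A simplified 2D sketch with known correspondences and a toy part; the paper's unsupervised, region-based estimator differs in detail:

```python
import numpy as np

rng = np.random.default_rng(3)

def part_motion_from_principal_axes(src, drv, w):
    """Estimate one part's 2D rigid motion from corresponding weighted point
    sets: translation from weighted means, rotation by aligning the principal
    axes of the weighted covariance in each frame. Illustrative stand-in, not
    the paper's exact formulation."""
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ drv
    cs, cd = src - mu_s, drv - mu_d

    def axes(c):
        cov = c.T @ (c * w[:, None])
        _, vecs = np.linalg.eigh(cov)          # columns are principal axes
        return vecs

    U_s, U_d = axes(cs), axes(cd)
    # Eigenvectors are only defined up to sign; use point correspondences to
    # flip each driving-frame axis so it matches the source axis' orientation.
    for k in range(U_s.shape[1]):
        if np.dot(cs @ U_s[:, k], cd @ U_d[:, k]) < 0:
            U_d[:, k] = -U_d[:, k]
    R = U_d @ U_s.T                            # rotate source axes onto driving
    return R, mu_d - R @ mu_s

# Toy check: an elongated "part" rotated by 30 degrees and shifted.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = rng.normal(size=(200, 2)) * np.array([2.0, 0.5])
drv = src @ R_true.T + np.array([1.0, -0.5])
R_est, t_est = part_motion_from_principal_axes(src, drv, np.ones(200))
print(np.round(R_est, 3))   # close to R_true
print(np.round(t_est, 3))   # close to (1.0, -0.5)
```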
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects in scenes acquired with an event-based camera.
The method performs on par with or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Semantic Flow-guided Motion Removal Method for Robust Mapping [7.801798747561309]
We propose a novel motion removal method, leveraging semantic information and optical flow to extract motion regions.
ORB-SLAM2 integrated with the proposed motion removal method achieved the best performance in both indoor and outdoor dynamic environments.
arXiv Detail & Related papers (2020-10-14T08:40:16Z)
- DymSLAM: 4D Dynamic Scene Reconstruction Based on Geometrical Motion Segmentation [22.444657614883084]
DymSLAM is a dynamic stereo visual SLAM system capable of reconstructing a 4D (3D + time) dynamic scene with rigid moving objects.
The proposed system allows the robot to be employed for high-level tasks, such as obstacle avoidance for dynamic objects.
arXiv Detail & Related papers (2020-03-10T08:25:21Z)