4DComplete: Non-Rigid Motion Estimation Beyond the Observable Surface
- URL: http://arxiv.org/abs/2105.01905v1
- Date: Wed, 5 May 2021 07:39:12 GMT
- Title: 4DComplete: Non-Rigid Motion Estimation Beyond the Observable Surface
- Authors: Yang Li, Hikari Takehara, Takafumi Taketomi, Bo Zheng, Matthias Nießner
- Abstract summary: We introduce 4DComplete, a novel data-driven approach that estimates the non-rigid motion for the unobserved geometry.
For network training, we constructed a large-scale synthetic dataset called DeformingThings4D.
- Score: 7.637832293935966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tracking non-rigidly deforming scenes using range sensors has numerous
applications including computer vision, AR/VR, and robotics. However, due to
occlusions and physical limitations of range sensors, existing methods only
handle the visible surface, thus causing discontinuities and incompleteness in
the motion field. To this end, we introduce 4DComplete, a novel data-driven
approach that estimates the non-rigid motion for the unobserved geometry.
4DComplete takes as input a partial shape and motion observation, extracts 4D
time-space embedding, and jointly infers the missing geometry and motion field
using a sparse fully-convolutional network. For network training, we
constructed a large-scale synthetic dataset called DeformingThings4D, which
consists of 1,972 animation sequences spanning 31 different animal or humanoid
categories with dense 4D annotation. Experiments show that 4DComplete 1)
reconstructs high-resolution volumetric shape and motion field from a partial
observation, 2) learns an entangled 4D feature representation that benefits
both shape and motion estimation, 3) yields more accurate and natural
deformation than classic non-rigid priors such as As-Rigid-As-Possible (ARAP)
deformation, and 4) generalizes well to unseen objects in real-world sequences.
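The pipeline sketched in the abstract (partial shape and motion in, completed geometry and motion field out, via one shared feature volume) can be illustrated with a minimal PyTorch stand-in. This is a hedged sketch, not the paper's network: 4DComplete uses sparse fully-convolutional layers and 4D time-space features, while the code below substitutes ordinary dense 3D convolutions to stay self-contained, and every class and variable name is hypothetical.

import torch
import torch.nn as nn

class JointShapeMotionNet(nn.Module):
    """Dense stand-in for a joint geometry-and-motion completion network."""
    def __init__(self, feat: int = 32):
        super().__init__()
        # Input: 1 partial-occupancy channel + 3 partial-motion channels.
        self.encoder = nn.Sequential(
            nn.Conv3d(4, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, 2 * feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(2 * feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.occ_head = nn.Conv3d(feat, 1, 1)     # completed occupancy logits
        self.motion_head = nn.Conv3d(feat, 3, 1)  # per-voxel 3D motion vectors

    def forward(self, partial_occ, partial_motion):
        x = torch.cat([partial_occ, partial_motion], dim=1)  # (B, 4, D, H, W)
        z = self.decoder(self.encoder(x))  # shared feature volume for both heads
        return self.occ_head(z), self.motion_head(z)

# Example: complete a 64^3 partial observation.
net = JointShapeMotionNet()
occ, motion = net(torch.rand(1, 1, 64, 64, 64), torch.zeros(1, 3, 64, 64, 64))

Feeding both heads from a single feature volume mirrors the abstract's point that an entangled representation benefits shape and motion estimation jointly; the real model also exploits temporal features, which this stand-in omits.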
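For context on point 3), the classic As-Rigid-As-Possible prior that 4DComplete is compared against penalizes how far each local neighborhood departs from a rigid motion (Sorkine and Alexa's standard formulation):

E_{\mathrm{ARAP}}(\mathbf{p}') = \sum_{i} \sum_{j \in \mathcal{N}(i)} w_{ij} \left\| (\mathbf{p}'_i - \mathbf{p}'_j) - \mathbf{R}_i (\mathbf{p}_i - \mathbf{p}_j) \right\|^2

where \mathbf{p}_i are rest positions, \mathbf{p}'_i deformed positions, \mathcal{N}(i) the one-ring neighbors of vertex i, w_{ij} per-edge (e.g., cotangent) weights, and \mathbf{R}_i the best-fitting local rotation. The prior only pulls each neighborhood toward rigidity and encodes no category-specific knowledge of how animals or humanoids actually move, which is the gap a learned, data-driven prior can close.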
Related papers
- S4D: Streaming 4D Real-World Reconstruction with Gaussians and 3D Control Points [30.46796069720543]
We introduce a novel approach for streaming 4D real-world reconstruction utilizing discrete 3D control points.
This method physically models local rays and establishes a motion-decoupling coordinate system.
By effectively merging traditional graphics with learnable pipelines, it provides a robust and efficient local 6-degrees-of-freedom (6 DoF) motion representation.
arXiv Detail & Related papers (2024-08-23T12:51:49Z) - 4DRecons: 4D Neural Implicit Deformable Objects Reconstruction from a single RGB-D Camera with Geometrical and Topological Regularizations [35.161541396566705]
4DRecons encodes the output as a 4D neural implicit surface.
We show that 4DRecons can handle large deformations and complex inter-part interactions.
arXiv Detail & Related papers (2024-06-14T16:38:00Z) - Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models [116.31344506738816]
We present a novel framework, Diffusion4D, for efficient and scalable 4D content generation.
We develop a 4D-aware video diffusion model capable of synthesizing orbital views of dynamic 3D assets.
Our method surpasses prior state-of-the-art techniques in terms of generation efficiency and 4D geometry consistency.
arXiv Detail & Related papers (2024-05-26T17:47:34Z) - MagicPose4D: Crafting Articulated Models with Appearance and Motion Control [17.161695123524563]
We propose MagicPose4D, a novel framework for refined control over both appearance and motion in 4D generation.
Unlike traditional methods, MagicPose4D accepts monocular videos as motion prompts, enabling precise and customizable motion generation.
We demonstrate that MagicPose4D significantly improves the accuracy and consistency of 4D content generation, outperforming existing methods in various benchmarks.
arXiv Detail & Related papers (2024-05-22T21:51:01Z) - SC4D: Sparse-Controlled Video-to-4D Generation and Motion Transfer [57.506654943449796]
We propose an efficient, sparse-controlled video-to-4D framework named SC4D that decouples motion and appearance.
Our method surpasses existing methods in both quality and efficiency.
We devise a novel application that seamlessly transfers motion onto a diverse array of 4D entities.
arXiv Detail & Related papers (2024-04-04T18:05:18Z) - Motion2VecSets: 4D Latent Vector Set Diffusion for Non-rigid Shape Reconstruction and Tracking [52.393359791978035]
Motion2VecSets is a 4D diffusion model for dynamic surface reconstruction from point cloud sequences.
We parameterize 4D dynamics with latent sets instead of using global latent codes.
For more temporally coherent object tracking, we synchronously denoise deformation latent sets and exchange information across multiple frames (a schematic sketch of this synchronized denoising appears after this list).
arXiv Detail & Related papers (2024-01-12T15:05:08Z) - LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling [69.56581851211841]
We propose LoRD, a novel local 4D implicit representation for dynamic clothed humans.
Our key insight is to encourage the network to learn the latent codes of a local, part-level representation.
LoRD has a strong capability for representing 4D humans and outperforms state-of-the-art methods in practical applications.
arXiv Detail & Related papers (2022-08-18T03:49:44Z) - Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model [76.64071133839862]
Capturing general deforming scenes from monocular RGB video is crucial for many computer graphics and vision applications.
Our method, Ub4D, handles large deformations, performs shape completion in occluded regions, and can operate on monocular RGB videos directly by using differentiable volume rendering.
Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations.
arXiv Detail & Related papers (2022-06-16T17:59:54Z) - H4D: Human 4D Modeling by Learning Neural Compositional Representation [75.34798886466311]
This work presents a novel framework that can effectively learn a compact and compositional representation for dynamic humans.
A simple yet effective linear motion model is proposed to provide a rough and regularized motion estimation.
Experiments demonstrate that our method is not only effective in recovering dynamic humans with accurate motion and detailed geometry, but also amenable to various 4D human-related tasks.
arXiv Detail & Related papers (2022-03-02T17:10:49Z)
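As noted in the Motion2VecSets entry above, here is a schematic sketch of denoising per-frame latent sets synchronously, with attention exchanging information across frames. This is a hedged illustration of the stated idea only, not the paper's architecture; the denoiser, update rule, and all names and shapes are hypothetical placeholders.

import torch
import torch.nn as nn

T, N, D = 8, 64, 128  # frames, latents per frame, latent dimension

class CrossFrameDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, latents):  # (T, N, D)
        # Flatten all frames into one token sequence so self-attention can
        # exchange information across frames as well as within each latent set.
        tokens = latents.reshape(1, T * N, D)
        return self.backbone(tokens).reshape(T, N, D)  # predicted noise

denoiser = CrossFrameDenoiser()
latents = torch.randn(T, N, D)      # one latent set per frame, initialized as noise
for _ in range(50):                 # toy fixed-step reverse process
    eps = denoiser(latents)
    latents = latents - 0.02 * eps  # placeholder update, not a real diffusion scheduler
# The denoised latent sets would then condition a per-frame deformation decoder.

Using a set of latents per frame rather than one global code keeps the representation local; denoising all frames in one attention pass is one simple way to realize the "synchronous" coupling the entry describes.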