Canonical Consolidation Fields: Reconstructing Dynamic Shapes from Point Clouds
- URL: http://arxiv.org/abs/2406.18582v1
- Date: Wed, 5 Jun 2024 17:07:55 GMT
- Title: Canonical Consolidation Fields: Reconstructing Dynamic Shapes from Point Clouds
- Authors: Miaowei Wang, Changjian Li, Amir Vaxman
- Abstract summary: We present Canonical Consolidation Fields (CanFields), a method for reconstructing a time series of independently-sampled point clouds into a single deforming coherent shape.
We demonstrate the robustness and accuracy of our method on a diverse benchmark of dynamic point clouds.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Canonical Consolidation Fields (CanFields): a method for reconstructing a time series of independently-sampled point clouds into a single deforming coherent shape. Such input often comes from motion capture. Existing methods either couple the geometry and the deformation, which smooths fine details and loses the ability to track moving points, or they track the deformation explicitly, but introduce topological and geometric artifacts. Our novelty lies in the consolidation of the point clouds into a single canonical shape in a way that reduces the effect of noise and outliers, and enables us to overcome missing regions. We simultaneously reconstruct the velocity fields that guide the deformation. This consolidation allows us to retain the high-frequency details of the geometry, while faithfully reproducing the low-frequency deformation. Our architecture comprises simple components, and fits any single input shape without using datasets. We demonstrate the robustness and accuracy of our method on a diverse benchmark of dynamic point clouds, including missing regions, sparse frames, and noise.
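The abstract's core mechanism, one consolidated canonical geometry plus a time-dependent velocity field that carries the deformation, can be illustrated with a short sketch. This is not the authors' implementation: the coordinate-MLP design, widths, Softplus activations, and the explicit Euler integrator below are all illustrative assumptions.

```python
# A minimal sketch, assuming a coordinate-MLP canonical field and a
# time-conditioned velocity field integrated with explicit Euler steps.
# NOT the authors' architecture; all names and sizes are assumptions.
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=128, depth=4):
    layers, d = [], d_in
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.Softplus()]
        d = width
    layers.append(nn.Linear(d, d_out))
    return nn.Sequential(*layers)

class CanonicalField(nn.Module):
    """Implicit field f(x) encoding the single consolidated canonical shape."""
    def __init__(self):
        super().__init__()
        self.f = mlp(3, 1)

    def forward(self, x):              # x: (N, 3) query points
        return self.f(x)               # (N, 1) implicit values

class VelocityField(nn.Module):
    """Low-frequency velocity v(x, t) that guides the deformation."""
    def __init__(self):
        super().__init__()
        self.v = mlp(4, 3, width=64)

    def forward(self, x, t):           # x: (N, 3), t: scalar in [0, 1]
        tt = torch.full_like(x[:, :1], t)
        return self.v(torch.cat([x, tt], dim=-1))

def deform(x0, velocity, t, steps=16):
    """Advect canonical points x0 to time t by integrating the velocity."""
    x, dt = x0, t / steps
    for k in range(steps):
        x = x + dt * velocity(x, k * dt)
    return x
```

Fitting would then minimize, per frame t_i, a Chamfer-style distance between deform(canonical surface samples, velocity, t_i) and the observed cloud, so noise, outliers, and holes in any one frame are compensated by all the others through the shared canonical shape.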
Related papers
- 4DPV: 4D Pet from Videos by Coarse-to-Fine Non-Rigid Radiance Fields (arXiv, 2024-11-15)
We present a coarse-to-fine neural model to recover the camera pose and the 4D reconstruction of an unknown object from multiple RGB sequences in the wild.
Our approach does not rely on any pre-built 3D template, 3D training data, or controlled conditions.
We thoroughly validate the method on challenging scenarios with complex and real-world deformations.
- Deformation-Guided Unsupervised Non-Rigid Shape Matching (arXiv, 2023-11-27)
We present an unsupervised data-driven approach for non-rigid shape matching.
Our approach is particularly robust when matching shapes digitized with 3D scanners.
- Explorable Mesh Deformation Subspaces from Unstructured Generative Models (arXiv, 2023-10-11)
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
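As a toy illustration of the mapping idea in the entry above, one could blend the latent codes of the landmark shapes according to where a query lands in the 2D exploration space. The paper constructs this mapping far more carefully; the Gaussian-RBF blend and every name below are hypothetical stand-ins.

```python
# A toy stand-in, assuming landmark shapes with known latent codes and known
# 2D layout positions: blend the latents with a Gaussian RBF kernel.
import numpy as np

def explore(uv, landmark_uv, landmark_latents, bandwidth=0.3):
    """uv: (2,) query in the 2D exploration space.
    landmark_uv: (L, 2) layout positions; landmark_latents: (L, D) codes.
    Returns a (D,) latent code to feed the pre-trained generator's decoder."""
    d2 = np.sum((landmark_uv - uv) ** 2, axis=1)   # squared distances to landmarks
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))       # Gaussian RBF weights
    w = w / w.sum()                                # normalize to a convex blend
    return w @ landmark_latents
```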
- PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation (arXiv, 2022-07-24)
We introduce a data-driven shape completion approach that focuses on completing geometric details of missing regions of 3D shapes.
Our key insight is to copy and deform patches from the partial input to complete missing regions.
We leverage repeating patterns by retrieving patches from the partial input, and learn global structural priors by using a neural network to guide the retrieval and deformation steps.
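The retrieve-and-deform idea in the entry above can be caricatured without any learning: retrieve the observed patch whose geometry best matches a query region, then rigidly align it onto the hole. PatchRD learns both the retrieval and the deformation with neural networks on voxel patches, so the crude descriptor and the Kabsch rigid alignment below are simplified stand-ins under the assumption that patches are (k, 3) point arrays with a consistent sampling order.

```python
# A learning-free caricature of retrieve-and-deform; all names are assumptions.
import numpy as np

def kabsch(src, dst):
    """Best rigid (R, t) mapping point set src onto dst, via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def complete_region(query_patch, source_patches):
    """Retrieve the closest observed patch and rigidly deform it onto the hole."""
    def desc(p):                                   # crude centered-coordinates descriptor
        return (p - p.mean(axis=0)).flatten()
    best = min(source_patches,
               key=lambda p: np.linalg.norm(desc(p) - desc(query_patch)))
    R, t = kabsch(best, query_patch)
    return best @ R.T + t
```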
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos (arXiv, 2022-03-15)
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
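Since both this entry and the next build their deformation fields on linear blend skinning, a minimal LBS routine is worth spelling out. The skinning weights are given explicitly here; in the papers above they come from learned pose-driven or neural blend weight fields, so the sketch below only illustrates the classical formula x' = Σ_j w_j(x)(R_j x + t_j), not either method.

```python
# Classical linear blend skinning with explicit weights; shapes and names
# are assumptions made for illustration.
import numpy as np

def lbs(points, weights, rotations, translations):
    """points: (N, 3) canonical points; weights: (N, J), rows summing to 1;
    rotations: (J, 3, 3); translations: (J, 3). Returns (N, 3) posed points."""
    # Apply every joint's rigid transform to every point: (J, N, 3)
    per_joint = np.einsum('jab,nb->jna', rotations, points) + translations[:, None, :]
    # Blend per point with the skinning weights: (N, 3)
    return np.einsum('nj,jna->na', weights, per_joint)
```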
- Animatable Neural Radiance Fields for Human Body Modeling (arXiv, 2021-05-06)
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce neural blend weight fields to produce the deformation fields.
Experiments show that our approach significantly outperforms recent human modeling methods.
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes (arXiv, 2021-04-08)
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
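The summary's "iterative root finding" can be made concrete: given a deformed query point x_d, solve d(x_c) = x_d for the canonical point x_c, where d is the forward skinning map. SNARF uses Broyden's method and tracks multiple candidate correspondences; the single-initialization Newton iteration with a finite-difference Jacobian below is a simplified assumption.

```python
# Hedged sketch of inverting a forward skinning map by root finding:
# solve forward_skin(x_c) - x_d = 0. Simplified relative to SNARF.
import numpy as np

def find_canonical(x_deformed, forward_skin, x_init, iters=20, tol=1e-6, eps=1e-4):
    x = x_init.astype(float).copy()
    for _ in range(iters):
        r = forward_skin(x) - x_deformed          # residual of d(x_c) = x_d
        if np.linalg.norm(r) < tol:
            break
        # Finite-difference Jacobian of the forward deformation (3x3)
        J = np.stack([(forward_skin(x + eps * e) - forward_skin(x)) / eps
                      for e in np.eye(3)], axis=1)
        x = x - np.linalg.solve(J, r)             # Newton update
    return x
```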
- Dense Non-Rigid Structure from Motion: A Manifold Viewpoint (arXiv, 2020-06-15)
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
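For orientation, the classical low-rank view of the NRSfM problem stated above stacks the 2D tracks into a measurement matrix and factors it into motion and a small shape basis. The paper's manifold formulation goes well beyond this; the plain SVD factorization and the basis size K below are only a baseline sketch.

```python
# Classical rank-3K factorization view of NRSfM (in the spirit of Bregler et
# al.); K is an assumed hyperparameter and the factors are recovered only up
# to an invertible ambiguity.
import numpy as np

def nrsfm_factorize(W, K=2):
    """W: (2F, P) stacked 2D tracks; returns factors M, S with W ≈ M S."""
    W = W - W.mean(axis=1, keepdims=True)        # remove per-frame translation
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = 3 * K                                    # rank budget for K basis shapes
    M = U[:, :r] * np.sqrt(s[:r])                # motion / coefficient factor
    S = np.sqrt(s[:r])[:, None] * Vt[:r]         # shape basis, up to a GL(3K) ambiguity
    return M, S
```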
This list is automatically generated from the titles and abstracts of the papers on this site.