Canonical Consolidation Fields: Reconstructing Dynamic Shapes from Point Clouds
- URL: http://arxiv.org/abs/2406.18582v1
- Date: Wed, 5 Jun 2024 17:07:55 GMT
- Title: Canonical Consolidation Fields: Reconstructing Dynamic Shapes from Point Clouds
- Authors: Miaowei Wang, Changjian Li, Amir Vaxman
- Abstract summary: We present Canonical Consolidation Fields (CanFields), a method for reconstructing a time series of independently-sampled point clouds into a single deforming coherent shape.
We demonstrate the robustness and accuracy of our method on a diverse benchmark of dynamic point clouds.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Canonical Consolidation Fields (CanFields): a method for reconstructing a time series of independently-sampled point clouds into a single deforming coherent shape. Such input often comes from motion capture. Existing methods either couple the geometry and the deformation, thereby smoothing fine details and losing the ability to track moving points, or they track the deformation explicitly but introduce topological and geometric artifacts. Our novelty lies in consolidating the point clouds into a single canonical shape in a way that reduces the effect of noise and outliers and enables us to overcome missing regions. We simultaneously reconstruct the velocity fields that guide the deformation. This consolidation allows us to retain the high-frequency details of the geometry while faithfully reproducing the low-frequency deformation. Our architecture comprises simple components and fits any single input shape without using datasets. We demonstrate the robustness and accuracy of our method on a diverse benchmark of dynamic point clouds, including missing regions, sparse frames, and noise.
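The consolidation idea, a single canonical shape advected through time by a reconstructed velocity field, can be sketched as follows. This is a minimal illustration and not the paper's implementation: the velocity field here is a hand-written rigid rotation standing in for the learned field, `deform` uses plain forward-Euler integration, and all names are illustrative.

```python
import numpy as np

def velocity(points, t):
    """Stand-in for the reconstructed velocity field: a rigid rotation
    about the z-axis. CanFields would learn this field from the data."""
    omega = 1.0  # angular speed in rad/s
    return np.stack([-omega * points[:, 1],
                     omega * points[:, 0],
                     np.zeros(len(points))], axis=1)

def deform(canonical, t_end, n_steps=100):
    """Advect the canonical shape through the velocity field with
    forward-Euler steps, producing the deformed shape at time t_end."""
    x = canonical.astype(float).copy()
    dt = t_end / n_steps
    for i in range(n_steps):
        x = x + dt * velocity(x, i * dt)
    return x

# A two-point "canonical shape" rotated a quarter turn.
canonical = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.5]])
frame = deform(canonical, t_end=np.pi / 2)
```

Because every frame is produced by advecting the same canonical points, correspondences across frames come for free, which is what makes tracking moving points possible.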
Related papers
- Non-Rigid Shape Registration via Deep Functional Maps Prior [1.9249120068573227]
We propose a learning-based framework for non-rigid shape registration without correspondence supervision.
We deform the source mesh toward the target point cloud, guided by correspondences induced by high-dimensional embeddings.
Our pipeline achieves state-of-the-art results on several benchmarks of non-rigid point cloud matching.
arXiv Detail & Related papers (2023-11-08T06:52:57Z) - PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation [59.70430570779819]
We introduce a data-driven shape completion approach that focuses on completing geometric details of missing regions of 3D shapes.
Our key insight is to copy and deform patches from the partial input to complete missing regions.
We leverage repeating patterns by retrieving patches from the partial input, and learn global structural priors by using a neural network to guide the retrieval and deformation steps.
arXiv Detail & Related papers (2022-07-24T18:59:09Z) - IDEA-Net: Dynamic 3D Point Cloud Interpolation via Deep Embedding Alignment [58.8330387551499]
We formulate the problem as the estimation of point-wise trajectories (i.e., smooth curves).
We propose IDEA-Net, an end-to-end deep learning framework, which disentangles the problem under the assistance of the explicitly learned temporal consistency.
We demonstrate the effectiveness of our method on various point cloud sequences and observe large improvement over state-of-the-art methods both quantitatively and visually.
arXiv Detail & Related papers (2022-03-22T10:14:08Z) - Implicit field supervision for robust non-rigid shape matching [29.7672368261038]
Establishing a correspondence between two non-rigidly deforming shapes is one of the most fundamental problems in visual computing.
We introduce an approach based on the auto-decoder framework that learns a continuous shape-wise deformation field over a fixed template.
Our method is remarkably robust in the presence of strong artefacts and can be generalised to arbitrary shape categories.
arXiv Detail & Related papers (2022-03-15T07:22:52Z) - EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape Generation [19.817166425038753]
This paper tackles the problem of parts-aware point cloud generation.
A simple modification of the Variational Auto-Encoder yields a joint model of the point cloud and its parts.
In addition to the flexibility afforded by our disentangled representation, the inductive bias introduced by our joint modelling approach yields the state-of-the-art experimental results on the ShapeNet dataset.
arXiv Detail & Related papers (2021-10-13T12:38:01Z) - Weakly-supervised 3D Shape Completion in the Wild [91.04095516680438]
We address the problem of learning 3D complete shape from unaligned and real-world partial point clouds.
We propose a weakly-supervised method to estimate both 3D canonical shape and 6-DoF pose for alignment, given multiple partial observations.
Experiments on both synthetic and real data show that it is feasible and promising to learn 3D shape completion through large-scale data without shape and pose supervision.
arXiv Detail & Related papers (2020-08-20T17:53:42Z) - Learning non-rigid surface reconstruction from spatio-temporal image patches [0.0]
We present a method to reconstruct a dense spatio-temporal depth map of a deformable object from a video sequence.
Depth is estimated locally on spatio-temporal patches of the video, and the full depth video of the entire shape is recovered by combining them.
We tested our method on both synthetic and Kinect data and observed that the reconstruction error is significantly lower than that of other approaches, such as conventional non-rigid structure from motion.
arXiv Detail & Related papers (2020-06-18T20:25:15Z) - Dense Non-Rigid Structure from Motion: A Manifold Viewpoint [162.88686222340962]
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z) - Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z) - Shape-Oriented Convolution Neural Network for Point Cloud Analysis [59.405388577930616]
The point cloud is a principal data structure adopted for encoding 3D geometric information.
A shape-oriented message-passing scheme dubbed ShapeConv is proposed to focus on representation learning of the underlying shape formed by each local neighborhood of points.
arXiv Detail & Related papers (2020-04-20T16:11:51Z)
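The local, shape-oriented aggregation described in the last entry can be sketched as follows. This is a hand-rolled nearest-neighbour pass with fixed Gaussian weights, whereas ShapeConv learns its aggregation; the function name and weighting scheme here are illustrative.

```python
import numpy as np

def shape_conv(points, feats, k=4):
    """Toy message-passing step in the spirit of ShapeConv: each point
    gathers its k nearest neighbours, centres them on their centroid
    (so the local *shape* matters, not the absolute position), and
    averages the neighbours' features weighted by proximity."""
    # Pairwise squared distances, then the k nearest neighbours per point.
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbrs = d.argsort(axis=1)[:, :k]  # includes the point itself
    out = np.empty_like(feats)
    for i, nn in enumerate(nbrs):
        local = points[nn] - points[nn].mean(axis=0)  # centre the local patch
        w = np.exp(-(local ** 2).sum(axis=1))         # closer to centroid -> larger weight
        out[i] = (w[:, None] * feats[nn]).sum(axis=0) / w.sum()
    return out

# Example: constant features pass through a weighted average unchanged.
pts = np.random.default_rng(0).normal(size=(12, 3))
feats = np.ones((12, 2))
out = shape_conv(pts, feats, k=4)
```

Centring each neighbourhood before weighting is what makes the operation translation-invariant, the property the entry's "underlying shape" phrasing refers to.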
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.