The Whole Is Greater Than the Sum of Its Nonrigid Parts
- URL: http://arxiv.org/abs/2001.09650v1
- Date: Mon, 27 Jan 2020 09:48:01 GMT
- Title: The Whole Is Greater Than the Sum of Its Nonrigid Parts
- Authors: Oshri Halimi, Ido Imanuel, Or Litany, Giovanni Trappolini, Emanuele Rodolà, Leonidas Guibas, Ron Kimmel
- Abstract summary: We claim that by observing part of an object that was previously acquired as a whole, one can handle both partial matching and shape completion.
We address the problem of matching the part to the whole while simultaneously reconstructing the new pose from its partial observation.
We demonstrate the practical effectiveness of our model in the applications of single-view deformable shape completion and dense shape correspondence.
- Score: 19.003942423980448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: According to Aristotle, a philosopher in Ancient Greece, "the whole is
greater than the sum of its parts". This observation was adopted to explain
human perception by the Gestalt psychology school of thought in the twentieth
century. Here, we claim that by observing part of an object that was
previously acquired as a whole, one can deal with both partial matching and
shape completion in a holistic manner. More specifically, given the geometry of a
full, articulated object in a given pose, as well as a partial scan of the same
object in a different pose, we address the problem of matching the part to the
whole while simultaneously reconstructing the new pose from its partial
observation. Our approach is data-driven, and takes the form of a Siamese
autoencoder without the requirement of a consistent vertex labeling at
inference time; as such, it can be used on unorganized point clouds as well as
on triangle meshes. We demonstrate the practical effectiveness of our model in
the applications of single-view deformable shape completion and dense shape
correspondence, both on synthetic and real-world geometric data, where we
outperform prior work on these tasks by a large margin.
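As a rough illustration of the architecture described in the abstract, the following PyTorch sketch shows a Siamese point-cloud autoencoder: one shared encoder embeds both the full shape and the partial scan, and a decoder reconstructs the full shape in the pose of the partial input. All class names, layer sizes, and the latent fusion are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Per-point MLP followed by a max pool. The pooling is order-invariant,
    so the encoder works on unorganized point clouds with no consistent
    vertex labeling, as the abstract requires."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, pts):                       # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values    # (B, latent_dim)

class SiameseCompletion(nn.Module):
    """The same (Siamese) encoder sees the full shape in a reference pose
    and the partial scan in a new pose; the decoder outputs the complete
    shape deformed into the pose of the partial observation."""
    def __init__(self, latent_dim=512, n_out=2048):
        super().__init__()
        self.encoder = PointEncoder(latent_dim)   # shared weights
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, n_out * 3),
        )
        self.n_out = n_out

    def forward(self, full_pts, partial_pts):
        z_full = self.encoder(full_pts)       # geometry / identity code
        z_part = self.encoder(partial_pts)    # pose evidence from the scan
        z = torch.cat([z_full, z_part], dim=-1)
        return self.decoder(z).view(-1, self.n_out, 3)
```

Training such a model would minimize a reconstruction loss (for example, Chamfer Distance) between the decoded points and the ground-truth full shape in the new pose.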
Related papers
- ShapeMatcher: Self-Supervised Joint Shape Canonicalization,
Segmentation, Retrieval and Deformation [47.94499636697971]
We present ShapeMatcher, a unified self-supervised learning framework for joint shape canonicalization, segmentation, retrieval, and deformation.
The key insight of ShapeMatcher is the simultaneous training of these four highly associated processes: canonicalization, segmentation, retrieval, and deformation.
arXiv Detail & Related papers (2023-11-18T15:44:57Z)
- U-RED: Unsupervised 3D Shape Retrieval and Deformation for Partial Point Clouds [84.32525852378525]
We propose U-RED, an Unsupervised shape REtrieval and Deformation pipeline.
It takes an arbitrary object observation as input, typically captured by RGB images or scans, and jointly retrieves and deforms the geometrically similar CAD models.
We show that U-RED surpasses existing state-of-the-art approaches by 47.3%, 16.7%, and 31.6%, respectively, as measured by Chamfer Distance.
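For reference, the symmetric Chamfer Distance used as the metric above averages nearest-neighbor distances between two point sets in both directions; a minimal NumPy version (not U-RED's implementation):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3):
    mean squared nearest-neighbor distance, taken in both directions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```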
arXiv Detail & Related papers (2023-08-11T20:56:05Z)
- 3D Shape Perception Integrates Intuitive Physics and Analysis-by-Synthesis [39.933479524063976]
We propose a framework for 3D shape perception that explains perception in both typical and atypical cases.
Our results suggest that bottom-up deep neural network models are not fully adequate accounts of human shape perception.
arXiv Detail & Related papers (2023-01-09T23:11:41Z)
- Single-view 3D Body and Cloth Reconstruction under Complex Poses [37.86174829271747]
We extend existing implicit function-based models to deal with images of humans with arbitrary poses and self-occluded limbs.
We learn an implicit function that maps the input image to a 3D body shape with a low level of detail.
We then learn a displacement map, conditioned on the smoothed surface, which encodes the high-frequency details of the clothes and body.
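The two-stage idea can be sketched as follows; all names and dimensions are hypothetical, not the paper's code. A coarse implicit function maps a query point plus pixel-aligned image features to occupancy, and a second head predicts a displacement along the smooth surface normal to restore high-frequency cloth and body detail.

```python
import torch
import torch.nn as nn

class CoarseImplicit(nn.Module):
    """Stage 1: maps a 3D query point plus image features to an occupancy
    probability, yielding a smooth, low-detail body surface."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),   # inside/outside probability
        )

    def forward(self, xyz, feat):              # (B, 3), (B, feat_dim)
        return self.net(torch.cat([xyz, feat], dim=-1))

class DisplacementHead(nn.Module):
    """Stage 2: conditioned on a point of the smoothed surface and its
    normal, predicts a scalar offset encoding high-frequency detail."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, surf_xyz, surf_normal, feat):
        d = self.net(torch.cat([surf_xyz, surf_normal, feat], dim=-1))
        return surf_xyz + d * surf_normal      # detailed surface point
```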
arXiv Detail & Related papers (2022-05-09T07:34:06Z)
- Towards Self-Supervised Category-Level Object Pose and Size Estimation [121.28537953301951]
This work presents a self-supervised framework for category-level object pose and size estimation from a single depth image.
We leverage the geometric consistency residing in point clouds of the same shape for self-supervision.
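A hypothetical sketch of that self-supervision signal (not the paper's code): the observed points, mapped back through the predicted pose and scale, should agree with the predicted canonical-frame shape.

```python
import torch

def consistency_loss(partial, R, t, s, canonical):
    """partial: (N, 3) observed points; R: (3, 3) rotation; t: (3,) translation;
    s: scalar scale; canonical: (M, 3) predicted canonical-frame shape.
    Map the observation into the canonical frame and penalize its one-sided
    Chamfer Distance to the canonical prediction."""
    mapped = ((partial - t) / s) @ R                       # (N, 3) in canonical frame
    d2 = ((mapped[:, None] - canonical[None]) ** 2).sum(-1)
    return d2.min(dim=1).values.mean()
```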
arXiv Detail & Related papers (2022-03-06T06:02:30Z)
- A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation [62.517760545209065]
We introduce Articulated Signed Distance Functions (A-SDF) to represent articulated shapes with a disentangled latent space.
We demonstrate that our model generalizes well to out-of-distribution and unseen data, e.g., partial point clouds and real-world depth images.
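The disentanglement can be pictured as an SDF network conditioned on two separate codes, one fixed per instance (shape identity) and one varying with pose (articulation). The sketch below uses hypothetical dimensions, not the released A-SDF code.

```python
import torch
import torch.nn as nn

class ArticulatedSDF(nn.Module):
    """Signed distance at a 3D query point, conditioned on a shape code that
    is fixed per instance and an articulation code that varies with pose.
    Keeping the two codes separate is what disentangles identity from
    articulation in the latent space."""
    def __init__(self, shape_dim=256, art_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + shape_dim + art_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1),                 # signed distance value
        )

    def forward(self, xyz, shape_code, art_code):
        return self.net(torch.cat([xyz, shape_code, art_code], dim=-1))
```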
arXiv Detail & Related papers (2021-04-15T17:53:54Z)
- Cycle4Completion: Unpaired Point Cloud Completion using Cycle Transformation with Missing Region Coding [57.23678891670394]
We propose two simultaneous cycle transformations between the latent spaces of complete shapes and incomplete ones.
We show that our model with the learned bidirectional geometry correspondence outperforms state-of-the-art unpaired completion methods.
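A minimal sketch of the idea, under the assumption that both shape families are already encoded into latent vectors: two learned transformations map between the incomplete-shape and complete-shape latent spaces, and a round trip in either direction is trained to be the identity. Names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

latent_dim = 256
f_ic = nn.Linear(latent_dim, latent_dim)   # incomplete -> complete codes
f_ci = nn.Linear(latent_dim, latent_dim)   # complete -> incomplete codes

def cycle_loss(z_incomplete, z_complete):
    """A round trip through both transformations should be the identity,
    which couples the two latent spaces without paired supervision."""
    loss_i = ((f_ci(f_ic(z_incomplete)) - z_incomplete) ** 2).mean()
    loss_c = ((f_ic(f_ci(z_complete)) - z_complete) ** 2).mean()
    return loss_i + loss_c
```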
arXiv Detail & Related papers (2021-03-14T03:52:53Z)
- Continuous Surface Embeddings [76.86259029442624]
We focus on the task of learning and representing dense correspondences in deformable object categories.
We propose a new, learnable image-based representation of dense correspondences.
We demonstrate that the proposed approach performs on par or better than the state-of-the-art methods for dense pose estimation for humans.
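In this kind of embedding-based formulation, dense correspondence reduces to nearest-neighbor search: each foreground pixel's predicted embedding is matched against learned per-vertex embeddings of a template surface. A simplified sketch with illustrative names:

```python
import torch

def match_pixels_to_surface(pixel_emb, vertex_emb):
    """pixel_emb: (P, D) embeddings predicted for P foreground pixels;
    vertex_emb: (V, D) learned embeddings of the template's V vertices.
    Returns, for each pixel, the index of its corresponding vertex."""
    d2 = torch.cdist(pixel_emb, vertex_emb)   # (P, V) pairwise distances
    return d2.argmin(dim=1)                   # dense pixel-to-vertex map
```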
arXiv Detail & Related papers (2020-11-24T22:52:15Z)
- Weakly-supervised 3D Shape Completion in the Wild [91.04095516680438]
We address the problem of learning complete 3D shapes from unaligned, real-world partial point clouds.
We propose a weakly-supervised method to estimate both 3D canonical shape and 6-DoF pose for alignment, given multiple partial observations.
Experiments on both synthetic and real data show that it is feasible and promising to learn 3D shape completion through large-scale data without shape and pose supervision.
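The weak supervision can be pictured as follows (a hypothetical sketch, not the paper's method): the predicted canonical shape, posed by the predicted 6-DoF transform, should explain every point of each partial observation.

```python
import torch

def observation_loss(canonical, R, t, partial):
    """canonical: (M, 3) predicted complete shape in the canonical frame;
    R: (3, 3), t: (3,) predicted 6-DoF pose for one observation;
    partial: (N, 3) the observed partial point cloud.
    Every observed point should lie near the posed complete shape."""
    posed = canonical @ R.T + t                        # canonical -> camera frame
    d2 = ((partial[:, None] - posed[None]) ** 2).sum(-1)
    return d2.min(dim=1).values.mean()                 # one-sided Chamfer
```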
arXiv Detail & Related papers (2020-08-20T17:53:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.