Deformation-Aware 3D Model Embedding and Retrieval
- URL: http://arxiv.org/abs/2004.01228v3
- Date: Fri, 31 Jul 2020 05:10:25 GMT
- Title: Deformation-Aware 3D Model Embedding and Retrieval
- Authors: Mikaela Angelina Uy and Jingwei Huang and Minhyuk Sung and Tolga
Birdal and Leonidas Guibas
- Abstract summary: We introduce a new problem of retrieving 3D models that are deformable to a given query shape.
We propose a novel deep embedding approach that learns the asymmetric relationships by leveraging location-dependent egocentric distance fields.
- Score: 37.538109895618156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new problem of retrieving 3D models that are deformable to a
given query shape and present a novel deep deformation-aware embedding to solve
this retrieval task. 3D model retrieval is a fundamental operation for
recovering a clean and complete 3D model from a noisy and partial 3D scan.
However, given a finite collection of 3D shapes, even the closest model to a
query may not be satisfactory. This motivates us to apply 3D model deformation
techniques to adapt the retrieved model so as to better fit the query. Yet,
most 3D deformation techniques enforce restrictions to preserve important
features of the original model, and these restrictions prevent the deformed
model from fitting the query perfectly. This gap between the deformed model and the query
induces asymmetric relationships among the models, which cannot be handled by
typical metric learning techniques. Thus, to retrieve the best models for
fitting, we propose a novel deep embedding approach that learns the asymmetric
relationships by leveraging location-dependent egocentric distance fields. We
also propose two strategies for training the embedding network. We demonstrate
that both of these approaches outperform other baselines in our experiments
with both synthetic and real data. Our project page can be found at
https://deformscan2cad.github.io/.
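To make the asymmetric-distance idea concrete, below is a minimal, hypothetical sketch of how a deformation-aware embedding with per-model egocentric distance fields could be wired up and used for retrieval. Everything here is an illustrative assumption rather than the authors' implementation: the module and function names (DeformationAwareEmbedding, fitting_gap), the use of PyTorch, the MLP stand-in for a shape encoder, and the simplified isotropic center-plus-scale field.

```python
# Hypothetical sketch only: each database model m gets (i) an embedding
# location and (ii) its own "egocentric" distance field over the embedding
# space. The predicted post-deformation fitting gap from a query q to m is
# the value of m's field at q's embedded location, which is asymmetric by
# construction (m's field at q != q's field at m). Names and the field
# parameterization are assumptions, not the paper's actual formulation.

import torch
import torch.nn as nn


class DeformationAwareEmbedding(nn.Module):
    def __init__(self, feat_dim=1024, embed_dim=256):
        super().__init__()
        # A real system would encode a point cloud or scan into `feat_dim`
        # global features; here we simply assume such features as input.
        self.to_location = nn.Linear(feat_dim, embed_dim)   # embedding location
        self.to_field = nn.Sequential(                       # per-shape field params
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim + 1),                    # field center + log scale
        )

    def forward(self, feats):
        loc = self.to_location(feats)            # where the shape sits in the embedding
        field = self.to_field(feats)             # its egocentric distance-field parameters
        center, log_scale = field[..., :-1], field[..., -1]
        return loc, center, log_scale


def fitting_gap(query_loc, model_center, model_log_scale):
    # Asymmetric "distance": evaluate the *database model's* field at the
    # *query's* location; swapping the roles generally gives a different value.
    return torch.exp(-model_log_scale) * torch.linalg.norm(
        query_loc - model_center, dim=-1
    )


if __name__ == "__main__":
    net = DeformationAwareEmbedding()
    query_feat = torch.randn(1, 1024)      # stand-in for an encoded query scan
    db_feats = torch.randn(50, 1024)       # stand-in for 50 encoded database models
    q_loc, _, _ = net(query_feat)
    _, db_center, db_log_scale = net(db_feats)
    gaps = fitting_gap(q_loc, db_center, db_log_scale)   # (50,) predicted fitting gaps
    print("retrieve database model", gaps.argmin().item())
```

The one property this sketch tries to preserve from the abstract is asymmetry: the gap from a query to a database model is the model's field evaluated at the query's location, so it cannot be captured by an ordinary symmetric metric embedding.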
Related papers
- HoloDiffusion: Training a 3D Diffusion Model using 2D Images [71.1144397510333]
We introduce a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision.
We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity to existing approaches for 3D generative modeling.
arXiv Detail & Related papers (2023-03-29T07:35:56Z) - Disentangled3D: Learning a 3D Generative Model with Disentangled
Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z) - Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape
Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z) - Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z) - Joint Learning of 3D Shape Retrieval and Deformation [43.359465703912676]
We propose a novel technique for producing high-quality 3D models that match a given target object image or scan.
Our method is based on retrieving an existing shape from a database of 3D models and then deforming its parts to match the target shape.
arXiv Detail & Related papers (2021-01-19T22:49:41Z) - Building 3D Morphable Models from a Single Scan [3.472931603805115]
We propose a method for constructing generative models of 3D objects from a single 3D mesh.
Our method produces a 3D morphable model that represents shape and albedo in terms of Gaussian processes.
We show that our approach can be used to perform face recognition using only a single 3D scan.
arXiv Detail & Related papers (2020-11-24T23:08:14Z) - 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous
Image Data [77.57798334776353]
We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views.
We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses.
We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans.
arXiv Detail & Related papers (2020-11-02T13:55:31Z) - Towards General Purpose Geometry-Preserving Single-View Depth Estimation [1.9573380763700712]
Single-view depth estimation (SVDE) plays a crucial role in scene understanding for AR applications, 3D modeling, and robotics.
Recent works have shown that a successful solution strongly relies on the diversity and volume of training data.
Our work shows that a model trained on this data along with conventional datasets can gain accuracy while predicting correct scene geometry.
arXiv Detail & Related papers (2020-09-25T20:06:13Z) - Shape Prior Deformation for Categorical 6D Object Pose and Size
Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
arXiv Detail & Related papers (2020-07-16T16:45:05Z) - PolyGen: An Autoregressive Generative Model of 3D Meshes [22.860421649320287]
We present an approach which models the mesh directly using a Transformer-based architecture.
Our model can condition on a range of inputs, including object classes, voxels, and images.
We show that the model is capable of producing high-quality, usable meshes, and establish log-likelihood benchmarks for the mesh-modelling task.
arXiv Detail & Related papers (2020-02-23T17:16:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.