Extending DeepSDF for automatic 3D shape retrieval and similarity transform estimation
- URL: http://arxiv.org/abs/2004.09048v3
- Date: Mon, 26 Oct 2020 05:01:44 GMT
- Title: Extending DeepSDF for automatic 3D shape retrieval and similarity transform estimation
- Authors: Oladapo Afolabi, Allen Y. Yang, S. Shankar Sastry
- Abstract summary: Recent advances in computer graphics and computer vision have found successful applications of deep neural network models for 3D shapes based on signed distance functions (SDFs). However, such models require query shapes in the same canonical scale and pose as seen during training. We present a formulation that overcomes this issue by jointly estimating shape and similarity transform parameters.
- Score: 3.8213230386700614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in computer graphics and computer vision have found
successful application of deep neural network models for 3D shapes based on
signed distance functions (SDFs) that are useful for shape representation,
retrieval, and completion. However, this approach has been limited by the need
to have query shapes in the same canonical scale and pose as those observed
during training, restricting its effectiveness on real-world scenes. We present
a formulation to overcome this issue by jointly estimating shape and similarity
transform parameters. We conduct experiments to demonstrate the effectiveness
of this formulation on synthetic and real datasets and report favorable
comparisons to the state of the art. Finally, we also emphasize the viability
of this approach as a form of 3D model compression.
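The abstract's central idea, jointly estimating a DeepSDF latent shape code and a similarity transform (scale, rotation, translation) so that query points need not be pre-aligned to the canonical training frame, can be pictured as a small optimization loop over both sets of parameters. The PyTorch sketch below is a minimal illustration under assumed choices (an untrained stand-in decoder, axis-angle rotation, a log-scale parameterization, and an L1 surface loss with latent regularization); it is not the authors' implementation or loss design.

```python
import torch
import torch.nn as nn

latent_dim = 256

# Stand-in for a pretrained DeepSDF-style decoder: (latent code, xyz) -> SDF value.
# In practice this would be loaded from a trained model; here it is untrained and
# exists only so the sketch runs end to end.
decoder = nn.Sequential(
    nn.Linear(latent_dim + 3, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 1),
)

def axis_angle_to_matrix(w):
    """Rodrigues' formula: axis-angle vector w of shape (3,) -> 3x3 rotation matrix."""
    theta = w.norm() + 1e-8
    k = w / theta
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([
        torch.stack([zero, -k[2],  k[1]]),
        torch.stack([k[2],  zero, -k[0]]),
        torch.stack([-k[1], k[0],  zero]),
    ])
    eye = torch.eye(3, dtype=w.dtype)
    return eye + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def fit(points_world, decoder, steps=500, lr=1e-2):
    """Jointly optimize a latent shape code and a similarity transform so that
    observed surface points fall on the decoder's zero level set."""
    n = points_world.shape[0]
    z = torch.zeros(latent_dim, requires_grad=True)       # shape latent code
    log_s = torch.zeros(1, requires_grad=True)            # log-scale keeps s > 0
    w = (1e-3 * torch.randn(3)).requires_grad_(True)      # rotation (axis-angle); small init avoids the zero-norm gradient singularity
    t = torch.zeros(3, requires_grad=True)                # translation
    opt = torch.optim.Adam([z, log_s, w, t], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        R = axis_angle_to_matrix(w)
        s = log_s.exp()
        # Inverse similarity transform: map world-frame observations into the
        # canonical frame the decoder was trained in.
        x_canon = ((points_world - t) / s) @ R
        inputs = torch.cat([z.unsqueeze(0).expand(n, -1), x_canon], dim=-1)
        sdf_pred = decoder(inputs)
        # Surface samples should have zero signed distance; the latent code is
        # regularized as in DeepSDF's auto-decoder formulation.
        loss = sdf_pred.abs().mean() + 1e-4 * z.pow(2).sum()
        loss.backward()
        opt.step()

    return z.detach(), log_s.exp().detach(), axis_angle_to_matrix(w).detach(), t.detach()

# Usage with placeholder data; real inputs would be surface samples from a scan.
points = torch.randn(1000, 3)
z_hat, s_hat, R_hat, t_hat = fit(points, decoder)
```

The recovered latent code can then be used for retrieval against a library of training codes, while the estimated scale, rotation, and translation place the canonical shape back into the observed scene; the specific retrieval metric and loss terms above are assumptions for illustration.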
Related papers
- VortSDF: 3D Modeling with Centroidal Voronoi Tesselation on Signed Distance Field [5.573454319150408]
We introduce a volumetric optimization framework that combines explicit SDF fields with a shallow color network to estimate 3D shape properties over tetrahedral grids.
Experimental results with Chamfer statistics validate this approach, showing unprecedented reconstruction quality on scenarios such as objects, open scenes, and humans.
arXiv Detail & Related papers (2024-07-29T09:46:39Z)
- Zero123-6D: Zero-shot Novel View Synthesis for RGB Category-level 6D Pose Estimation [66.3814684757376]
This work presents Zero123-6D, the first work to demonstrate the utility of diffusion-model-based novel-view synthesizers in enhancing RGB category-level 6D pose estimation.
The method reduces data requirements, removes the need for depth information in the zero-shot category-level 6D pose estimation task, and improves performance, as demonstrated quantitatively on the CO3D dataset.
arXiv Detail & Related papers (2024-03-21T10:38:18Z)
- SC-Diff: 3D Shape Completion with Latent Diffusion Models [4.913210912019975]
This paper introduces a 3D shape completion approach using a 3D latent diffusion model optimized for completing shapes.
Our method combines image-based conditioning through cross-attention with spatial conditioning through the integration of 3D features from captured partial scans.
arXiv Detail & Related papers (2024-03-19T06:01:11Z)
- Volumetric Semantically Consistent 3D Panoptic Mapping [77.13446499924977]
We introduce an online 2D-to-3D semantic instance mapping algorithm aimed at generating semantic 3D maps suitable for autonomous agents in unstructured environments.
It introduces novel ways of integrating semantic prediction confidence during mapping, producing semantically and instance-consistent 3D regions.
The proposed method achieves accuracy superior to the state of the art on public large-scale datasets, improving on a number of widely used metrics.
arXiv Detail & Related papers (2023-09-26T08:03:10Z)
- Towards Confidence-guided Shape Completion for Robotic Applications [6.940242990198]
Deep learning has begun gaining traction as an effective means of inferring a complete 3D object representation from partial visual data.
We propose an object shape completion method based on an implicit 3D representation that provides a confidence value for each reconstructed point.
We experimentally validate our approach by comparing reconstructed shapes with ground truths, and by deploying our shape completion algorithm in a robotic grasping pipeline.
arXiv Detail & Related papers (2022-09-09T13:48:24Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable, with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strengths of both neural-network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Synthetic Training for Accurate 3D Human Pose and Shape Estimation in the Wild [27.14060158187953]
This paper addresses the problem of monocular 3D human shape and pose estimation from an RGB image.
We propose STRAPS, a system that uses proxy representations, such as silhouettes and 2D joints, as inputs to a shape and pose regression neural network.
We show that STRAPS outperforms other state-of-the-art methods on SSP-3D in terms of shape prediction accuracy.
arXiv Detail & Related papers (2020-09-21T16:39:04Z)
- Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
arXiv Detail & Related papers (2020-07-16T16:45:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.