3D Reconstruction of Novel Object Shapes from Single Images
- URL: http://arxiv.org/abs/2006.07752v4
- Date: Wed, 1 Sep 2021 21:11:12 GMT
- Title: 3D Reconstruction of Novel Object Shapes from Single Images
- Authors: Anh Thai, Stefan Stojanov, Vijay Upadhya, James M. Rehg
- Abstract summary: We show that our proposed SDFNet achieves state-of-the-art performance on seen and unseen shapes.
We provide the first large-scale evaluation of single-image shape reconstruction generalizing to unseen objects.
- Score: 23.016517962380323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately predicting the 3D shape of an arbitrary object in any pose from a
single image is a key goal of computer vision research. This is challenging
because it requires a model to learn a representation that can infer both the
visible and occluded portions of any object from a limited training set, and a
training set that covers all possible object shapes is inherently infeasible.
Such learning-based approaches are therefore vulnerable to overfitting, and
implementing them successfully depends on both the architecture design
and the training approach. We present an extensive investigation of factors
specific to architecture design, training, experiment design, and evaluation
that influence reconstruction performance and measurement. We show that our
proposed SDFNet achieves state-of-the-art performance on seen and unseen shapes
relative to the existing methods GenRe and OccNet. We provide the first large-scale
evaluation of single-image shape reconstruction generalizing to unseen objects. The source
code, data and trained models can be found on
https://github.com/rehg-lab/3DShapeGen.
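
As a rough illustration of the signed-distance-function representation behind SDFNet, the sketch below conditions an SDF decoder on image features and meshes the zero level set with marching cubes. The architecture and names are placeholders, not the released implementation (see the repository above for that).

```python
import torch
import torch.nn as nn
from skimage.measure import marching_cubes

class SDFNetSketch(nn.Module):
    """Placeholder image encoder + conditional SDF decoder."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(   # stand-in for a CNN image encoder
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.decoder = nn.Sequential(   # maps (feature, xyz) -> signed distance
            nn.Linear(feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, image, points):
        feat = self.encoder(image)                  # (1, feat_dim)
        feat = feat.expand(points.shape[0], -1)     # tile per query point
        return self.decoder(torch.cat([feat, points], dim=-1)).squeeze(-1)

def reconstruct(model, image, resolution=64):
    """Evaluate the SDF on a dense grid and mesh its zero level set."""
    lin = torch.linspace(-1.0, 1.0, resolution)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1)
    with torch.no_grad():
        sdf = model(image, grid.reshape(-1, 3))
    sdf = sdf.reshape(resolution, resolution, resolution)
    verts, faces, _, _ = marching_cubes(sdf.numpy(), level=0.0)
    return verts, faces

# With trained weights, a mesh comes from a single RGB image tensor:
# verts, faces = reconstruct(SDFNetSketch(), image)   # image: (1, 3, 64, 64)
```

Because the SDF is queried at arbitrary continuous points, the extraction resolution is chosen at inference time, independently of the network.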
Related papers
- EasyHOI: Unleashing the Power of Large Models for Reconstructing Hand-Object Interactions in the Wild [79.71523320368388]
Our work aims to reconstruct hand-object interactions from a single-view image.
We first design a novel pipeline to estimate the underlying hand pose and object shape.
With the initial reconstruction, we employ a prior-guided optimization scheme.
arXiv Detail & Related papers (2024-11-21T16:33:35Z)
- Uncertainty-aware 3D Object-Level Mapping with Deep Shape Priors [15.34487368683311]
We propose a framework that can reconstruct high-quality object-level maps for unknown objects.
Our approach takes multiple RGB-D images as input and outputs dense 3D shapes and 9-DoF poses for detected objects.
We derive a probabilistic formulation that propagates shape and pose uncertainty through two novel loss functions.
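
The paper derives its own two loss functions; as a hedged illustration of the generic pattern such probabilistic formulations build on, a heteroscedastic Gaussian negative log-likelihood lets predicted uncertainty down-weight unreliable shape or pose residuals:

```python
import torch

def uncertainty_weighted_loss(pred, log_var, target):
    """Negative log-likelihood of target under N(pred, exp(log_var)).

    High-variance (uncertain) residuals are down-weighted, so shape and
    pose uncertainty propagates into the optimization instead of being
    ignored; the paper's two actual losses differ in detail.
    """
    return (0.5 * torch.exp(-log_var) * (pred - target) ** 2
            + 0.5 * log_var).mean()
```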
arXiv Detail & Related papers (2023-09-17T00:48:19Z)
- SADIR: Shape-Aware Diffusion Models for 3D Image Reconstruction [2.2954246824369218]
3D image reconstruction from a limited number of 2D images has been a long-standing challenge in computer vision and image analysis.
We propose a shape-aware network based on diffusion models for 3D image reconstruction, named SADIR, to address these issues.
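
For intuition only, here is one generic conditional DDPM sampling step of the kind diffusion-based reconstruction iterates; how SADIR injects its shape prior is not detailed here, so the `cond` argument is an assumption:

```python
import torch

def ddpm_step(eps_model, x_t, t, cond, alphas, alpha_bars):
    """One reverse-diffusion step x_t -> x_{t-1} (DDPM, sigma_t^2 = beta_t)."""
    a_t, ab_t = alphas[t], alpha_bars[t]
    eps = eps_model(x_t, t, cond)       # noise prediction, shape-aware via cond
    mean = (x_t - (1 - a_t) / torch.sqrt(1 - ab_t) * eps) / torch.sqrt(a_t)
    if t == 0:
        return mean                     # final step is deterministic
    return mean + torch.sqrt(1 - a_t) * torch.randn_like(x_t)
```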
arXiv Detail & Related papers (2023-09-06T19:30:22Z)
- A Fusion of Variational Distribution Priors and Saliency Map Replay for Continual 3D Reconstruction [1.2289361708127877]
Single-image 3D reconstruction is a research challenge focused on predicting 3D object shapes from single-view images.
This task requires significant data acquisition to predict both visible and occluded portions of the shape.
We propose a continual-learning-based 3D reconstruction method whose goal is a model, built with Variational Priors, that can still reconstruct previously seen classes reasonably well even after training on new classes.
arXiv Detail & Related papers (2023-08-17T06:48:55Z)
- MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare [84.80956484848505]
MegaPose is a method to estimate the 6D pose of novel objects, that is, objects unseen during training.
First, we present a 6D pose refiner based on a render-and-compare strategy that can be applied to novel objects.
Second, we introduce a novel approach to coarse pose estimation that leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
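
A minimal sketch of that render-and-compare refinement loop; `render` and `refiner` are hypothetical callables standing in for MegaPose's renderer and refiner network:

```python
import torch

def refine_pose(refiner, render, observed, mesh, pose, n_iters=5):
    """Iteratively update a 6D pose estimate for a novel object.

    refiner:  network mapping an (observed, rendered) image pair to a
              4x4 pose correction (hypothetical interface)
    render:   renders `mesh` at the current `pose` hypothesis
    observed: cropped RGB observation of the object
    pose:     4x4 rigid transform from the coarse estimator
    """
    for _ in range(n_iters):
        rendered = render(mesh, pose)        # synthesize the hypothesis
        delta = refiner(observed, rendered)  # predicted correction
        pose = delta @ pose                  # left-compose the update
    return pose
```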
arXiv Detail & Related papers (2022-12-13T19:30:03Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles single-view 3D mesh reconstruction in order to study model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Discovering 3D Parts from Image Collections [98.16987919686709]
We tackle the problem of 3D part discovery from only 2D image collections.
Instead of relying on manually annotated parts for supervision, we propose a self-supervised approach.
Our key insight is to learn a novel part shape prior that allows each part to fit an object shape faithfully while constrained to have simple geometry.
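
As an illustrative stand-in for such a prior, parts can be modeled as spheres fitted with a loss that trades faithful surface coverage against a compactness penalty; the paper instead learns its part prior rather than hand-coding primitives:

```python
import torch

def sphere_part_loss(points, centers, radii, simplicity_weight=0.1):
    """points: (N, 3) surface samples; centers: (P, 3); radii: (P,)."""
    # Distance from each surface point to each sphere's surface: (P, N).
    d = torch.cdist(centers, points) - radii.unsqueeze(1)
    coverage = d.abs().min(dim=0).values.mean()  # fit the object faithfully
    simplicity = (radii ** 2).mean()             # keep each part simple/compact
    return coverage + simplicity_weight * simplicity
```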
arXiv Detail & Related papers (2021-07-28T20:29:16Z)
- From Points to Multi-Object 3D Reconstruction [71.17445805257196]
We propose a method to detect and reconstruct multiple 3D objects from a single RGB image.
A keypoint detector localizes objects as center points and directly predicts all object properties, including 9-DoF bounding boxes and 3D shapes.
The presented approach performs lightweight reconstruction in a single stage; it is real-time capable, fully differentiable, and end-to-end trainable.
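
The center-point formulation echoes CenterNet-style decoding: find peaks in a center heatmap, then read per-object regressions (9-DoF boxes, shape codes) at those locations. A minimal sketch with illustrative tensor names:

```python
import torch
import torch.nn.functional as F

def decode_centers(heatmap, properties, k=10):
    """heatmap: (1, H, W) center scores; properties: (C, H, W) per-pixel
    regressions (e.g. 9-DoF box parameters and shape codes)."""
    # Keep only local maxima: a peak equals its own 3x3 max-pooled value.
    pooled = F.max_pool2d(heatmap.unsqueeze(0), 3, stride=1, padding=1)[0]
    peaks = heatmap * (pooled == heatmap).float()
    scores, idx = peaks.flatten().topk(k)  # top-k candidate centers
    w = heatmap.shape[-1]
    ys = torch.div(idx, w, rounding_mode="floor")
    xs = idx % w
    objects = properties[:, ys, xs].T      # (k, C): one row per object
    return scores, ys, xs, objects
```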
arXiv Detail & Related papers (2020-12-21T18:52:21Z)
- Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
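
A hedged sketch of that idea: predict per-point offsets that deform a pre-learned categorical prior into the observed instance. The network below is a placeholder, not the paper's architecture:

```python
import torch
import torch.nn as nn

class PriorDeformer(nn.Module):
    """Deform a categorical shape prior into an instance reconstruction."""
    def __init__(self, feat_dim=128):
        super().__init__()
        # Per-point offset field conditioned on RGB-D image features.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 3))

    def forward(self, feat, prior_points):
        """feat: (feat_dim,) image features; prior_points: (N, 3) prior."""
        f = feat.unsqueeze(0).expand(prior_points.shape[0], -1)
        offsets = self.mlp(torch.cat([f, prior_points], dim=-1))
        return prior_points + offsets  # instance shape = prior + deformation
```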
arXiv Detail & Related papers (2020-07-16T16:45:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.