Learning to Detect 3D Reflection Symmetry for Single-View Reconstruction
- URL: http://arxiv.org/abs/2006.10042v1
- Date: Wed, 17 Jun 2020 17:58:59 GMT
- Title: Learning to Detect 3D Reflection Symmetry for Single-View Reconstruction
- Authors: Yichao Zhou, Shichen Liu, Yi Ma
- Abstract summary: 3D reconstruction from a single RGB image is a challenging problem in computer vision.
Previous methods are usually solely data-driven, which leads to inaccurate 3D shape recovery and limited generalization capability.
We present a geometry-based end-to-end deep learning framework that first detects the mirror plane of reflection symmetry that commonly exists in man-made objects and then predicts depth maps by finding the intra-image pixel-wise correspondence of the symmetry.
- Score: 32.14605731030579
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D reconstruction from a single RGB image is a challenging problem in
computer vision. Previous methods are usually solely data-driven, which leads to
inaccurate 3D shape recovery and limited generalization capability. In this
work, we focus on object-level 3D reconstruction and present a geometry-based
end-to-end deep learning framework that first detects the mirror plane of
reflection symmetry that commonly exists in man-made objects and then predicts
depth maps by finding the intra-image pixel-wise correspondence of the
symmetry. Our method fully utilizes the geometric cues from symmetry during the
test time by building plane-sweep cost volumes, a powerful tool that has been
used in multi-view stereopsis. To our knowledge, this is the first work that
uses the concept of cost volumes in the setting of single-image 3D
reconstruction. We conduct extensive experiments on the ShapeNet dataset and
find that our reconstruction method significantly outperforms the previous
state-of-the-art single-view 3D reconstruction networks in terms of the accuracy
of camera poses and depth maps, without requiring objects to be completely
symmetric. Code is available at https://github.com/zhou13/symmetrynet.
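To make the plane-sweep idea above concrete, here is a minimal NumPy sketch of symmetry-based plane sweeping; it is not the authors' implementation (see the repository above for that). It assumes a pinhole camera with intrinsics K, a mirror plane given in camera coordinates by a unit normal n and offset d, a per-pixel feature map, and a list of candidate depths; the function names reflect_points and symmetry_cost_volume are illustrative, not taken from the codebase.
```python
# Hedged sketch of symmetry-based plane sweeping (not the authors' code).
# Assumptions: pinhole intrinsics K, mirror plane n·x + d = 0 with |n| = 1
# in camera coordinates, a (H, W, C) feature map, and candidate depths.
import numpy as np

def reflect_points(X, n, d):
    """Reflect 3D points X (N, 3) across the plane n·x + d = 0, with |n| = 1."""
    dist = X @ n + d                               # signed distance to the plane
    return X - 2.0 * dist[:, None] * n[None, :]

def symmetry_cost_volume(feat, K, n, d, depths):
    """Build a (D, H, W) cost volume over candidate depths.

    For each depth hypothesis, every pixel is back-projected to 3D, reflected
    across the mirror plane, and reprojected into the same image; the cost is
    the feature distance between the pixel and its mirrored correspondence.
    """
    H, W, C = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)], axis=0)   # (3, HW)
    rays = np.linalg.inv(K) @ pix                                      # back-projected rays
    volume = np.empty((len(depths), H, W))
    for i, z in enumerate(depths):
        X = (rays * z).T                           # hypothesised 3D points, (HW, 3)
        Xr = reflect_points(X, n, d)               # mirror them across the plane
        proj = K @ Xr.T                            # reproject into the same image
        u = np.clip(np.round(proj[0] / proj[2]).astype(int), 0, W - 1)
        v = np.clip(np.round(proj[1] / proj[2]).astype(int), 0, H - 1)
        warped = feat[v, u]                        # nearest-neighbour feature sampling
        cost = np.linalg.norm(feat.reshape(-1, C) - warped, axis=1)
        volume[i] = cost.reshape(H, W)
    return volume
```
The per-pixel depth estimate is then the hypothesis with the lowest cost, e.g. depths[np.argmin(volume, axis=0)]; in the paper's setting the features and the mirror plane are produced by learned networks rather than given, so the pipeline can be trained end to end.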
Related papers
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction [5.107705550575662]
LIST is a novel neural architecture that leverages local and global image features to reconstruct the geometric and topological structure of a 3D object from a single image.
We show the superiority of our model in reconstructing 3D objects from both synthetic and real-world images against the state of the art.
arXiv Detail & Related papers (2023-07-23T01:01:27Z)
- 3D Surface Reconstruction in the Wild by Deforming Shape Priors from Synthetic Data [24.97027425606138]
Reconstructing the underlying 3D surface of an object from a single image is a challenging problem.
We present a new method for joint category-specific 3D reconstruction and object pose estimation from a single image.
Our approach achieves state-of-the-art reconstruction performance across several real-world datasets.
arXiv Detail & Related papers (2023-02-24T20:37:27Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles single-view 3D mesh reconstruction, studying model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- SNeS: Learning Probably Symmetric Neural Surfaces from Incomplete Data [77.53134858717728]
We build on the strengths of recent advances in neural reconstruction and rendering such as Neural Radiance Fields (NeRF).
We apply a soft symmetry constraint to the 3D geometry and material properties, having factored appearance into lighting, albedo colour and reflectivity.
We show that it can reconstruct unobserved regions with high fidelity and render high-quality novel view images.
arXiv Detail & Related papers (2022-06-13T17:37:50Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- Toward Realistic Single-View 3D Object Reconstruction with Unsupervised Learning from Multiple Images [18.888384816156744]
We propose a novel unsupervised algorithm to learn a 3D reconstruction network from a multi-image dataset.
Our algorithm is more general and covers the symmetry-required scenario as a special case.
Our method surpasses the previous work in both quality and robustness.
arXiv Detail & Related papers (2021-09-06T08:34:04Z)
- Hybrid Approach for 3D Head Reconstruction: Using Neural Networks and Visual Geometry [3.970492757288025]
We present a novel method for reconstructing 3D heads from a single or multiple image(s) using a hybrid approach based on deep learning and geometric techniques.
We propose an encoder-decoder network based on the U-net architecture and trained on synthetic data only.
arXiv Detail & Related papers (2021-04-28T11:31:35Z)
- NeRD: Neural 3D Reflection Symmetry Detector [27.626579746101292]
We present NeRD, a Neural 3D Reflection Symmetry Detector.
We first enumerate the symmetry planes with a coarse-to-fine strategy and then find the best ones by building 3D cost volumes.
Our experiments show that the symmetry planes detected with our method are significantly more accurate than the planes from direct CNN regression.
arXiv Detail & Related papers (2021-04-19T17:25:51Z)
- From Points to Multi-Object 3D Reconstruction [71.17445805257196]
We propose a method to detect and reconstruct multiple 3D objects from a single RGB image.
A keypoint detector localizes objects as center points and directly predicts all object properties, including 9-DoF bounding boxes and 3D shapes.
The presented approach performs lightweight reconstruction in a single stage; it is real-time capable, fully differentiable, and end-to-end trainable.
arXiv Detail & Related papers (2020-12-21T18:52:21Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.