Geodesic-HOF: 3D Reconstruction Without Cutting Corners
- URL: http://arxiv.org/abs/2006.07981v1
- Date: Sun, 14 Jun 2020 18:59:06 GMT
- Title: Geodesic-HOF: 3D Reconstruction Without Cutting Corners
- Authors: Ziyun Wang, Eric A. Mitchell, Volkan Isler, Daniel D. Lee
- Abstract summary: Single-view 3D object reconstruction is a challenging fundamental problem in computer vision.
We learn an image-conditioned mapping function from a canonical sampling domain to a high dimensional space.
We find that this learned geodesic embedding space provides useful information for applications such as unsupervised object decomposition.
- Score: 42.4960665928525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single-view 3D object reconstruction is a challenging fundamental problem in
computer vision, largely due to the morphological diversity of objects in the
natural world. In particular, high curvature regions are not always captured
effectively by methods trained using only set-based loss functions, resulting
in reconstructions short-circuiting the surface or cutting corners. To address
this issue, we propose learning an image-conditioned mapping function from a
canonical sampling domain to a high dimensional space where the Euclidean
distance is equal to the geodesic distance on the object. The first three
dimensions of a mapped sample correspond to its 3D coordinates. The additional
lifted components contain information about the underlying geodesic structure.
Our results show that taking advantage of these learned lifted coordinates
yields better performance for estimating surface normals and generating
surfaces than using point cloud reconstructions alone. Further, we find that
this learned geodesic embedding space provides useful information for
applications such as unsupervised object decomposition.
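The abstract describes a mapping from a canonical sampling domain to a lifted space whose first three dimensions are the 3D coordinates and whose pairwise Euclidean distances are trained to match geodesic distances on the surface. A minimal sketch of that idea is below; it is not the authors' implementation, and the network shape, feature dimensions, and loss are illustrative assumptions only:

```python
import torch
import torch.nn as nn

class GeodesicHOFSketch(nn.Module):
    """Illustrative sketch (NOT the authors' code): an image-conditioned
    mapping from canonical 3D samples to a (3 + k)-dimensional lifted space.
    The first 3 output dimensions play the role of 3D coordinates; the
    remaining k carry the learned geodesic structure."""

    def __init__(self, feat_dim=256, lifted_dim=16):
        super().__init__()
        self.out_dim = 3 + lifted_dim
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, self.out_dim),
        )

    def forward(self, samples, image_feat):
        # samples: (N, 3) points from a canonical domain (e.g. the unit sphere)
        # image_feat: (feat_dim,) global image embedding (assumed given)
        feat = image_feat.expand(samples.shape[0], -1)
        return self.mlp(torch.cat([samples, feat], dim=-1))

def geodesic_embedding_loss(lifted, geodesic_dists):
    """Penalize the gap between Euclidean distances in the lifted space
    and ground-truth geodesic distances on the object surface."""
    # lifted: (N, 3 + k); geodesic_dists: (N, N)
    pairwise = torch.cdist(lifted, lifted)
    return ((pairwise - geodesic_dists) ** 2).mean()
```

Minimizing such a loss pushes the lifted Euclidean metric toward the surface geodesic metric, which is one plausible reading of why the extra dimensions help with normals and decomposition.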
Related papers
- Normal-guided Detail-Preserving Neural Implicit Functions for High-Fidelity 3D Surface Reconstruction [6.4279213810512665]
Current methods for learning neural implicit representations from RGB or RGBD images produce 3D surfaces with missing parts and details.
This paper demonstrates that training neural representations with first-order differential properties, i.e. surface normals, leads to highly accurate 3D surface reconstruction.
arXiv Detail & Related papers (2024-06-07T11:48:47Z) - LISR: Learning Linear 3D Implicit Surface Representation Using Compactly
Supported Radial Basis Functions [5.056545768004376]
Implicit 3D surface reconstruction of an object from its partial and noisy 3D point cloud scan is a classical geometry processing and 3D computer vision problem.
We propose a neural network architecture for learning the linear implicit shape representation of the 3D surface of an object.
The proposed approach achieves better Chamfer distance and comparable F-score than the state-of-the-art approach on the benchmark dataset.
arXiv Detail & Related papers (2024-02-11T20:42:49Z) - LIST: Learning Implicitly from Spatial Transformers for Single-View 3D
Reconstruction [5.107705550575662]
LIST is a novel neural architecture that leverages local and global image features to reconstruct the geometric and topological structure of a 3D object from a single image.
We show the superiority of our model in reconstructing 3D objects from both synthetic and real-world images against the state of the art.
arXiv Detail & Related papers (2023-07-23T01:01:27Z) - Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud
Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z) - GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided
Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cube module.
arXiv Detail & Related papers (2022-11-30T06:02:01Z) - Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles single-view 3D mesh reconstruction with a focus on model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z) - RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction [19.535169371240073]
We introduce RfD-Net that jointly detects and reconstructs dense object surfaces directly from point clouds.
We decouple the instance reconstruction into global object localization and local shape prediction.
Our approach consistently outperforms the state of the art, improving mesh IoU by over 11 points in object reconstruction.
arXiv Detail & Related papers (2020-11-30T12:58:05Z) - Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view
Human Reconstruction [97.3274868990133]
Geo-PIFu is a method to recover a 3D mesh from a monocular color image of a clothed person.
We show that, by both encoding query points and constraining global shape using latent voxel features, the reconstruction we obtain for clothed human meshes exhibits less shape distortion and improved surface details compared to competing methods.
arXiv Detail & Related papers (2020-06-15T01:11:48Z) - Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from
a Single RGB Image [102.44347847154867]
We propose a novel formulation that jointly recovers the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.