ANISE: Assembly-based Neural Implicit Surface rEconstruction
- URL: http://arxiv.org/abs/2205.13682v2
- Date: Wed, 5 Jul 2023 19:06:55 GMT
- Title: ANISE: Assembly-based Neural Implicit Surface rEconstruction
- Authors: Dmitry Petrov, Matheus Gadelha, Radomir Mech, Evangelos Kalogerakis
- Abstract summary: We present ANISE, a method that reconstructs a 3D shape from partial observations (images or sparse point clouds) using a part-aware neural implicit shape representation.
The shape is formulated as an assembly of neural implicit functions, each representing a different part instance.
We demonstrate that, when performing reconstruction by decoding part representations into implicit functions, our method achieves state-of-the-art part-aware reconstruction results from both images and sparse point clouds.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ANISE, a method that reconstructs a 3D shape from partial
observations (images or sparse point clouds) using a part-aware neural implicit
shape representation. The shape is formulated as an assembly of neural implicit
functions, each representing a different part instance. In contrast to previous
approaches, the prediction of this representation proceeds in a coarse-to-fine
manner. Our model first reconstructs a structural arrangement of the shape in
the form of geometric transformations of its part instances. Conditioned on
them, the model predicts part latent codes encoding their surface geometry.
Reconstructions can be obtained in two ways: (i) by directly decoding the part
latent codes to part implicit functions, then combining them into the final
shape; or (ii) by using part latents to retrieve similar part instances in a
part database and assembling them in a single shape. We demonstrate that, when
performing reconstruction by decoding part representations into implicit
functions, our method achieves state-of-the-art part-aware reconstruction
results from both images and sparse point clouds. When reconstructing shapes by
assembling parts retrieved from a dataset, our approach substantially
outperforms traditional shape retrieval methods, even when the database size is
significantly restricted. We present our results on well-known sparse
point cloud reconstruction and single-view reconstruction benchmarks.
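Both reconstruction routes in the abstract ultimately combine per-part implicit functions into a single shape. A minimal sketch of that assembly step, assuming parts are represented as occupancy functions (all names here are hypothetical illustrations, not the ANISE implementation):

```python
import numpy as np

def union_occupancy(part_fns, points):
    """Assemble per-part implicit functions into one shape.

    part_fns: list of callables mapping (N, 3) points -> (N,) occupancy
              values in [0, 1]; points inside a part score close to 1.
    A point belongs to the assembled shape if ANY part contains it,
    so the union is the per-point maximum over parts.
    """
    occ = np.stack([f(points) for f in part_fns], axis=0)  # (P, N)
    return occ.max(axis=0)                                 # (N,)

# Toy example: two spheres standing in for decoded part implicits.
def sphere(center, radius=1.0):
    return lambda p: (np.linalg.norm(p - center, axis=-1) < radius).astype(float)

parts = [sphere(np.array([0.0, 0.0, 0.0])),
         sphere(np.array([1.5, 0.0, 0.0]))]
pts = np.array([[0.0, 0.0, 0.0],   # inside the first part only
                [1.5, 0.0, 0.0],   # inside the second part only
                [5.0, 0.0, 0.0]])  # outside both
print(union_occupancy(parts, pts))  # -> [1. 1. 0.]
```

A final mesh would then be extracted from this combined field (e.g. via marching cubes); the same max-union works whether the part functions come from decoded latents or from parts retrieved from a database.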
Related papers
- Part123: Part-aware 3D Reconstruction from a Single-view Image
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image.
We introduce contrastive learning into a neural rendering framework to learn a part-aware feature space.
A clustering-based algorithm is also developed to automatically derive 3D part segmentation results from the reconstructed models.
arXiv Detail & Related papers (2024-05-27T07:10:21Z)
- Learning to generate shape from global-local spectra
We build our method on top of recent advances on the so called shape-from-spectrum paradigm.
We consider the spectrum a natural, ready-to-use representation for encoding the variability of shapes.
Our results confirm the improvement of the proposed approach in comparison to existing and alternative methods.
arXiv Detail & Related papers (2021-08-04T16:39:56Z)
- Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks
We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN)
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
arXiv Detail & Related papers (2021-03-18T17:59:31Z)
- RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction
We introduce RfD-Net that jointly detects and reconstructs dense object surfaces directly from point clouds.
We decouple the instance reconstruction into global object localization and local shape prediction.
Our approach consistently outperforms the state of the art, improving mesh IoU in object reconstruction by over 11 points.
arXiv Detail & Related papers (2020-11-30T12:58:05Z)
- A Divide et Impera Approach for 3D Shape Reconstruction from Multiple Views
Estimating the 3D shape of an object from a single or multiple images has gained popularity thanks to the recent breakthroughs powered by deep learning.
This paper proposes to rely on viewpoint variant reconstructions by merging the visible information from the given views.
To validate the proposed method, we perform a comprehensive evaluation on the ShapeNet reference benchmark in terms of relative pose estimation and 3D shape reconstruction.
arXiv Detail & Related papers (2020-11-17T09:59:32Z)
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image
We propose a novel formulation that jointly recovers the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering
Recent work has succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
- Convolutional Occupancy Networks
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
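The core query pattern in this line of work, pairing a convolutional feature grid with an implicit occupancy decoder, can be sketched briefly. This is an illustrative stand-in, not the paper's implementation: the feature grid is random here (a convolutional encoder would produce it from the input), the lookup is nearest-neighbor rather than the paper's trilinear interpolation, and the decoder weights are untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

G = 8  # grid resolution (hypothetical)
C = 4  # feature channels (hypothetical)
# Stand-in for the output of a convolutional encoder over the input observation.
feature_grid = rng.normal(size=(G, G, G, C))

def lookup(grid, p):
    """Nearest-neighbor lookup of grid features at a point p in [0, 1]^3."""
    idx = np.clip((p * (G - 1)).round().astype(int), 0, G - 1)
    return grid[idx[0], idx[1], idx[2]]

# A tiny MLP decoder mapping (query point, local feature) -> occupancy.
W1 = rng.normal(size=(3 + C, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1));     b2 = np.zeros(1)

def occupancy(p):
    h = np.maximum(np.concatenate([p, lookup(feature_grid, p)]) @ W1 + b1, 0)
    logit = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logit))  # occupancy probability in (0, 1)

print(occupancy(np.array([0.5, 0.5, 0.5])))
```

Conditioning the decoder on locally interpolated features, rather than a single global latent code, is what lets such models scale from single objects to large scenes.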
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.