Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible
Neural Networks
- URL: http://arxiv.org/abs/2103.10429v1
- Date: Thu, 18 Mar 2021 17:59:31 GMT
- Title: Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible
Neural Networks
- Authors: Despoina Paschalidou and Angelos Katharopoulos and Andreas Geiger and
Sanja Fidler
- Abstract summary: We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN).
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
- Score: 118.20778308823779
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Impressive progress in 3D shape extraction led to representations that can
capture object geometries with high fidelity. In parallel, primitive-based
methods seek to represent objects as semantically consistent part arrangements.
However, due to the simplicity of existing primitive representations, these
methods fail to accurately reconstruct 3D shapes using a small number of
primitives/parts. We address the trade-off between reconstruction quality and
number of parts with Neural Parts, a novel 3D primitive representation that
defines primitives using an Invertible Neural Network (INN) which implements
homeomorphic mappings between a sphere and the target object. The INN allows us
to compute the inverse mapping of the homeomorphism, which in turn, enables the
efficient computation of both the implicit surface function of a primitive and
its mesh, without any additional post-processing. Our model learns to parse 3D
objects into semantically consistent part arrangements without any part-level
supervision. Evaluations on ShapeNet, D-FAUST and FreiHAND demonstrate that our
primitives can capture complex geometries and thus simultaneously achieve
geometrically accurate as well as interpretable reconstructions using an order
of magnitude fewer primitives than state-of-the-art shape abstraction methods.
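The forward/inverse duality described above can be illustrated with a short sketch. The following is a minimal, hypothetical example, not the authors' released code: an affine-coupling INN maps unit-sphere samples to surface points, giving an explicit point cloud / mesh vertices, while its closed-form inverse gives the implicit test, since a query point lies inside the primitive exactly when its preimage lies inside the unit sphere. Class names, layer sizes and the `inside` helper are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): an affine-coupling INN that maps
# points on a unit sphere to points on a primitive's surface.  The forward pass turns
# sphere samples into surface points; the inverse pass gives an implicit membership
# test, since a point is inside the primitive iff its preimage lies inside the sphere.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineCoupling(nn.Module):
    """One invertible coupling layer acting on 3D points."""
    def __init__(self, hidden=64):
        super().__init__()
        # Scale/shift of the last two coordinates is conditioned on the first one.
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, x):
        x1, x2 = x[:, :1], x[:, 1:]           # split coordinates
        s, t = self.net(x1).chunk(2, dim=-1)  # scale and shift, each (N, 2)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, y):
        y1, y2 = y[:, :1], y[:, 1:]
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)

class NeuralPrimitive(nn.Module):
    """Stack of coupling layers and coordinate permutations: an invertible map
    taking the unit sphere to the primitive's surface."""
    def __init__(self, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling() for _ in range(n_layers)])

    def forward(self, x):                     # sphere points -> surface points
        for layer in self.layers:
            x = layer(x)
            x = x.roll(1, dims=-1)            # permute coords so all of them get updated
        return x

    def inverse(self, y):                     # query points -> sphere coordinates
        for layer in reversed(self.layers):
            y = y.roll(-1, dims=-1)
            y = layer.inverse(y)
        return y

    def inside(self, x):                      # implicit function via the inverse mapping
        return self.inverse(x).norm(dim=-1) <= 1.0

# Usage: sample the sphere for an explicit surface, query arbitrary points for occupancy.
prim = NeuralPrimitive()
sphere = F.normalize(torch.randn(1024, 3), dim=-1)
surface_points = prim(sphere)                 # explicit representation
occupancy = prim.inside(torch.randn(8, 3))    # implicit representation, no post-processing
```

Affine couplings are used here only because both directions are available in closed form; the paper's INN is additionally conditioned on learned shape features and trained with reconstruction losses, which this sketch omits.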
Related papers
- Neural Template: Topology-aware Reconstruction and Disentangled
Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Compared with state-of-the-art methods, our approach produces high-quality meshes, particularly ones with diverse topologies.
arXiv Detail & Related papers (2022-06-10T08:32:57Z)
- ANISE: Assembly-based Neural Implicit Surface rEconstruction [12.745433575962842]
We present ANISE, a method that reconstructs a 3D shape from partial observations (images or sparse point clouds).
The shape is formulated as an assembly of neural implicit functions, each representing a different part instance (a minimal sketch of this assembly step follows the list below).
We demonstrate that, when performing reconstruction by decoding part representations into implicit functions, our method achieves state-of-the-art part-aware reconstruction results from both images and sparse point clouds.
arXiv Detail & Related papers (2022-05-27T00:01:40Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of
Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- 3DIAS: 3D Shape Reconstruction with Implicit Algebraic Surfaces [45.18497913809082]
Primitive-based representations approximate a 3D shape mainly by a set of simple implicit primitives.
We propose a constrained implicit algebraic surface as the primitive; it has few learnable coefficients yet higher geometric complexity.
Our method can semantically learn segments of 3D shapes in an unsupervised manner.
arXiv Detail & Related papers (2021-08-19T12:34:28Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D
Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Neural Star Domain as Primitive Representation [65.7313602687861]
We propose a novel primitive representation named neural star domain (NSD) that learns primitive shapes in the star domain.
NSD is a universal approximator of star domains and is parsimonious and semantic while serving as both an implicit and an explicit shape representation (a minimal star-domain sketch follows the list below).
We demonstrate that our approach outperforms existing methods in image reconstruction tasks, semantic capabilities, and speed and quality of sampling high-resolution meshes.
arXiv Detail & Related papers (2020-10-21T19:05:16Z)
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from
a Single RGB Image [102.44347847154867]
We propose a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
- Convolutional Occupancy Networks [88.48287716452002]
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
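The Neural Star Domain entry above describes primitives whose surface is a learned radius in every direction from a center, which is where its combined implicit/explicit representation comes from. Below is a minimal, hypothetical sketch of a star-domain primitive with the radius predicted by a small MLP; the class and method names are illustrative and this is not the parameterization used in the NSD paper.

```python
# Minimal sketch of a star-domain primitive: a surface point is center + r(d) * d for a
# unit direction d.  Explicit sampling and an implicit inside-test follow directly.
# (Illustrative only; not the NSD paper's parameterization.)
import torch
import torch.nn as nn
import torch.nn.functional as F

class StarPrimitive(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.center = nn.Parameter(torch.zeros(3))
        # Radius as a positive function of the unit direction.
        self.radius = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1), nn.Softplus())

    def surface(self, directions):
        """Explicit form: map unit directions to surface points."""
        return self.center + self.radius(directions) * directions

    def inside(self, points):
        """Implicit form: a point is inside iff its distance to the center
        does not exceed the learned radius along its direction."""
        offset = points - self.center
        dist = offset.norm(dim=-1, keepdim=True)
        d = F.normalize(offset, dim=-1)
        return (dist <= self.radius(d)).squeeze(-1)

# Usage: sample directions for a surface point cloud, query points for occupancy.
star = StarPrimitive()
dirs = F.normalize(torch.randn(1024, 3), dim=-1)
surface_pts = star.surface(dirs)
occ = star.inside(torch.randn(8, 3))
```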
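Several of the methods listed above (ANISE, 3DIAS, the hierarchical part decomposition work) share the same assembly step: the full shape is the union of per-part implicit functions, and part membership doubles as an unsupervised segmentation. The following is a minimal sketch of that union under the assumption that each part exposes an `inside(points)` occupancy test, as in the primitive sketches above; the helper names are hypothetical.

```python
# Minimal sketch of the assembly step: the full shape is the union of per-part
# occupancies.  Works with any primitive exposing an `inside(points)` test, such as
# the NeuralPrimitive or StarPrimitive sketches above (names are illustrative).
import torch

def shape_occupancy(parts, points):
    """A point is occupied if any part contains it; returns a (num_points,) bool tensor."""
    per_part = torch.stack([p.inside(points) for p in parts])  # (num_parts, num_points)
    return per_part.any(dim=0)

def part_labels(parts, points):
    """Index of a containing part per point (-1 for free space), giving an
    unsupervised part segmentation of the queried points."""
    per_part = torch.stack([p.inside(points) for p in parts])
    labels = per_part.float().argmax(dim=0)
    labels[~per_part.any(dim=0)] = -1
    return labels
```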