Neural Star Domain as Primitive Representation
- URL: http://arxiv.org/abs/2010.11248v2
- Date: Thu, 12 Nov 2020 14:22:27 GMT
- Title: Neural Star Domain as Primitive Representation
- Authors: Yuki Kawana, Yusuke Mukuta, Tatsuya Harada
- Abstract summary: We propose a novel primitive representation named neural star domain (NSD) that learns primitive shapes in the star domain.
NSD is a universal approximator of the star domain and is not only parsimonious and semantic but also an implicit and explicit shape representation.
We demonstrate that our approach outperforms existing methods in image reconstruction tasks, semantic capabilities, and speed and quality of sampling high-resolution meshes.
- Score: 65.7313602687861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing 3D objects from 2D images is a fundamental task in computer
vision. Accurate structured reconstruction by parsimonious and semantic
primitive representation further broadens its application. When reconstructing
a target shape with multiple primitives, it is preferable that one can
instantly access the union of basic properties of the shape such as collective
volume and surface, treating the primitives as if they are one single shape.
This becomes possible by primitive representation with unified implicit and
explicit representations. However, primitive representations in current
approaches do not satisfy all of the above requirements at the same time. To
solve this problem, we propose a novel primitive representation named neural
star domain (NSD) that learns primitive shapes in the star domain. We show that
NSD is a universal approximator of the star domain and is not only parsimonious
and semantic but also an implicit and explicit shape representation. We
demonstrate that our approach outperforms existing methods in image
reconstruction tasks, semantic capabilities, and speed and quality of sampling
high-resolution meshes.
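A minimal sketch of why a star-domain primitive is simultaneously an explicit and an implicit representation, as the abstract claims: every surface point of a star domain is visible from a chosen center, so a radius function over unit directions gives explicit surface samples c + r(d)·d, while comparing ||p - c|| against the radius along p's direction gives an implicit occupancy test, and the union over several primitives is just a max/any over those tests. The class name, the toy radius function, and the union helper below are illustrative assumptions, not the paper's implementation; NSD learns the radius function (and the primitive decomposition) from the input image.
```python
# Minimal sketch (not the authors' code): a star-domain primitive exposes both
# an explicit surface parameterization and an implicit occupancy test from the
# same radius function r over unit directions.
import numpy as np

class StarDomainPrimitive:
    def __init__(self, center, radius_fn):
        self.center = np.asarray(center, dtype=float)  # star center c
        self.radius_fn = radius_fn                      # r: unit direction -> positive radius

    def explicit_surface(self, directions):
        """Explicit form: map unit directions d to surface points c + r(d) * d."""
        d = directions / np.linalg.norm(directions, axis=-1, keepdims=True)
        r = self.radius_fn(d)[..., None]
        return self.center + r * d

    def implicit_occupancy(self, points):
        """Implicit form: a query p is inside iff ||p - c|| <= r((p - c)/||p - c||)."""
        v = points - self.center
        dist = np.linalg.norm(v, axis=-1)
        d = v / np.maximum(dist[..., None], 1e-12)
        return dist <= self.radius_fn(d)

def union_occupancy(primitives, points):
    """Collective shape of several primitives: inside the union iff inside any one."""
    return np.any([p.implicit_occupancy(points) for p in primitives], axis=0)

# Toy radius function (an ellipsoid-like star shape); in NSD such a function
# would be predicted per primitive, e.g. as coefficients of a spherical basis.
def toy_radius(d):
    return 1.0 / np.sqrt((d[..., 0] / 1.5) ** 2 + d[..., 1] ** 2 + d[..., 2] ** 2)

prim = StarDomainPrimitive(center=[0.0, 0.0, 0.0], radius_fn=toy_radius)
surface_pts = prim.explicit_surface(np.random.randn(1024, 3))       # explicit sampling
inside = union_occupancy([prim], np.random.uniform(-2, 2, (1024, 3)))  # implicit query
```
Because the explicit samples and the implicit test share one radius function, collective properties such as the union surface or volume can be queried without converting between representations, which is the property the abstract highlights.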
Related papers
- DeFormer: Integrating Transformers with Deformable Models for 3D Shape Abstraction from a Single Image [31.154786931081087]
We propose a novel bi-channel Transformer architecture, integrated with parameterized deformable models, to simultaneously estimate the global and local deformations of primitives.
DeFormer achieves better reconstruction accuracy than the state of the art and provides consistent semantic correspondences for improved interpretability.
arXiv Detail & Related papers (2023-09-22T02:46:43Z)
- Neural Vector Fields: Implicit Representation by Explicit Learning [63.337294707047036]
We propose a novel 3D representation method, Neural Vector Fields (NVF).
It not only adopts an explicit learning process to manipulate meshes directly but also leverages the implicit representation of unsigned distance functions (UDFs).
Our method first predicts displacement queries towards the surface and models shapes as vector fields.
arXiv Detail & Related papers (2023-03-08T02:36:09Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- 3DIAS: 3D Shape Reconstruction with Implicit Algebraic Surfaces [45.18497913809082]
Primitive-based representations approximate a 3D shape mainly by a set of simple implicit primitives.
We propose a constrained implicit algebraic surface as the primitive, with few learnable coefficients yet higher geometric complexity.
Our method can semantically learn segments of 3D shapes in an unsupervised manner.
arXiv Detail & Related papers (2021-08-19T12:34:28Z)
- GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement [26.151968529063762]
We develop a new model, GENESIS-V2, which can infer a variable number of object representations without using RNNs or iterative refinement.
We show that GENESIS-V2 outperforms previous methods for unsupervised image segmentation and object-centric scene generation on established synthetic datasets.
arXiv Detail & Related papers (2021-04-20T14:59:27Z)
- Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks [118.20778308823779]
We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN).
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
arXiv Detail & Related papers (2021-03-18T17:59:31Z)
- ShaRF: Shape-conditioned Radiance Fields from a Single View [54.39347002226309]
We present a method for estimating neural scene representations of objects given only a single image.
The core of our method is the estimation of a geometric scaffold for the object.
We demonstrate in several experiments the effectiveness of our approach in both synthetic and real images.
arXiv Detail & Related papers (2021-02-17T16:40:28Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Continuous Surface Embeddings [76.86259029442624]
We focus on the task of learning and representing dense correspondences in deformable object categories.
We propose a new, learnable image-based representation of dense correspondences.
We demonstrate that the proposed approach performs on par or better than the state-of-the-art methods for dense pose estimation for humans.
arXiv Detail & Related papers (2020-11-24T22:52:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.