Latent Partition Implicit with Surface Codes for 3D Representation
- URL: http://arxiv.org/abs/2207.08631v2
- Date: Thu, 21 Jul 2022 02:22:32 GMT
- Title: Latent Partition Implicit with Surface Codes for 3D Representation
- Authors: Chao Chen, Yu-Shen Liu, Zhizhong Han
- Abstract summary: We introduce a novel implicit representation to represent a single 3D shape as a set of parts in the latent space.
We name our method Latent Partition Implicit (LPI) for its ability to cast global shape modeling into multiple local part modeling problems.
- Score: 54.966603013209685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep implicit functions have shown remarkable shape modeling ability in
various 3D computer vision tasks. One drawback is that it is hard for them to
represent a 3D shape as multiple parts. Current solutions learn various
primitives and blend them directly in spatial space, but they still struggle
to approximate the 3D shape accurately. To resolve this problem, we
introduce a novel implicit representation to represent a single 3D shape as a
set of parts in the latent space, towards both highly accurate and plausibly
interpretable shape modeling. Our insight is that both part learning and
part blending can be conducted much more easily in the latent space than in
the spatial space. We name our method Latent Partition Implicit (LPI) for its
ability to cast global shape modeling into multiple local part modeling
problems, which partitions the global shape unity. LPI represents a shape as
Signed Distance Functions (SDFs) using surface codes. Each surface code is a
latent code representing a part whose center is on the surface, which enables
us to flexibly employ intrinsic attributes of shapes or additional surface
properties. Eventually, LPI can reconstruct both the shape and the parts on the
shape, both of which are plausible meshes. LPI is a multi-level representation,
which can partition a shape into different numbers of parts after training. LPI
can be learned without ground truth signed distances, point normals or any
supervision for part partition. LPI outperforms the latest methods under the
widely used benchmarks in terms of reconstruction accuracy and modeling
interpretability. Our code, data and models are available at
https://github.com/chenchao15/LPI.
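The core idea of the abstract, per-part latent codes anchored at surface points whose local predictions are combined into a single SDF, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-part decoder is replaced by an analytic sphere SDF, and the distance-softmax blending is an illustrative assumption.

```python
import numpy as np

# Hypothetical sketch of the LPI idea: each "surface code" is a latent vector
# attached to a part center lying on the shape surface, and a query point's
# SDF is assembled from per-part predictions. The per-part decoder below is a
# stand-in (a sphere SDF), not the paper's learned network, and the
# distance-softmax blending scheme is an illustrative assumption.

def part_sdf(query, center, radius):
    """Toy per-part decoder: signed distance to a sphere around the center."""
    return np.linalg.norm(query - center) - radius

def blended_sdf(query, centers, radii, temperature=0.1):
    """Blend per-part SDF predictions with weights that favor nearby parts."""
    dists = np.array([np.linalg.norm(query - c) for c in centers])
    weights = np.exp(-dists / temperature)
    weights /= weights.sum()
    local = np.array([part_sdf(query, c, r) for c, r in zip(centers, radii)])
    return float(weights @ local)

# Two overlapping spherical "parts" along the x-axis.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
radii = np.array([0.5, 0.5])

print(blended_sdf(np.array([0.0, 0.0, 0.0]), centers, radii))  # negative: inside a part
print(blended_sdf(np.array([5.0, 0.0, 0.0]), centers, radii))  # positive: outside
```

Because each part contributes only near its own center, the blended field stays local, which loosely mirrors why part-wise modeling can be more tractable than fitting one global function.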
Related papers
- ShapeClipper: Scalable 3D Shape Learning from Single-View Images via
Geometric and CLIP-based Consistency [39.7058456335011]
We present ShapeClipper, a novel method that reconstructs 3D object shapes from real-world single-view RGB images.
ShapeClipper learns shape reconstruction from a set of single-view segmented images.
We evaluate our method over three challenging real-world datasets.
arXiv Detail & Related papers (2023-04-13T03:53:12Z)
- MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D Segmentation [91.6658845016214]
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks.
We render a 3D shape from multiple views, and set up a dense correspondence learning task within the contrastive learning framework.
As a result, the learned 2D representations are view-invariant and geometrically consistent.
arXiv Detail & Related papers (2022-08-18T00:48:15Z)
- GIFS: Neural Implicit Function for General Shape Representation [23.91110763447458]
General Implicit Function for 3D Shape (GIFS) is a novel method to represent general shapes.
Instead of dividing 3D space into predefined inside-outside regions, GIFS encodes whether two points are separated by any surface.
Experiments on ShapeNet show that GIFS outperforms previous state-of-the-art methods in terms of reconstruction quality, rendering efficiency, and visual fidelity.
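The pairwise relation GIFS encodes can be illustrated with a toy watertight shape, where a sign comparison of a classical SDF already decides whether two points are separated by the surface. GIFS learns this relation directly, without any inside/outside definition; the sphere and function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy illustration of the pairwise relation GIFS encodes. For a watertight
# sphere an ordinary SDF exists, and two points are separated by the surface
# exactly when their signed distances have opposite signs. GIFS learns this
# relation directly, without inside/outside labels, so it also applies to
# open or non-watertight shapes; the sphere here is only a stand-in.

def sphere_sdf(p, radius=1.0):
    """Signed distance to a sphere of the given radius centered at the origin."""
    return np.linalg.norm(p) - radius

def separated(a, b, radius=1.0):
    """True if the sphere's surface lies between points a and b."""
    return sphere_sdf(a, radius) * sphere_sdf(b, radius) < 0

print(separated(np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])))  # True
print(separated(np.array([1.5, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])))  # False
```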
arXiv Detail & Related papers (2022-04-14T17:29:20Z)
- Neural Vector Fields for Implicit Surface Representation and Inference [73.25812045209001]
Implicit fields have recently shown increasing success in representing and learning 3D shapes accurately.
We develop a novel yet fundamental representation based on unit vectors in 3D space, which we call Vector Field (VF).
We show the advantages of VF representation, in learning open, closed, or multi-layered as well as piecewise planar surfaces.
arXiv Detail & Related papers (2022-04-13T17:53:34Z)
- Representing 3D Shapes with Probabilistic Directed Distance Fields [7.528141488548544]
We develop a novel shape representation that allows fast differentiable rendering within an implicit architecture.
We show how to model inherent discontinuities in the underlying field.
We also apply our method to fitting single shapes, unpaired 3D-aware generative image modelling, and single-image 3D reconstruction tasks.
arXiv Detail & Related papers (2021-12-10T02:15:47Z)
- SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation [50.53931728235875]
We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds.
Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details.
arXiv Detail & Related papers (2021-08-10T06:49:45Z)
- A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation [62.517760545209065]
We introduce Articulated Signed Distance Functions (A-SDF) to represent articulated shapes with a disentangled latent space.
We demonstrate that our model generalizes well to out-of-distribution and unseen data, e.g., partial point clouds and real-world depth images.
arXiv Detail & Related papers (2021-04-15T17:53:54Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- Local Implicit Grid Representations for 3D Scenes [24.331110387905962]
We introduce Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality.
We train an autoencoder to learn an embedding of local crops of 3D shapes at a fixed part scale.
Then, we use the decoder as a component in a shape optimization that solves for a set of latent codes on a regular grid of overlapping crops.
arXiv Detail & Related papers (2020-03-19T18:58:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.