DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape
Generation
- URL: http://arxiv.org/abs/2008.05440v4
- Date: Sat, 28 May 2022 17:40:15 GMT
- Title: DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape
Generation
- Authors: Jie Yang, Kaichun Mo, Yu-Kun Lai, Leonidas J. Guibas, Lin Gao
- Abstract summary: We introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes.
This supports a range of novel shape generation applications with disentangled control, such as interpolation of structure (geometry) while keeping geometry (structure) unchanged.
Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
- Score: 98.96086261213578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D shape generation is a fundamental operation in computer graphics. While
significant progress has been made, especially with recent deep generative
models, it remains a challenge to synthesize high-quality shapes with rich
geometric details and complex structure, in a controllable manner. To tackle
this, we introduce DSG-Net, a deep neural network that learns a disentangled
structured and geometric mesh representation for 3D shapes, where two key
aspects of shapes, geometry and structure, are encoded in a synergistic manner
to ensure plausibility of the generated shapes, while also being disentangled
as much as possible. This supports a range of novel shape generation
applications with disentangled control, such as interpolation of structure
(geometry) while keeping geometry (structure) unchanged. To achieve this, we
simultaneously learn structure and geometry through variational autoencoders
(VAEs) in a hierarchical manner for both, with bijective mappings at each
level. In this manner, we effectively encode geometry and structure in separate
latent spaces, while ensuring their compatibility: the structure is used to
guide the geometry and vice versa. At the leaf level, the part geometry is
represented using a conditional part VAE, to encode high-quality geometric
details, guided by the structure context as the condition. Our method not only
supports controllable generation applications but also produces high-quality
synthesized shapes, outperforming state-of-the-art methods. The code has been
released at https://github.com/IGLICT/DSG-Net.
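The abstract describes two hierarchically learned VAEs with separate latent spaces for structure and geometry, plus a conditional part VAE at the leaf level whose condition is the structure context. Below is a minimal, self-contained sketch of that idea in PyTorch; it is not the released DSG-Net code (see the GitHub link above), and all module names, feature dimensions, and loss weights are illustrative assumptions.

```python
# Sketch only: separate latent spaces for structure and geometry, with a
# conditional leaf-part decoder. Dimensions and weights are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BranchVAE(nn.Module):
    """One VAE branch (used for either the structure code or the geometry code)."""

    def __init__(self, in_dim, latent_dim, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar


class ConditionalPartDecoder(nn.Module):
    """Leaf-level part geometry decoder conditioned on the structure context."""

    def __init__(self, geo_latent, struct_latent, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(geo_latent + struct_latent, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, z_geo, z_struct):
        # Concatenation is the conditioning mechanism in this sketch; the
        # paper's actual conditioning may differ.
        return self.net(torch.cat([z_geo, z_struct], dim=-1))


def kl_divergence(mu, logvar):
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())


# Toy usage: because structure and geometry live in separate latent spaces,
# one code can be swapped or interpolated while the other is held fixed.
struct_vae = BranchVAE(in_dim=128, latent_dim=32)
geo_vae = BranchVAE(in_dim=256, latent_dim=64)
part_dec = ConditionalPartDecoder(geo_latent=64, struct_latent=32, out_dim=256)

struct_feat = torch.randn(4, 128)   # placeholder per-shape structure features
geo_feat = torch.randn(4, 256)      # placeholder per-part geometry features

_, s_mu, s_logvar = struct_vae(struct_feat)
_, g_mu, g_logvar = geo_vae(geo_feat)
z_s = struct_vae.reparameterize(s_mu, s_logvar)
z_g = geo_vae.reparameterize(g_mu, g_logvar)

recon = part_dec(z_g, z_s)
loss = F.mse_loss(recon, geo_feat) + 1e-3 * (kl_divergence(s_mu, s_logvar)
                                             + kl_divergence(g_mu, g_logvar))
```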
Related papers
- Geometry-guided Feature Learning and Fusion for Indoor Scene Reconstruction [14.225228781008209]
This paper proposes a novel geometry integration mechanism for 3D scene reconstruction.
Our approach incorporates 3D geometry at three levels, i.e. feature learning, feature fusion, and network supervision.
arXiv Detail & Related papers (2024-08-28T08:02:47Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Neural Convolutional Surfaces [59.172308741945336]
This work is concerned with a representation of shapes that disentangles fine, local, and possibly repeating geometry from global, coarse structure.
We show that this approach achieves better neural shape compression than the state of the art, as well as enabling manipulation and transfer of shape details.
arXiv Detail & Related papers (2022-04-05T15:40:11Z)
- DECOR-GAN: 3D Shape Detailization by Conditional Refinement [50.8801457082181]
We introduce a deep generative network for 3D shape detailization, akin to stylization with the style being geometric details.
We demonstrate that our method can refine a coarse shape into a variety of detailed shapes with different styles.
arXiv Detail & Related papers (2020-12-16T18:52:10Z)
- RISA-Net: Rotation-Invariant Structure-Aware Network for Fine-Grained 3D Shape Retrieval [46.02391761751015]
Fine-grained 3D shape retrieval aims to retrieve 3D shapes similar to a query shape in a repository with models belonging to the same class.
We introduce a novel deep architecture, RISA-Net, which learns rotation invariant 3D shape descriptors.
Our method learns the importance of the geometric and structural information of each part when generating the final compact latent feature of a 3D shape.
arXiv Detail & Related papers (2020-10-02T13:06:12Z)
- Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
- STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image [27.885717341244014]
3D reconstruction from a single view image is a long-standing problem in computer vision.
In this paper, we propose a novel method called STD-Net to reconstruct the 3D models utilizing the mesh representation.
Experimental results on the images from ShapeNet show that our proposed STD-Net has better performance than other state-of-the-art methods on reconstructing 3D objects.
arXiv Detail & Related papers (2020-03-07T11:02:47Z)
- Unsupervised Learning of Intrinsic Structural Representation Points [50.92621061405056]
Learning structures of 3D shapes is a fundamental problem in the field of computer graphics and geometry processing.
We present a simple yet interpretable unsupervised method for learning a new structural representation in the form of 3D structure points.
arXiv Detail & Related papers (2020-03-03T17:40:00Z)