SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation
- URL: http://arxiv.org/abs/2108.04476v1
- Date: Tue, 10 Aug 2021 06:49:45 GMT
- Title: SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation
- Authors: Ruihui Li, Xianzhi Li, Ka-Hei Hui, Chi-Wing Fu
- Abstract summary: We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds.
Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details.
- Score: 50.53931728235875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SP-GAN, a new unsupervised sphere-guided generative model for
direct synthesis of 3D shapes in the form of point clouds. Compared with
existing models, SP-GAN is able to synthesize diverse and high-quality shapes
with fine details and promote controllability for part-aware shape generation
and manipulation, yet trainable without any part annotations. In SP-GAN, we
incorporate a global prior (uniform points on a sphere) to spatially guide the
generative process and attach a local prior (a random latent code) to each
sphere point to provide local details. The key insight in our design is to
disentangle the complex 3D shape generation task into a global shape modeling
and a local structure adjustment, to ease the learning process and enhance the
shape generation quality. Also, our model forms an implicit dense
correspondence between the sphere points and points in every generated shape,
enabling various forms of structure-aware shape manipulation, such as part
editing, part-wise shape interpolation, and multi-shape part composition,
beyond what existing generative models support. Experimental results, which include both
visual and quantitative evaluations, demonstrate that our model is able to
synthesize diverse point clouds with fine details and less noise, as compared
with the state-of-the-art models.
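The two priors described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' released code: it samples near-uniform points on the unit sphere with a Fibonacci lattice (the paper does not necessarily use this exact sampling scheme) and attaches one shared random latent code to every sphere point, yielding the per-point features a generator would consume.

```python
import numpy as np

def sphere_prior(n_points=2048, latent_dim=128, seed=0):
    """Sketch of SP-GAN's input priors (illustrative, not the official code):
    a global prior of uniform points on the unit sphere, plus a local prior
    formed by attaching a random latent code to each sphere point."""
    # Fibonacci lattice: near-uniform point placement on the unit sphere.
    i = np.arange(n_points)
    golden = np.pi * (3.0 - np.sqrt(5.0))        # golden angle in radians
    z = 1.0 - 2.0 * (i + 0.5) / n_points         # z strictly inside (-1, 1)
    r = np.sqrt(1.0 - z * z)                     # radius of each z-slice
    xyz = np.stack([r * np.cos(golden * i),
                    r * np.sin(golden * i), z], axis=1)

    # Local prior: one shared latent code, broadcast to every sphere point;
    # the generator would consume the concatenation [xyz | code].
    rng = np.random.default_rng(seed)
    code = rng.standard_normal(latent_dim).astype(np.float32)
    codes = np.broadcast_to(code, (n_points, latent_dim))
    return np.concatenate([xyz.astype(np.float32), codes], axis=1)

feat = sphere_prior()
print(feat.shape)  # → (2048, 131)
```

Because every generated shape is produced from the same fixed sphere points, point i of one shape corresponds to point i of another, which is the implicit dense correspondence the abstract exploits for part editing and interpolation.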
Related papers
- DetailGen3D: Generative 3D Geometry Enhancement via Data-Dependent Flow [44.72037991063735]
DetailGen3D is a generative approach specifically designed to enhance generated 3D shapes.
Our key insight is to model the coarse-to-fine transformation directly through data-dependent flows in latent space.
We introduce a token matching strategy that ensures accurate spatial correspondence during refinement.
arXiv Detail & Related papers (2024-11-25T17:08:17Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising a view-wise sampling algorithm that focuses on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
arXiv Detail & Related papers (2023-05-24T03:32:03Z)
- 3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process [32.3773514247982]
We develop a generalized 3D shape generation prior model tailored for multiple 3D tasks.
These designs jointly equip our 3D shape prior model with high-fidelity, diverse features, as well as the capability of cross-modality alignment.
arXiv Detail & Related papers (2023-03-18T12:50:29Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
We may jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
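The coarse/detail wavelet representation described in these two entries can be approximated with off-the-shelf tools. The sketch below uses PyWavelets (not the papers' implementation) to decompose a toy truncated-SDF volume with a biorthogonal wavelet into one low-frequency coarse volume plus per-level detail volumes; the wavelet choice `bior2.2` and the 32^3 grid are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def tsdf_to_wavelet(tsdf, wavelet="bior2.2", level=2):
    """Decompose a truncated-SDF volume into a coarse coefficient volume
    plus detail coefficient volumes (illustrative sketch of the compact
    wavelet representation, using PyWavelets)."""
    coeffs = pywt.wavedecn(tsdf, wavelet=wavelet, level=level)
    coarse = coeffs[0]    # low-frequency approximation volume
    details = coeffs[1:]  # per-level dicts of detail coefficient volumes
    return coarse, details

# Toy TSDF of a sphere of radius 0.5 on a 32^3 grid, truncated to [-0.1, 0.1].
g = np.linspace(-1, 1, 32)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
tsdf = np.clip(np.sqrt(x**2 + y**2 + z**2) - 0.5, -0.1, 0.1)

coarse, details = tsdf_to_wavelet(tsdf)
# Biorthogonal DWT is invertible: reconstruct and check the round trip.
recon = pywt.waverecn([coarse] + details, wavelet="bior2.2")
print(coarse.shape, np.abs(recon[:32, :32, :32] - tsdf).max())
```

The coarse volume is much smaller than the input grid, which is what makes the representation compact enough to run a generative model on, while the detail volumes retain the high-frequency surface information.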
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
- Learning to Generate 3D Shapes from a Single Example [28.707149807472685]
We present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales.
We train our generative model on a voxel pyramid of the reference shape, without the need of any external supervision or manual annotation.
The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape.
arXiv Detail & Related papers (2022-08-05T01:05:32Z)
- Cloud Sphere: A 3D Shape Representation via Progressive Deformation [21.216503294296317]
This paper is dedicated to discovering distinctive information from the shape formation process.
A Progressive Deformation-based Auto-Encoder (PDAE) is proposed to learn the stage-aware description.
Experimental results show that the proposed PDAE has the ability to reconstruct 3D shapes with high fidelity.
arXiv Detail & Related papers (2021-12-21T12:10:23Z)
- Learning to generate shape from global-local spectra [0.0]
We build our method on top of recent advances in the so-called shape-from-spectrum paradigm.
We consider the spectrum a natural, ready-to-use representation for encoding the variability of shapes.
Our results confirm the improvement of the proposed approach in comparison to existing and alternative methods.
arXiv Detail & Related papers (2021-08-04T16:39:56Z)
- Shape-Oriented Convolution Neural Network for Point Cloud Analysis [59.405388577930616]
The point cloud is a principal data structure for encoding 3D geometric information.
A shape-oriented message-passing scheme, dubbed ShapeConv, is proposed to focus on learning representations of the underlying shape formed by each point's local neighborhood.
arXiv Detail & Related papers (2020-04-20T16:11:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.