ShapeAssembly: Learning to Generate Programs for 3D Shape Structure Synthesis
- URL: http://arxiv.org/abs/2009.08026v1
- Date: Thu, 17 Sep 2020 02:26:45 GMT
- Title: ShapeAssembly: Learning to Generate Programs for 3D Shape Structure Synthesis
- Authors: R. Kenny Jones, Theresa Barton, Xianghao Xu, Kai Wang, Ellen Jiang, Paul Guerrero, Niloy J. Mitra, and Daniel Ritchie
- Abstract summary: We propose ShapeAssembly, a domain-specific "assembly-language" for 3D shape structures.
We show how to extract ShapeAssembly programs from existing shape structures in the PartNet dataset.
We evaluate our approach by comparing shapes output by our generated programs to those from other recent shape structure models.
- Score: 38.27280837835169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manually authoring 3D shapes is difficult and time consuming; generative
models of 3D shapes offer compelling alternatives. Procedural representations
are one such possibility: they offer high-quality and editable results but are
difficult to author and often produce outputs with limited diversity. On the
other extreme are deep generative models: given enough data, they can learn to
generate any class of shape but their outputs have artifacts and the
representation is not editable. In this paper, we take a step towards achieving
the best of both worlds for novel 3D shape synthesis. We propose ShapeAssembly,
a domain-specific "assembly-language" for 3D shape structures. ShapeAssembly
programs construct shapes by declaring cuboid part proxies and attaching them
to one another, in a hierarchical and symmetrical fashion. Its functions are
parameterized with free variables, so that one program structure is able to
capture a family of related shapes. We show how to extract ShapeAssembly
programs from existing shape structures in the PartNet dataset. Then we train a
deep generative model, a hierarchical sequence VAE, that learns to write novel
ShapeAssembly programs. The program captures the subset of variability that is
interpretable and editable. The deep model captures correlations across shape
collections that are hard to express procedurally. We evaluate our approach by
comparing shapes output by our generated programs to those from other recent
shape structure synthesis models. We find that our generated shapes are more
plausible and physically-valid than those of other methods. Additionally, we
assess the latent spaces of these models, and find that ours is better
structured and produces smoother interpolations. As an application, we use our
generative model and differentiable program interpreter to infer and fit shape
programs to unstructured geometry, such as point clouds.
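To make the flavor of these programs concrete, the sketch below mimics the declare-and-attach style the abstract describes: cuboid part proxies are created, attached at points given in normalized local coordinates, and mirrored to impose symmetry. This is a minimal Python re-implementation for illustration; the class, function names, signatures, and semantics are simplifications, not the paper's released ShapeAssembly grammar.

```python
import numpy as np

# Illustrative sketch only; not the released ShapeAssembly grammar.
class Cuboid:
    """Axis-aligned cuboid part proxy: a size vector and a world-space center."""
    def __init__(self, l, w, h):
        self.dims = np.array([l, w, h], dtype=float)
        self.center = np.zeros(3)  # sits at the origin until attached

    def point(self, u, v, t):
        """World position of a point given in normalized [0,1]^3 local coords."""
        return self.center + (np.array([u, v, t]) - 0.5) * self.dims

def attach(part, base, part_local, base_local):
    """Translate `part` so its local point lands on `base`'s local point."""
    part.center += base.point(*base_local) - part.point(*part_local)

def reflect(part, base, axis):
    """Copy of `part`, mirrored across the center plane of `base` along `axis`."""
    twin = Cuboid(*part.dims)
    twin.center = part.center.copy()
    twin.center[axis] = 2.0 * base.center[axis] - part.center[axis]
    return twin

# A tiny chair-like program: bounding box, seat, back, and symmetric legs.
bbox = Cuboid(1.0, 1.0, 1.0)
seat = Cuboid(0.9, 0.1, 0.9)
attach(seat, bbox, (0.5, 0.5, 0.5), (0.5, 0.55, 0.5))  # seat at mid-height
back = Cuboid(0.9, 0.5, 0.1)
attach(back, seat, (0.5, 0.0, 0.5), (0.5, 1.0, 0.1))   # back rises off the seat
leg = Cuboid(0.1, 0.5, 0.1)
attach(leg, seat, (0.5, 1.0, 0.5), (0.1, 0.0, 0.1))    # one leg under a corner
legs = [leg, reflect(leg, bbox, axis=0)]               # mirror across x...
legs += [reflect(l, bbox, axis=2) for l in legs]       # ...then across z
```

Every numeric literal above is a free parameter, so re-sampling those continuous values yields a family of related chairs that share one program structure; the hierarchical sequence VAE is what learns to write such token sequences.

The program-fitting application depends on the interpreter being differentiable in the program's continuous parameters. As a hedged sketch of that idea (the paper's interpreter and reconstruction loss are more involved), one can optimize cuboid parameters against a target point cloud with autograd and a Chamfer-style loss:

```python
import torch

def corners(center, dims):
    """The 8 corners of an axis-aligned cuboid, differentiable in its params."""
    signs = torch.tensor([[sx, sy, sz]
                          for sx in (-0.5, 0.5)
                          for sy in (-0.5, 0.5)
                          for sz in (-0.5, 0.5)])
    return center + signs * dims

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets of shape (N,3) and (M,3)."""
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

target = torch.rand(256, 3)  # stand-in for an unstructured point cloud

# Continuous parameters of a one-cuboid "program"; the structure stays fixed.
center = torch.zeros(3, requires_grad=True)
dims = torch.ones(3, requires_grad=True)
opt = torch.optim.Adam([center, dims], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = chamfer(corners(center, dims), target)
    loss.backward()  # gradients flow through the interpreter sketch
    opt.step()
```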
Related papers
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Make-A-Shape: a Ten-Million-scale 3D Shape Model [52.701745578415796]
This paper introduces Make-A-Shape, a new 3D generative model designed for efficient training on a vast scale.
We first introduce a wavelet-tree representation that compactly encodes shapes via a subband coefficient filtering scheme.
We then derive a subband-adaptive training strategy so that the model effectively learns to generate both coarse and detail wavelet coefficients.
arXiv Detail & Related papers (2024-01-20T00:21:58Z)
- FullFormer: Generating Shapes Inside Shapes [9.195909458772187]
We present the first implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details.
Our model uses unsigned distance fields to represent nested 3D surfaces, allowing it to learn from non-watertight mesh data.
We demonstrate that our model achieves state-of-the-art point cloud generation results on the popular 'Cars', 'Planes', and 'Chairs' classes of the ShapeNet dataset.
arXiv Detail & Related papers (2023-03-20T16:19:23Z)
- Learning to Generate 3D Shapes from a Single Example [28.707149807472685]
We present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales.
We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation.
The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape.
arXiv Detail & Related papers (2022-08-05T01:05:32Z)
- ShapeCrafter: A Recursive Text-Conditioned 3D Shape Generation Model [16.431391515731367]
Existing methods to generate text-conditioned 3D shapes consume an entire text prompt to generate a 3D shape in a single step.
We introduce a method that generates a 3D shape distribution conditioned on an initial phrase and gradually evolves it as more phrases are added.
Results show that our method can generate shapes consistent with text descriptions, and shapes evolve gradually as more phrases are added.
arXiv Detail & Related papers (2022-07-19T17:59:01Z)
- Towards Implicit Text-Guided 3D Shape Generation [81.22491096132507]
This work explores the challenging task of generating 3D shapes from text.
We propose a new approach for text-guided 3D shape generation, capable of producing high-fidelity shapes with colors that match the given text description.
arXiv Detail & Related papers (2022-03-28T10:20:03Z)
- LSD-StructureNet: Modeling Levels of Structural Detail in 3D Part Hierarchies [5.173975064973631]
We introduce LSD-StructureNet, an augmentation to the StructureNet architecture that enables re-generation of parts.
We evaluate LSD-StructureNet on the PartNet dataset, the largest dataset of 3D shapes represented by hierarchies of parts.
arXiv Detail & Related papers (2021-08-18T15:05:06Z)
- SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation [50.53931728235875]
We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds.
Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details.
arXiv Detail & Related papers (2021-08-10T06:49:45Z)
- DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation [98.96086261213578]
We introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes.
This supports a range of novel shape generation applications with disentangled control, such as varying the structure (or geometry) while keeping the geometry (or structure) unchanged.
Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2020-08-12T17:06:51Z)
- Learning Generative Models of Shape Handles [43.41382075567803]
We present a generative model to synthesize 3D shapes as sets of handles.
Our model can generate handle sets with varying cardinality and different types of handles.
We show that the resulting shape representations are intuitive and achieve higher quality than previous state-of-the-art methods.
arXiv Detail & Related papers (2020-04-06T22:35:55Z)