FullFormer: Generating Shapes Inside Shapes
- URL: http://arxiv.org/abs/2303.11235v1
- Date: Mon, 20 Mar 2023 16:19:23 GMT
- Title: FullFormer: Generating Shapes Inside Shapes
- Authors: Tejaswini Medi, Jawad Tayyub, Muhammad Sarmad, Frank Lindseth and Margret Keuper
- Abstract summary: We present the first implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details.
Our model uses unsigned distance fields to represent nested 3D surfaces, allowing learning from non-watertight mesh data.
We demonstrate that our model achieves state-of-the-art point cloud generation results on popular classes of 'Cars', 'Planes', and 'Chairs' of the ShapeNet dataset.
- Score: 9.195909458772187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit generative models have been widely employed to model 3D data and
have recently proven to be successful in encoding and generating high-quality
3D shapes. This work builds upon these models and alleviates current
limitations by presenting the first implicit generative model that facilitates
the generation of complex 3D shapes with rich internal geometric details. To
achieve this, our model uses unsigned distance fields to represent nested 3D
surfaces, allowing learning from non-watertight mesh data. We propose a
transformer-based autoregressive model for 3D shape generation that leverages
context-rich tokens from vector quantized shape embeddings. The generated
tokens are decoded into an unsigned distance field which is rendered into a
novel 3D shape exhibiting a rich internal structure. We demonstrate that our
model achieves state-of-the-art point cloud generation results on popular
classes of 'Cars', 'Planes', and 'Chairs' of the ShapeNet dataset.
Additionally, we curate a dataset that exclusively comprises shapes with
realistic internal details from the 'Cars' class of ShapeNet and demonstrate
our method's efficacy in generating these shapes with internal geometry.
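To make the described pipeline concrete, below is a minimal PyTorch sketch of the three stages named in the abstract: an autoregressive transformer prior over quantized shape tokens, a decoder from tokens to an unsigned distance field (UDF), and point extraction by iteratively projecting samples along the negative UDF gradient. All module names, layer sizes, the pooled shape code, and the start token are illustrative assumptions, not the authors' architecture; only the overall tokens-to-UDF-to-points flow follows the abstract.

```python
# A minimal sketch, NOT the authors' implementation: (1) a causal transformer
# samples discrete shape tokens from an assumed VQ codebook, (2) a small MLP
# decodes (pooled token code, query point) into an unsigned distance, and
# (3) points are pulled onto the surface with the standard UDF projection
# p <- p - UDF(p) * grad UDF(p) / |grad UDF(p)|.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ_LEN, DIM = 512, 128, 256   # assumed codebook and sequence sizes

class TokenTransformer(nn.Module):
    """GPT-style autoregressive prior over quantized shape tokens."""
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, DIM)
        self.pos = nn.Embedding(SEQ_LEN, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, idx):                          # idx: (B, T) token ids
        T = idx.shape[1]
        x = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        return self.head(self.blocks(x, mask=mask))  # (B, T, VOCAB) logits

@torch.no_grad()
def sample_tokens(model, n=SEQ_LEN):
    """Draw one token sequence left to right (start token 0 is assumed)."""
    idx = torch.zeros(1, 1, dtype=torch.long)
    for _ in range(n - 1):
        probs = model(idx)[:, -1].softmax(-1)
        idx = torch.cat([idx, torch.multinomial(probs, 1)], dim=1)
    return idx

class UDFDecoder(nn.Module):
    """Maps (token sequence, query points) to unsigned distances (>= 0)."""
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, DIM)
        self.mlp = nn.Sequential(
            nn.Linear(DIM + 3, DIM), nn.ReLU(),
            nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, 1))

    def forward(self, idx, pts):                     # pts: (N, 3)
        z = self.tok(idx).mean(dim=1)                # crude pooled shape code
        z = z.expand(pts.shape[0], -1)
        return self.mlp(torch.cat([z, pts], -1)).abs().squeeze(-1)

def render_point_cloud(decoder, idx, n_pts=2048, steps=5):
    """Project uniform samples onto the surface via the UDF gradient."""
    pts = torch.rand(n_pts, 3) * 2 - 1               # samples in [-1, 1]^3
    for _ in range(steps):
        pts = pts.detach().requires_grad_(True)
        d = decoder(idx, pts)
        (grad,) = torch.autograd.grad(d.sum(), pts)
        pts = pts - d.unsqueeze(-1) * F.normalize(grad, dim=-1)
    return pts.detach()

prior, decoder = TokenTransformer(), UDFDecoder()
tokens = sample_tokens(prior)         # untrained networks -> a random "shape"
cloud = render_point_cloud(decoder, tokens)
print(cloud.shape)                    # torch.Size([2048, 3])
```

Because the distance field is unsigned, the projection step needs no inside/outside decision, which is what lets the representation capture non-watertight meshes and nested interior surfaces.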
Related papers
- DetailGen3D: Generative 3D Geometry Enhancement via Data-Dependent Flow [44.72037991063735]
DetailGen3D is a generative approach specifically designed to enhance generated 3D shapes.
Our key insight is to model the coarse-to-fine transformation directly through data-dependent flows in latent space.
We introduce a token matching strategy that ensures accurate spatial correspondence during refinement.
arXiv Detail & Related papers (2024-11-25T17:08:17Z)
- MeshXL: Neural Coordinate Field for Generative 3D Foundation Models [51.1972329762843]
We present a family of generative pre-trained auto-regressive models that addresses 3D mesh generation with modern large language model approaches.
MeshXL is able to generate high-quality 3D meshes and can also serve as a foundation model for various downstream applications.
arXiv Detail & Related papers (2024-05-31T14:35:35Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to the 3D domain and strengthen 3D shape generation by improving their capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- 3D Semantic Subspace Traverser: Empowering 3D Generative Model with Shape Editing Capability [13.041974495083197]
Previous studies on 3D shape generation have focused on shape quality and structure, with little or no consideration of semantic information.
We propose a novel semantic generative model named 3D Semantic Subspace Traverser.
Our method can produce plausible shapes with complex structures and enable the editing of semantic attributes.
arXiv Detail & Related papers (2023-07-26T09:04:27Z)
- 3D VR Sketch Guided 3D Shape Prototyping and Exploration [108.6809158245037]
We propose a 3D shape generation network that takes a 3D VR sketch as a condition.
We assume that sketches are created by novices without art training.
Our method creates multiple 3D shapes that align with the original sketch's structure.
arXiv Detail & Related papers (2023-06-19T10:27:24Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- Learning to Generate 3D Shapes from a Single Example [28.707149807472685]
We present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales.
We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation.
The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape.
arXiv Detail & Related papers (2022-08-05T01:05:32Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Discrete Point Flow Networks for Efficient Point Cloud Generation [36.03093265136374]
Generative models have proven effective at modeling 3D shapes and their statistical variations.
We introduce a latent variable model that builds on normalizing flows to generate 3D point clouds of an arbitrary size.
For single-view shape reconstruction we also obtain results on par with state-of-the-art voxel, point cloud, and mesh-based methods.
arXiv Detail & Related papers (2020-07-20T14:48:00Z)
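The final entry above (Discrete Point Flow Networks) builds on normalizing flows: each 3D point is an invertible transform of Gaussian noise, conditioned on a per-shape latent, so a cloud of any size can be sampled. Below is a minimal, hypothetical affine-coupling sketch of that idea; the layer sizes, axis permutations, and latent dimension are assumptions, not the paper's model.

```python
# A minimal, hypothetical sketch of the normalizing-flow idea, NOT the
# paper's model: points are drawn i.i.d. given a shape latent, so the
# number of generated points is a free choice at sampling time.
import torch
import torch.nn as nn

LATENT = 64

class AffineCoupling(nn.Module):
    """Scale-and-shift 2 coordinates as a function of the 3rd + the latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + LATENT, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, x, z):                      # x: (N, 3), z: (N, LATENT)
        xa, xb = x[:, :1], x[:, 1:]               # pass-through / transformed
        s, t = self.net(torch.cat([xa, z], -1)).chunk(2, dim=-1)
        return torch.cat([xa, xb * torch.exp(torch.tanh(s)) + t], -1)

class PointFlow(nn.Module):
    """Stack of couplings with axis rolls so every coordinate gets updated."""
    def __init__(self, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling() for _ in range(n_layers))

    def forward(self, noise, z):
        x = noise
        for i, layer in enumerate(self.layers):
            x = layer(x[:, torch.roll(torch.arange(3), i)], z)
        return x

flow = PointFlow()
z = torch.randn(1, LATENT)            # one (untrained) shape latent
n = 4096                              # arbitrary cloud size
cloud = flow(torch.randn(n, 3), z.expand(n, -1))
print(cloud.shape)                    # torch.Size([4096, 3])
```

Each layer is a permutation followed by an affine coupling, both invertible, so the stack is a valid flow; training would maximize exact log-likelihood via the change-of-variables formula, with the tanh-bounded log-scales keeping the Jacobian term cheap and stable.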