DetailGen3D: Generative 3D Geometry Enhancement via Data-Dependent Flow
- URL: http://arxiv.org/abs/2411.16820v1
- Date: Mon, 25 Nov 2024 17:08:17 GMT
- Title: DetailGen3D: Generative 3D Geometry Enhancement via Data-Dependent Flow
- Authors: Ken Deng, Yuanchen Guo, Jingxiang Sun, Zixin Zou, Yangguang Li, Xin Cai, Yanpei Cao, Yebin Liu, Ding Liang
- Abstract summary: DetailGen3D is a generative approach specifically designed to enhance generated 3D shapes.
Our key insight is to model the coarse-to-fine transformation directly through data-dependent flows in latent space.
We introduce a token matching strategy that ensures accurate spatial correspondence during refinement.
- Score: 44.72037991063735
- License:
- Abstract: Modern 3D generation methods can rapidly create shapes from sparse or single views, but their outputs often lack geometric detail due to computational constraints. We present DetailGen3D, a generative approach specifically designed to enhance these generated 3D shapes. Our key insight is to model the coarse-to-fine transformation directly through data-dependent flows in latent space, avoiding the computational overhead of large-scale 3D generative models. We introduce a token matching strategy that ensures accurate spatial correspondence during refinement, enabling local detail synthesis while preserving global structure. By carefully designing our training data to match the characteristics of synthesized coarse shapes, our method can effectively enhance shapes produced by various 3D generation and reconstruction approaches, from single-view to sparse multi-view inputs. Extensive experiments demonstrate that DetailGen3D achieves high-fidelity geometric detail synthesis while maintaining efficiency in training.
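The abstract's core mechanism can be sketched as flow matching between paired latents: instead of transporting noise to data, the flow transports the coarse shape's latent tokens toward the detailed shape's tokens, with a matching step keeping the two token sets spatially aligned. The sketch below is illustrative only; `velocity_net`, the nearest-neighbour `match_tokens` helper, and the `(batch, tokens, dim)` latent layout are assumptions for the example and not DetailGen3D's actual implementation.

```python
# Illustrative sketch only (PyTorch): a data-dependent flow that transports coarse
# shape latents toward detailed ones, with a toy token-matching step for spatial
# correspondence. `velocity_net` and the latent layout (batch, tokens, dim) are
# assumptions for this example, not DetailGen3D's actual interfaces.
import torch
import torch.nn.functional as F


def match_tokens(z_coarse: torch.Tensor, z_fine: torch.Tensor) -> torch.Tensor:
    """Pair each coarse token with its most similar fine token (cosine similarity),
    so every coarse token gets a spatially corresponding refinement target."""
    sim = F.normalize(z_coarse, dim=-1) @ F.normalize(z_fine, dim=-1).transpose(1, 2)
    idx = sim.argmax(dim=-1)                                   # (B, N) matched indices
    return torch.gather(z_fine, 1, idx.unsqueeze(-1).expand(-1, -1, z_fine.shape[-1]))


def flow_matching_step(velocity_net, z_coarse, z_fine, optimizer):
    """One training step: interpolate along the coarse-to-fine path at a random time
    and regress the straight-line velocity that carries coarse latents to fine ones."""
    z_target = match_tokens(z_coarse, z_fine)                  # (B, N, D) refinement targets
    t = torch.rand(z_coarse.shape[0], 1, 1, device=z_coarse.device)
    x_t = (1.0 - t) * z_coarse + t * z_target                  # data-dependent interpolation
    v_pred = velocity_net(x_t, t.flatten())                    # predict velocity at time t
    loss = F.mse_loss(v_pred, z_target - z_coarse)             # target: constant velocity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time one would integrate the learned velocity field from the coarse latent at t = 0 to t = 1 (for example with a few Euler steps) and decode the result, which mirrors the abstract's claim of synthesizing local detail while preserving global structure.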
Related papers
- DECOLLAGE: 3D Detailization by Controllable, Localized, and Learned Geometry Enhancement [38.719572669042925]
We present a 3D modeling method which enables end-users to refine or detailize 3D shapes using machine learning.
We show that our ability to localize details enables novel interactive creative workflows and applications.
arXiv Detail & Related papers (2024-09-10T00:51:49Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to the 3D domain and pursue stronger 3D shape generation by improving their capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed Argus-3D, a model with 3.6 billion trainable parameters, making it the largest 3D shape generation model to date.
arXiv Detail & Related papers (2023-06-20T13:01:19Z)
- FullFormer: Generating Shapes Inside Shapes [9.195909458772187]
We present the first implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details.
Our model uses unsigned distance fields to represent nested 3D surfaces allowing learning from non-watertight mesh data.
We demonstrate that our model achieves state-of-the-art point cloud generation results on the popular 'Cars', 'Planes', and 'Chairs' classes of the ShapeNet dataset.
arXiv Detail & Related papers (2023-03-20T16:19:23Z)
- Learning to Generate 3D Shapes from a Single Example [28.707149807472685]
We present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales.
We train our generative model on a voxel pyramid of the reference shape, without the need of any external supervision or manual annotation.
The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape.
arXiv Detail & Related papers (2022-08-05T01:05:32Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- GLASS: Geometric Latent Augmentation for Shape Spaces [28.533018136138825]
We use geometrically motivated energies to augment and thus boost a sparse collection of example (training) models.
We analyze the Hessian of the as-rigid-as-possible (ARAP) energy to sample from and project to the underlying (local) shape space.
We present multiple examples of interesting and meaningful shape variations even when starting from as few as 3-10 training shapes.
arXiv Detail & Related papers (2021-08-06T17:56:23Z)
- DECOR-GAN: 3D Shape Detailization by Conditional Refinement [50.8801457082181]
We introduce a deep generative network for 3D shape detailization, akin to stylization with the style being geometric details.
We demonstrate that our method can refine a coarse shape into a variety of detailed shapes with different styles.
arXiv Detail & Related papers (2020-12-16T18:52:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.