GLASS: Geometric Latent Augmentation for Shape Spaces
- URL: http://arxiv.org/abs/2108.03225v2
- Date: Mon, 9 Aug 2021 17:34:42 GMT
- Title: GLASS: Geometric Latent Augmentation for Shape Spaces
- Authors: Sanjeev Muralikrishnan, Siddhartha Chaudhuri, Noam Aigerman, Vladimir Kim, Matthew Fisher and Niloy Mitra
- Abstract summary: We use geometrically motivated energies to augment and thus boost a sparse collection of example (training) models.
We analyze the Hessian of the as-rigid-as-possible (ARAP) energy to sample from and project to the underlying (local) shape space.
We present multiple examples of interesting and meaningful shape variations even when starting from as few as 3-10 training shapes.
- Score: 28.533018136138825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the problem of training generative models on a very sparse
collection of 3D models. We use geometrically motivated energies to augment and
thus boost a sparse collection of example (training) models. We analyze the
Hessian of the as-rigid-as-possible (ARAP) energy to sample from and project to
the underlying (local) shape space, and use the augmented dataset to train a
variational autoencoder (VAE). We iterate the process of building latent spaces
of VAE and augmenting the associated dataset, to progressively reveal a richer
and more expressive generative space for creating geometrically and
semantically valid samples. Our framework allows us to train generative 3D
models even with a small set of good quality 3D models, which are typically
hard to curate. We extensively evaluate our method against a set of strong
baselines, provide ablation studies and demonstrate application towards
establishing shape correspondences. We present multiple examples of interesting
and meaningful shape variations even when starting from as few as 3-10 training
shapes.
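To make the augmentation step concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of perturbing a shape along the low-energy eigen-directions of a deformation-energy Hessian. A simple graph-Laplacian spring energy stands in for the ARAP energy, and the projection back onto the local shape space is omitted; function and variable names are illustrative assumptions.
```python
import numpy as np

def spring_hessian(verts, edges):
    """Hessian of the quadratic energy 0.5 * sum_{(i,j)} ||v_i - v_j||^2.
    A simple stand-in for the ARAP Hessian, with the same 3N x 3N block layout."""
    n = verts.shape[0]
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return np.kron(L, np.eye(3))  # acts on flattened (x, y, z) coordinates

def sample_low_energy_deformations(verts, edges, n_samples=5, n_modes=6,
                                    scale=0.05, seed=0):
    """Draw random perturbations concentrated in the lowest-energy eigen-directions."""
    rng = np.random.default_rng(seed)
    eigvals, eigvecs = np.linalg.eigh(spring_hessian(verts, edges))
    keep = slice(3, 3 + n_modes)  # skip the 3 zero modes (rigid translations)
    basis, energies = eigvecs[:, keep], eigvals[keep]
    samples = []
    for _ in range(n_samples):
        coeffs = rng.normal(size=basis.shape[1]) / np.sqrt(energies)  # favour low energy
        samples.append(verts + scale * (basis @ coeffs).reshape(-1, 3))
    return samples

# Tiny usage example: perturb a unit tetrahedron.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
augmented = sample_low_energy_deformations(verts, edges, n_samples=3)
print(len(augmented), augmented[0].shape)  # 3 (4, 3)
```
In the full method, each augmented copy would additionally be projected back onto the local shape space by minimizing the ARAP energy, and the enlarged dataset would then be used to retrain the VAE before the next augmentation round.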
Related papers
- Outdoor Scene Extrapolation with Hierarchical Generative Cellular Automata [70.9375320609781]
We aim to generate fine-grained 3D geometry from large-scale sparse LiDAR scans, abundantly captured by autonomous vehicles (AVs).
We propose hierarchical Generative Cellular Automata (hGCA), a spatially scalable 3D generative model that grows geometry with local kernels in a coarse-to-fine manner, equipped with a light-weight planner to induce global consistency.
arXiv Detail & Related papers (2024-06-12T14:56:56Z)
- ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance [76.7746870349809]
We present ComboVerse, a 3D generation framework that produces high-quality 3D assets with complex compositions by learning to combine multiple models.
Our proposed framework emphasizes spatial alignment of objects, compared with standard score distillation sampling.
arXiv Detail & Related papers (2024-03-19T03:39:43Z) - Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z) - Explorable Mesh Deformation Subspaces from Unstructured Generative
Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
arXiv Detail & Related papers (2023-10-11T18:53:57Z)
- Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed Argus-3D, a model with 3.6 billion trainable parameters, establishing it as the largest 3D shape generation model to date.
arXiv Detail & Related papers (2023-06-20T13:01:19Z)
- Few-shot 3D Shape Generation [18.532357455856836]
We make the first attempt to realize few-shot 3D shape generation by adapting generative models pre-trained on large source domains to target domains using limited data.
Our approach only needs the silhouettes of few-shot target samples as training data to learn target geometry distributions.
arXiv Detail & Related papers (2023-05-19T13:30:10Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- Learning to Generate 3D Shapes from a Single Example [28.707149807472685]
We present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales.
We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation.
The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape.
arXiv Detail & Related papers (2022-08-05T01:05:32Z)
- Discrete Point Flow Networks for Efficient Point Cloud Generation [36.03093265136374]
Generative models have proven effective at modeling 3D shapes and their statistical variations.
We introduce a latent variable model that builds on normalizing flows to generate 3D point clouds of an arbitrary size.
For single-view shape reconstruction we also obtain results on par with state-of-the-art voxel, point cloud, and mesh-based methods.
arXiv Detail & Related papers (2020-07-20T14:48:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.