Explorable Mesh Deformation Subspaces from Unstructured Generative Models
- URL: http://arxiv.org/abs/2310.07814v1
- Date: Wed, 11 Oct 2023 18:53:57 GMT
- Title: Explorable Mesh Deformation Subspaces from Unstructured Generative Models
- Authors: Arman Maesumi, Paul Guerrero, Vladimir G. Kim, Matthew Fisher, Siddhartha Chaudhuri, Noam Aigerman, Daniel Ritchie
- Abstract summary: Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
- Score: 53.23510438769862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exploring variations of 3D shapes is a time-consuming process in traditional
3D modeling tools. Deep generative models of 3D shapes often feature continuous
latent spaces that can, in principle, be used to explore potential variations
starting from a set of input shapes. In practice, doing so can be problematic:
latent spaces are high dimensional and hard to visualize, contain shapes that
are not relevant to the input shapes, and linear paths through them often lead
to sub-optimal shape transitions. Furthermore, one would ideally be able to
explore variations in the original high-quality meshes used to train the
generative model, not its lower-quality output geometry. In this paper, we
present a method to explore variations among a given set of landmark shapes by
constructing a mapping from an easily-navigable 2D exploration space to a
subspace of a pre-trained generative model. We first describe how to find a
mapping that spans the set of input landmark shapes and exhibits smooth
variations between them. We then show how to turn the variations in this
subspace into deformation fields, to transfer those variations to high-quality
meshes for the landmark shapes. Our results show that our method can produce
visually-pleasing and easily-navigable 2D exploration spaces for several
different shape categories, especially as compared to prior work on learning
deformation spaces for 3D shapes.
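To make the two-step pipeline from the abstract concrete, below is a minimal, hypothetical sketch in PyTorch: a small MLP maps a 2D exploration coordinate to a latent code of a pre-trained generator, trained with a landmark term (the 2D anchors must reproduce the landmark shapes' latent codes) and a smoothness term (small 2D moves should give small latent moves); a latent variation is then applied to a high-quality landmark mesh as per-vertex offsets rather than using the generator's output geometry. All names and sizes (`ExplorationMap`, `LATENT_DIM`, `deform_landmark_mesh`) are illustrative assumptions, not the paper's released code.

```python
# Hypothetical sketch of the pipeline described in the abstract; not the
# authors' implementation. Assumes latent codes for the landmark shapes have
# already been recovered from the pre-trained generator (e.g. by inversion).
import torch
import torch.nn as nn

LATENT_DIM = 256  # assumed latent size of the pre-trained generative model


class ExplorationMap(nn.Module):
    """Maps a 2D exploration coordinate u to a latent code z of the generator."""

    def __init__(self, latent_dim: int = LATENT_DIM, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        return self.net(u)


def mapping_loss(f: ExplorationMap,
                 landmark_uv: torch.Tensor,  # (N, 2) anchor positions in [0, 1]^2
                 landmark_z: torch.Tensor,   # (N, LATENT_DIM) landmark latent codes
                 smooth_weight: float = 0.1) -> torch.Tensor:
    # Spanning term: the 2D anchors must map onto the landmark latent codes.
    span = ((f(landmark_uv) - landmark_z) ** 2).mean()
    # Smoothness term: perturbing a random 2D point slightly should only
    # perturb its latent code slightly, discouraging abrupt transitions.
    u = torch.rand(64, 2)
    du = 1e-2 * torch.randn_like(u)
    smooth = ((f(u + du) - f(u)) ** 2).mean()
    return span + smooth_weight * smooth


def deform_landmark_mesh(vertices: torch.Tensor,
                         offsets: torch.Tensor) -> torch.Tensor:
    """Apply a deformation field (per-vertex offsets derived from a latent
    variation) to a high-quality landmark mesh, instead of exporting the
    generator's own lower-quality output geometry."""
    return vertices + offsets
```

The actual method presumably optimizes richer objectives for both stages; this sketch only illustrates the two roles the abstract names, spanning the landmark set from a 2D space and transferring latent variations to the original meshes as deformations.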
Related papers
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Spectral Meets Spatial: Harmonising 3D Shape Matching and Interpolation [50.376243444909136]
We present a unified framework to predict both point-wise correspondences and shape interpolation between 3D shapes.
We combine the deep functional map framework with classical surface deformation models to map shapes in both spectral and spatial domains.
arXiv Detail & Related papers (2024-02-29T07:26:23Z)
- Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation [47.945556996219295]
We present a novel alignment-before-generation approach to generate 3D shapes based on 2D images or texts.
Our framework comprises two models: a Shape-Image-Text-Aligned Variational Auto-Encoder (SITA-VAE) and a conditional Aligned Shape Latent Diffusion Model (ASLDM).
arXiv Detail & Related papers (2023-06-29T17:17:57Z)
- Learning to Generate 3D Shapes from a Single Example [28.707149807472685]
We present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales.
We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation.
The resulting shapes exhibit variations across different scales while retaining the global structure of the reference shape.
arXiv Detail & Related papers (2022-08-05T01:05:32Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolutional network.
Our model is robust to the quality of the initial mesh and to errors in camera pose, and can be combined with a differentiable renderer for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In experiments, we present results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- Disentangling Geometric Deformation Spaces in Generative Latent Shape Models [5.582957809895198]
A complete representation of 3D objects requires characterizing the space of deformations in an interpretable manner.
We improve on a prior generative model of disentanglement for 3D shapes, in which object geometry is factorized into rigid orientation, non-rigid pose, and intrinsic shape.
The resulting model can be trained from raw 3D shapes, without correspondences, labels, or even rigid alignment.
arXiv Detail & Related papers (2021-02-27T06:54:31Z)
- Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence [30.849927968528238]
We propose a novel Deformed Implicit Field representation for modeling 3D shapes of a category.
Our neural network, dubbed DIF-Net, jointly learns a shape latent space and the per-shape deformation fields for 3D objects belonging to a category.
Experiments show that DIF-Net not only produces high-fidelity 3D shapes but also builds high-quality dense correspondences across different shapes.
arXiv Detail & Related papers (2020-11-27T10:45:26Z)
- ShapeFlow: Learnable Deformations Among 3D Shapes [28.854946339507123]
We present a flow-based model for learning a deformation space for entire classes of 3D shapes with large intra-class variations.
ShapeFlow allows learning a multi-template deformation space that is agnostic to shape topology, yet preserves fine geometric details.
arXiv Detail & Related papers (2020-06-14T19:03:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.