Conformal Generative Modeling on Triangulated Surfaces
- URL: http://arxiv.org/abs/2303.10251v1
- Date: Fri, 17 Mar 2023 21:06:47 GMT
- Title: Conformal Generative Modeling on Triangulated Surfaces
- Authors: Victor Dorobantu, Charlotte Borcherds, Yisong Yue
- Abstract summary: We propose conformal generative modeling, a framework for generative modeling on 2D surfaces approximated by discrete triangle meshes.
Our approach leverages advances in discrete conformal geometry to develop a map from a source triangle mesh to a target triangle mesh of a simple manifold such as a sphere.
- Score: 31.870141076085716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose conformal generative modeling, a framework for generative modeling
on 2D surfaces approximated by discrete triangle meshes. Our approach leverages
advances in discrete conformal geometry to develop a map from a source triangle
mesh to a target triangle mesh of a simple manifold such as a sphere. After
accounting for errors due to the mesh discretization, we can use any generative
modeling approach developed for simple manifolds as a plug-and-play subroutine.
We demonstrate our framework on multiple complicated manifolds and multiple
generative modeling subroutines, where we show that our approach can learn good
estimates of distributions on meshes from samples, and can also learn
simultaneously from multiple distinct meshes of the same underlying manifold.
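The sketch below illustrates the plug-and-play structure described in the abstract, under stated stand-ins: a naive radial projection replaces the discrete conformal map to the sphere, a von Mises-Fisher fit is one arbitrary choice of "generative model on a simple manifold", and a nearest-vertex lookup replaces a proper barycentric pullback. All names are ours, and the discretization-error correction mentioned in the abstract is omitted entirely.

```python
# Minimal sketch (not the authors' code): plug-and-play generative modeling on a surface
# by pushing samples to the sphere, fitting a spherical model, sampling, and pulling back.
# The radial projection below is NOT conformal; it is only a placeholder for a precomputed
# discrete conformal map.
import numpy as np

rng = np.random.default_rng(0)

# Toy "source mesh" vertices: a bumpy star-shaped surface around the origin.
n_vertices = 500
dirs = rng.normal(size=(n_vertices, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
radii = 1.0 + 0.3 * np.sin(4 * dirs[:, 0])          # bumpy radius field
source_vertices = radii[:, None] * dirs

def to_sphere(points):
    """Placeholder for the discrete conformal map: radial projection to the unit sphere."""
    return points / np.linalg.norm(points, axis=1, keepdims=True)

def fit_vmf(samples_on_sphere):
    """Fit a von Mises-Fisher distribution (one simple 'generative model on the sphere')."""
    mean = samples_on_sphere.mean(axis=0)
    r_bar = np.linalg.norm(mean)
    mu = mean / r_bar
    kappa = r_bar * (3 - r_bar**2) / (1 - r_bar**2)  # standard 3D approximation
    return mu, kappa

def sample_vmf(mu, kappa, n, rng):
    """Draw rough vMF-like samples by perturbing mu and renormalizing (toy sampler)."""
    samples = mu[None, :] + rng.normal(scale=1.0 / np.sqrt(kappa), size=(n, 3))
    return samples / np.linalg.norm(samples, axis=1, keepdims=True)

# 1) Observed samples concentrated on one region of the source surface.
obs = source_vertices[source_vertices[:, 2] > 0.5]

# 2) Push samples to the sphere, fit the spherical model, and sample from it.
mu, kappa = fit_vmf(to_sphere(obs))
new_sphere_points = sample_vmf(mu, kappa, 200, rng)

# 3) Pull samples back to the surface by nearest projected vertex (stand-in for the
#    barycentric pullback through the conformal map).
proj = to_sphere(source_vertices)
nearest = np.argmax(new_sphere_points @ proj.T, axis=1)
new_surface_samples = source_vertices[nearest]
print(new_surface_samples.shape)
```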
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- What Secrets Do Your Manifolds Hold? Understanding the Local Geometry of Generative Models [17.273596999339077]
We study the local geometry of the learned manifold and its relationship to generation outcomes for a wide range of generative models.
We provide quantitative and qualitative evidence showing that for a given latent-image pair, the local descriptors are indicative of generation aesthetics, diversity, and memorization by the generative model.
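As a rough illustration of what "local geometry of the learned manifold" can mean, the toy below probes a fixed random decoder through finite-difference Jacobians at a latent point and reports generic descriptors (local scale, anisotropy, effective rank). Both the toy generator and these particular descriptors are assumptions for illustration, not the models or descriptors studied in the paper.

```python
# Hedged sketch: probe local geometry of a toy generator via its Jacobian at a latent point.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 8)), rng.normal(size=64)
W2, b2 = rng.normal(size=(32, 64)), rng.normal(size=32)

def generator(z):
    """Toy generator: latent z in R^8 -> output features in R^32 (random fixed MLP)."""
    h = np.tanh(W1 @ z + b1)
    return W2 @ h + b2

def local_descriptors(z, eps=1e-4):
    """Finite-difference Jacobian of the generator and summaries of its singular values."""
    d = z.shape[0]
    J = np.stack([(generator(z + eps * np.eye(d)[i]) - generator(z - eps * np.eye(d)[i]))
                  / (2 * eps) for i in range(d)], axis=1)
    s = np.linalg.svd(J, compute_uv=False)
    return {"scale": s.prod() ** (1.0 / len(s)),   # geometric mean of singular values
            "anisotropy": s[0] / s[-1],            # conditioning of the local map
            "effective_rank": int((s > 1e-3 * s[0]).sum())}

z0 = rng.normal(size=8)
print(local_descriptors(z0))
```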
arXiv Detail & Related papers (2024-08-15T17:59:06Z)
- Topology-Aware Latent Diffusion for 3D Shape Generation [20.358373670117537]
We introduce a new generative model that combines latent diffusion with persistent homology to create 3D shapes with high diversity.
Our method involves representing 3D shapes as implicit fields, then employing persistent homology to extract topological features.
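The paper extracts topological features from implicit shape fields with persistent homology. As a simplified stand-in, the toy below tracks only the number of connected components (0-dimensional topology) of sublevel sets of a signed distance field across a sweep of thresholds; a real pipeline would use a TDA library (e.g. GUDHI) to compute full persistence diagrams.

```python
# Hedged sketch: crude topological feature (component count) of an implicit shape field.
import numpy as np
from scipy import ndimage

# Signed distance field of two spheres on a 48^3 grid (a shape with two components).
n = 48
ax = np.linspace(-1.5, 1.5, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.minimum(np.sqrt((x - 0.6) ** 2 + y**2 + z**2) - 0.4,
                 np.sqrt((x + 0.6) ** 2 + y**2 + z**2) - 0.4)

# Sweep sublevel sets {sdf <= t}: as t grows, the two components eventually merge.
for t in (-0.2, 0.0, 0.3):
    occupied = sdf <= t
    _, num_components = ndimage.label(occupied)
    print(f"threshold {t:+.1f}: {num_components} connected component(s)")
```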
arXiv Detail & Related papers (2024-01-31T05:13:53Z)
- PolyDiff: Generating 3D Polygonal Meshes with Diffusion Models [15.846449180313778]
PolyDiff is the first diffusion-based approach capable of directly generating realistic and diverse 3D polygonal meshes.
Our model is capable of producing high-quality 3D polygonal meshes, ready for integration into downstream 3D applications.
arXiv Detail & Related papers (2023-12-18T18:19:26Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- MeshDiffusion: Score-based Generative 3D Mesh Modeling [68.40770889259143]
We consider the task of generating realistic 3D shapes for automatic scene generation and physical simulation.
We take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes.
Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization.
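MeshDiffusion trains a diffusion model directly on a tetrahedral-grid parametrization (per-vertex SDF values and deformations). The toy below reproduces only the shape of that training objective: flatten the grid parameters into a vector and train a small MLP to predict the noise added by the forward diffusion process. The data, grid size, and model here are random placeholders, not the paper's setup.

```python
# Hedged sketch: DDPM-style noise-prediction training on a flattened grid parametrization.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_shapes, grid_params = 256, 4 * 64           # e.g. 64 grid vertices x (sdf + 3D offset)
data = torch.randn(n_shapes, grid_params)     # placeholder "dataset" of parametrizations

model = nn.Sequential(nn.Linear(grid_params + 1, 256), nn.SiLU(),
                      nn.Linear(256, grid_params))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

for step in range(200):                       # a few illustrative training steps
    x0 = data[torch.randint(0, n_shapes, (32,))]
    t = torch.randint(0, T, (32,))
    noise = torch.randn_like(x0)
    a = alpha_bar[t].unsqueeze(1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise           # forward diffusion q(x_t | x_0)
    pred = model(torch.cat([xt, t.float().unsqueeze(1) / T], dim=1))
    loss = nn.functional.mse_loss(pred, noise)            # standard noise-prediction loss
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final training loss: {loss.item():.3f}")
```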
arXiv Detail & Related papers (2023-03-14T17:59:01Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
We can also jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
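To make the representation above concrete, the toy below builds a truncated signed distance function of a sphere on a grid and decomposes it into one coarse coefficient volume plus per-level detail coefficient volumes with a biorthogonal wavelet via PyWavelets. The specific wavelet ('bior2.2'), grid size, and truncation band are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: coarse/detail wavelet coefficient volumes of a toy TSDF.
import numpy as np
import pywt

n = 64
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5          # signed distance to a sphere
tsdf = np.clip(sdf, -0.1, 0.1)                   # truncate to a narrow band

# Multi-level 3D wavelet decomposition: one coarse volume + per-level detail volumes.
coeffs = pywt.wavedecn(tsdf, wavelet="bior2.2", level=2)
coarse, details = coeffs[0], coeffs[1:]
print("coarse volume:", coarse.shape)
for lvl, d in enumerate(details, start=1):
    print(f"level {lvl} detail subbands:", sorted(d.keys()), d["ddd"].shape)
```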
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
- Automated LoD-2 Model Reconstruction from Very-High Resolution Satellite-derived Digital Surface Model and Orthophoto [1.2691047660244335]
We propose a model-driven method that reconstructs LoD-2 building models following a "decomposition-optimization-fitting" paradigm.
Our proposed method addresses several technical caveats of existing methods, yielding practically high-quality results.
arXiv Detail & Related papers (2021-09-08T19:03:09Z)
- GLASS: Geometric Latent Augmentation for Shape Spaces [28.533018136138825]
We use geometrically motivated energies to augment and thus boost a sparse collection of example (training) models.
We analyze the Hessian of the as-rigid-as-possible (ARAP) energy to sample from and project to the underlying (local) shape space.
We present multiple examples of interesting and meaningful shape variations even when starting from as few as 3-10 training shapes.
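The toy below mimics the mode-analysis idea on a much simpler energy: it replaces ARAP with a quadratic spring energy on a closed polygon, whose Hessian is available in closed form, and perturbs the shape along its lowest-energy non-rigid eigenmodes. This is a hedged sketch of the sampling principle, not the GLASS algorithm.

```python
# Hedged sketch: sample low-energy deformations from the Hessian of a stand-in energy.
import numpy as np

rng = np.random.default_rng(0)
n = 32
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
shape = np.stack([np.cos(theta), 0.6 * np.sin(theta)], axis=1)   # an ellipse polygon

# Hessian of E(x) = 0.5 * sum_i ||x_i - x_{i+1}||^2 is (L kron I_2), L = cycle Laplacian.
L = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)
H = np.kron(L, np.eye(2))

eigvals, eigvecs = np.linalg.eigh(H)
# Skip the near-zero (translation) modes, keep the next few lowest-energy modes.
nonrigid = eigvecs[:, eigvals > 1e-8][:, :4]

augmented = []
for _ in range(5):
    step = nonrigid @ rng.normal(scale=0.3, size=4)              # random low-energy mode mix
    augmented.append(shape + step.reshape(n, 2))
print(len(augmented), augmented[0].shape)
```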
arXiv Detail & Related papers (2021-08-06T17:56:23Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We adapt a primal-dual framework from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
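For illustration, the sketch below builds one plausible pairing of graphs from a face list: a face-adjacency graph (faces sharing an edge) and an edge-adjacency graph (edges bordering a common face). This generic construction is an assumption for clarity, not necessarily the exact primal/dual definition used in the paper.

```python
# Hedged sketch: generic face-adjacency and edge-adjacency graphs from a triangle mesh.
from collections import defaultdict

# Faces of a tetrahedron (vertex indices).
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

edge_to_faces = defaultdict(list)
for f_idx, (a, b, c) in enumerate(faces):
    for u, v in ((a, b), (b, c), (a, c)):
        edge_to_faces[tuple(sorted((u, v)))].append(f_idx)

# Face-adjacency graph: two faces are neighbors if they share an edge.
face_adjacency = defaultdict(set)
for incident in edge_to_faces.values():
    for i in incident:
        for j in incident:
            if i != j:
                face_adjacency[i].add(j)

# Edge-adjacency graph: two edges are neighbors if they border a common face.
edge_adjacency = defaultdict(set)
for f_idx, (a, b, c) in enumerate(faces):
    es = [tuple(sorted(p)) for p in ((a, b), (b, c), (a, c))]
    for e in es:
        edge_adjacency[e].update(set(es) - {e})

print("face neighbors:", dict(face_adjacency))
print("edge (0,1) neighbors:", edge_adjacency[(0, 1)])
```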
arXiv Detail & Related papers (2020-10-23T14:49:02Z)