MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers
- URL: http://arxiv.org/abs/2311.15475v1
- Date: Mon, 27 Nov 2023 01:20:11 GMT
- Title: MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers
- Authors: Yawar Siddiqui, Antonio Alliegro, Alexey Artemov, Tatiana Tommasi,
Daniele Sirigatti, Vladislav Rosov, Angela Dai, Matthias Nießner
- Abstract summary: MeshGPT is a new approach for generating triangle meshes that reflects the compactness typical of artist-created meshes.
Inspired by recent advances in powerful large language models, we adopt a sequence-based approach to autoregressively generate triangle meshes as sequences of triangles.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce MeshGPT, a new approach for generating triangle meshes that
reflects the compactness typical of artist-created meshes, in contrast to dense
triangle meshes extracted by iso-surfacing methods from neural fields. Inspired
by recent advances in powerful large language models, we adopt a sequence-based
approach to autoregressively generate triangle meshes as sequences of
triangles. We first learn a vocabulary of latent quantized embeddings, using
graph convolutions, which inform these embeddings of the local mesh geometry
and topology. These embeddings are sequenced and decoded into triangles by a
decoder, ensuring that they can effectively reconstruct the mesh. A transformer
is then trained on this learned vocabulary to predict the index of the next
embedding given previous embeddings. Once trained, our model can be
autoregressively sampled to generate new triangle meshes, directly generating
compact meshes with sharp edges, more closely imitating the efficient
triangulation patterns of human-crafted meshes. MeshGPT demonstrates a notable
improvement over state-of-the-art mesh generation methods, with a 9% increase
in shape coverage and a 30-point enhancement in FID scores across various
categories.
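The two-stage pipeline described above can be illustrated with a toy sketch: a codebook standing in for the learned vocabulary of quantized embeddings, and a stand-in next-index scorer standing in for the decoder-only transformer. All names and the scorer itself are illustrative assumptions, not the paper's actual model, which learns the codebook with graph convolutions and predicts indices with a trained transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (stand-in): a "vocabulary" of latent quantized embeddings.
# In the paper this codebook is learned with graph convolutions so that
# entries reflect local mesh geometry and topology; here it is random.
VOCAB_SIZE, EMBED_DIM = 16, 8
codebook = rng.normal(size=(VOCAB_SIZE, EMBED_DIM))

def quantize(feature):
    """Map a continuous face feature to its nearest codebook index."""
    dists = np.linalg.norm(codebook - feature, axis=1)
    return int(np.argmin(dists))

# Stage 2 (stand-in): score the next index given the sequence so far.
# A real model would be a decoder-only transformer; this toy scorer
# simply favors the index after the most recent one, cyclically.
def next_index_logits(sequence):
    logits = np.zeros(VOCAB_SIZE)
    if sequence:
        logits[(sequence[-1] + 1) % VOCAB_SIZE] = 5.0
    return logits

def sample_sequence(length, start_index=0):
    """Autoregressively sample a sequence of embedding indices."""
    seq = [start_index]
    for _ in range(length - 1):
        logits = next_index_logits(seq)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        seq.append(int(rng.choice(VOCAB_SIZE, p=probs)))
    return seq

indices = sample_sequence(6)
# Decoding: look up each index in the codebook. In the actual method,
# a learned decoder maps these embeddings back to triangle vertices.
decoded = codebook[indices]
```

The key point the sketch captures is that generation happens entirely in index space, token by token, and the mesh is only reconstructed at the end by decoding the sampled embedding sequence.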
Related papers
- Generating 3D House Wireframes with Semantics [11.408526398063712]
We present a new approach for generating 3D house wireframes with semantic enrichment using an autoregressive model.
By re-ordering wire sequences based on semantic meanings, we employ a seamless semantic sequence for learning on 3D wireframe structures.
arXiv Detail & Related papers (2024-07-17T02:33:34Z)
- PivotMesh: Generic 3D Mesh Generation via Pivot Vertices Guidance [66.40153183581894]
We introduce a generic and scalable mesh generation framework PivotMesh.
PivotMesh makes an initial attempt to extend the native mesh generation to large-scale datasets.
We show that PivotMesh can generate compact and sharp 3D meshes across various categories.
arXiv Detail & Related papers (2024-05-27T07:13:13Z)
- CircNet: Meshing 3D Point Clouds with Circumcenter Detection [67.23307214942696]
Reconstructing 3D point clouds into triangle meshes is a key problem in computational geometry and surface reconstruction.
We introduce a deep neural network that detects the circumcenters to achieve point cloud triangulation.
We validate our method on prominent datasets of both watertight and open surfaces.
arXiv Detail & Related papers (2023-01-23T03:32:57Z)
- NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.