PivotMesh: Generic 3D Mesh Generation via Pivot Vertices Guidance
- URL: http://arxiv.org/abs/2405.16890v1
- Date: Mon, 27 May 2024 07:13:13 GMT
- Title: PivotMesh: Generic 3D Mesh Generation via Pivot Vertices Guidance
- Authors: Haohan Weng, Yikai Wang, Tong Zhang, C. L. Philip Chen, Jun Zhu
- Abstract summary: We introduce PivotMesh, a generic and scalable mesh generation framework.
PivotMesh makes an initial attempt to extend native mesh generation to large-scale datasets.
We show that PivotMesh can generate compact and sharp 3D meshes across various categories.
- Score: 66.40153183581894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating compact and sharply detailed 3D meshes poses a significant challenge for current 3D generative models. Unlike methods that extract dense meshes from neural representations, some recent works try to model the native mesh distribution (i.e., a set of triangles), which yields more compact results akin to human-crafted meshes. However, due to the complexity and variety of mesh topology, these methods are typically limited to small datasets with specific categories and are hard to extend. In this paper, we introduce PivotMesh, a generic and scalable mesh generation framework that makes an initial attempt to extend native mesh generation to large-scale datasets. We employ a transformer-based auto-encoder to encode meshes into discrete tokens and decode them hierarchically from the face level to the vertex level. Subsequently, to model the complex topology, we first learn to generate pivot vertices as a coarse mesh representation and then generate the complete mesh tokens with the same auto-regressive transformer. This reduces the difficulty compared with directly modeling the mesh distribution and further improves model controllability. PivotMesh demonstrates its versatility by effectively learning from both small datasets such as ShapeNet and large-scale datasets such as Objaverse and Objaverse-XL. Extensive experiments indicate that PivotMesh can generate compact and sharp 3D meshes across various categories, highlighting its great potential for native mesh modeling.
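To make the two-stage pipeline concrete, here is a minimal sketch, assuming a single decoder-only transformer over discrete mesh tokens; all module names, vocabulary sizes, and token counts are illustrative assumptions, not the authors' code (the paper obtains its tokens from a transformer-based auto-encoder, which is omitted here).

```python
# Minimal sketch (not the authors' code) of pivot-guided generation: one
# decoder-only transformer first samples pivot-vertex tokens as a coarse
# representation, then continues the same sequence with full mesh tokens.
import torch
import torch.nn as nn

class PivotGuidedMeshLM(nn.Module):
    def __init__(self, vocab_size=1024, d_model=512, n_layers=6,
                 n_heads=8, max_len=4096):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq) ids from an assumed mesh tokenizer
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(
            tokens.size(1)).to(tokens.device)
        return self.head(self.backbone(x, mask=mask))

    @torch.no_grad()
    def generate(self, prefix, n_new):
        # Autoregressive sampling; the same model serves both stages.
        tokens = prefix
        for _ in range(n_new):
            logits = self(tokens)[:, -1]
            nxt = torch.multinomial(logits.softmax(-1), 1)
            tokens = torch.cat([tokens, nxt], dim=1)
        return tokens

model = PivotGuidedMeshLM()
bos = torch.zeros(1, 1, dtype=torch.long)        # assumed start token
pivots = model.generate(bos, n_new=32)           # stage 1: pivot vertices
mesh = model.generate(pivots, n_new=256)         # stage 2: full mesh tokens
```

Conditioning the second stage on the sampled pivot prefix is what supplies the coarse-to-fine guidance and the controllability described in the abstract.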
Related papers
- GenUDC: High Quality 3D Mesh Generation with Unsigned Dual Contouring Representation [13.923644541595893]
Generating high-quality meshes with complex structures and realistic surfaces remains challenging for 3D generative models.
We propose the GenUDC framework to address these challenges by leveraging the Unsigned Dual Contouring (UDC) as the mesh representation.
In addition, GenUDC adopts a two-stage, coarse-to-fine generative process for 3D mesh generation.
arXiv Detail & Related papers (2024-10-23T11:59:49Z) - SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z) - MeshXL: Neural Coordinate Field for Generative 3D Foundation Models [51.1972329762843]
We present a family of generative pre-trained auto-regressive models that address 3D mesh generation with modern large language model approaches.
MeshXL is able to generate high-quality 3D meshes, and can also serve as foundation models for various down-stream applications.
arXiv Detail & Related papers (2024-05-31T14:35:35Z) - GetMesh: A Controllable Model for High-quality Mesh Generation and Manipulation [25.42531640985281]
Mesh is a fundamental representation of 3D assets in various industrial applications, and is widely supported by professional software.
We propose a highly controllable generative model, GetMesh, for mesh generation and manipulation across different categories.
arXiv Detail & Related papers (2024-03-18T17:25:36Z) - Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to the 3D domain and seek stronger 3D shape generation by improving the capacity and scalability of auto-regressive models simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z) - MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers [32.169007676811404]
MeshGPT is a new approach for generating triangle meshes that reflects the compactness typical of artist-created meshes.
Inspired by recent advances in powerful large language models, we adopt a sequence-based approach to autoregressively generate triangle meshes as sequences of triangles.
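To make this sequence framing concrete, the sketch below flattens a triangle mesh into a discrete token stream by uniformly quantizing vertex coordinates; note that MeshGPT itself learns its token vocabulary with a mesh autoencoder, so raw coordinate quantization is only an illustrative stand-in.

```python
# Illustrative serialization of a mesh into tokens (not MeshGPT's learned
# vocabulary): one triangle becomes 9 tokens, i.e. three vertices times
# three uniformly quantized coordinates.
import numpy as np

def mesh_to_tokens(vertices, faces, n_bins=128):
    """vertices: (V, 3) floats in [0, 1]; faces: (F, 3) vertex indices."""
    quant = np.clip((vertices * n_bins).astype(np.int64), 0, n_bins - 1)
    return quant[faces].reshape(-1)              # (9 * F,) token ids

def tokens_to_triangles(tokens, n_bins=128):
    """Inverse mapping back to a de-quantized (F, 3, 3) triangle soup."""
    return (tokens.reshape(-1, 3, 3).astype(np.float64) + 0.5) / n_bins

verts = np.random.rand(8, 3)
faces = np.array([[0, 1, 2], [2, 3, 4]])
tokens = mesh_to_tokens(verts, faces)
print(tokens.shape)                              # (18,) -> 9 per triangle
```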
arXiv Detail & Related papers (2023-11-27T01:20:11Z) - MeshDiffusion: Score-based Generative 3D Mesh Modeling [68.40770889259143]
We consider the task of generating realistic 3D shapes for automatic scene generation and physical simulation.
We take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes.
Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization.
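As a rough illustration of diffusion on such a direct parametrization, the sketch below applies a standard DDPM forward process to a tensor of per-vertex SDF values and offsets on a fixed tetrahedral grid; the grid construction and the denoising network are omitted, and all shapes and schedules are assumptions.

```python
# Toy forward-diffusion step on an assumed direct parametrization: each of
# V grid vertices carries one SDF value plus a 3D offset, giving a (V, 4)
# tensor that a standard DDPM schedule can noise.
import torch

V = 1000                                    # grid vertices (assumed)
x0 = torch.randn(V, 4)                      # [:, 0] = SDF, [:, 1:] = offsets
betas = torch.linspace(1e-4, 0.02, 1000)    # common DDPM noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t):
    # x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise
    noise = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * noise

x_t = q_sample(x0, t=500)                   # noised parametrization at step t
```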
arXiv Detail & Related papers (2023-03-14T17:59:01Z) - Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
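The marching-tetrahedra step at the core of DMTet can be sketched as follows (toy inputs; the learned deformation and voxel conditioning are omitted): wherever a tetrahedron edge crosses the zero level set of the signed distance field, a surface vertex is placed by linear interpolation.

```python
# Zero-crossing extraction on tetrahedra: a surface vertex is placed on each
# edge whose endpoint SDF values change sign.
import numpy as np

def edge_crossings(verts, tets, sdf):
    """verts: (V, 3); tets: (T, 4) vertex ids; sdf: (V,) signed distances."""
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    points = []
    for tet in tets:
        for a, b in edges:
            i, j = tet[a], tet[b]
            if sdf[i] * sdf[j] < 0:              # sign change -> crossing
                t = sdf[i] / (sdf[i] - sdf[j])   # solve for sdf == 0
                points.append(verts[i] + t * (verts[j] - verts[i]))
    return np.array(points)

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tets = np.array([[0, 1, 2, 3]])
sdf = np.array([-0.2, 0.3, 0.5, 0.4])            # vertex 0 lies inside
print(edge_crossings(verts, tets, sdf))          # three crossing points
```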
arXiv Detail & Related papers (2021-11-08T05:29:35Z) - Fully Convolutional Mesh Autoencoder using Efficient Spatially Varying Kernels [41.81187438494441]
We propose a non-template-specific fully convolutional mesh autoencoder for arbitrary registered mesh data.
Our model outperforms state-of-the-art methods on reconstruction accuracy.
arXiv Detail & Related papers (2020-06-08T02:30:13Z)