MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization
- URL: http://arxiv.org/abs/2408.02555v2
- Date: Wed, 20 Nov 2024 09:20:09 GMT
- Title: MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization
- Authors: Yiwen Chen, Yikai Wang, Yihao Luo, Zhengyi Wang, Zilong Chen, Jun Zhu, Chi Zhang, Guosheng Lin
- Abstract summary: MeshAnything V2 is an advanced mesh generation model designed to create Artist-Created Meshes.
A key innovation behind MeshAnything V2 is our novel Adjacent Mesh Tokenization (AMT) method.
- Score: 65.15226276553891
- License:
- Abstract: Meshes are the de facto 3D representation in the industry but are labor-intensive to produce. Recently, a line of research has focused on autoregressively generating meshes. This approach processes meshes into a sequence composed of vertices and then generates them vertex by vertex, similar to how a language model generates text. These methods have achieved some success but still struggle to generate complex meshes. One primary reason for this limitation is their inefficient tokenization methods. To address this issue, we introduce MeshAnything V2, an advanced mesh generation model designed to create Artist-Created Meshes that align precisely with specified shapes. A key innovation behind MeshAnything V2 is our novel Adjacent Mesh Tokenization (AMT) method. Unlike traditional approaches that represent each face using three vertices, AMT optimizes this by employing a single vertex wherever feasible, effectively reducing the token sequence length by about half on average. This not only streamlines the tokenization process but also results in more compact and well-structured sequences, enhancing the efficiency of mesh generation. With these improvements, MeshAnything V2 effectively doubles the face limit compared to previous models, delivering superior performance without increasing computational costs. We will make our code and models publicly available. Project Page: https://buaacyw.github.io/meshanything-v2/
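To make the tokenization idea concrete, below is a minimal Python sketch of the adjacency trick the abstract describes: when the next face shares an edge with the previous one, only its single new vertex needs to be emitted. This is an illustrative assumption about the mechanism rather than the authors' implementation; the function names, the NEXT_FACE marker, and the toy face ordering are hypothetical, and the actual AMT algorithm (including its face traversal and special tokens) is defined in the paper and the released code.

```python
# Illustrative sketch of the idea behind Adjacent Mesh Tokenization (AMT).
# NOT the authors' implementation: names, the NEXT_FACE marker, and the toy
# mesh are assumptions made purely to show why the sequence length shrinks.

NEXT_FACE = "&"  # hypothetical marker used when adjacency is broken


def tokenize_naive(faces):
    """Baseline tokenization: every face contributes all three vertex indices."""
    tokens = []
    for face in faces:
        tokens.extend(face)  # 3 vertex tokens per face
    return tokens


def tokenize_amt_like(faces):
    """AMT-style tokenization: if a face shares an edge (two vertices) with the
    previous face, emit only its single new vertex; otherwise restart the strip."""
    tokens = []
    prev = None
    for face in faces:
        if prev is not None and len(set(face) & set(prev)) == 2:
            # Adjacent face: two vertices are already known, one new token suffices.
            tokens.append((set(face) - set(prev)).pop())
        else:
            if prev is not None:
                tokens.append(NEXT_FACE)  # adjacency broken, mark a restart
            tokens.extend(face)  # emit all three vertices of the new strip
        prev = face
    return tokens


if __name__ == "__main__":
    # Toy triangle strip: each face shares an edge with the one before it.
    faces = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]
    print(len(tokenize_naive(faces)))     # 12 tokens (3 per face)
    print(len(tokenize_amt_like(faces)))  # 6 tokens (3 + 1 + 1 + 1)
```

On this toy strip the token count drops from 12 to 6, consistent with the roughly 50% average reduction the abstract reports; in practice the saving depends on how often consecutive faces in the chosen ordering actually share an edge.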
Related papers
- DMesh++: An Efficient Differentiable Mesh for Complex Shapes [51.75054400014161]
We introduce a new differentiable mesh processing method in 2D and 3D.
We present an algorithm that adapts the mesh resolution to local geometry in 2D for efficient representation.
We demonstrate the effectiveness of our approach on 2D point cloud and 3D multi-view reconstruction tasks.
arXiv Detail & Related papers (2024-12-21T21:16:03Z)
- GenUDC: High Quality 3D Mesh Generation with Unsigned Dual Contouring Representation [13.923644541595893]
Generating high-quality meshes with complex structures and realistic surfaces remains challenging for 3D generative models.
We propose the GenUDC framework to address these challenges by leveraging the Unsigned Dual Contouring (UDC) as the mesh representation.
In addition, GenUDC adopts a two-stage, coarse-to-fine generative process for 3D mesh generation.
arXiv Detail & Related papers (2024-10-23T11:59:49Z)
- MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers [76.70891862458384]
We introduce MeshAnything, a model that treats mesh extraction as a generation problem.
By converting 3D assets in any 3D representation into Artist-Created Meshes (AMs), MeshAnything can be integrated with various 3D asset production methods.
Our method generates AMs with hundreds of times fewer faces, significantly improving storage, rendering, and simulation efficiencies.
arXiv Detail & Related papers (2024-06-14T16:30:25Z)
- MeshXL: Neural Coordinate Field for Generative 3D Foundation Models [51.1972329762843]
We present a family of generative pre-trained auto-regressive models, which addresses the process of 3D mesh generation with modern large language model approaches.
MeshXL is able to generate high-quality 3D meshes, and can also serve as foundation models for various downstream applications.
arXiv Detail & Related papers (2024-05-31T14:35:35Z)
- PivotMesh: Generic 3D Mesh Generation via Pivot Vertices Guidance [66.40153183581894]
We introduce a generic and scalable mesh generation framework, PivotMesh.
PivotMesh makes an initial attempt to extend the native mesh generation to large-scale datasets.
We show that PivotMesh can generate compact and sharp 3D meshes across various categories.
arXiv Detail & Related papers (2024-05-27T07:13:13Z)
- GetMesh: A Controllable Model for High-quality Mesh Generation and Manipulation [25.42531640985281]
Mesh is a fundamental representation of 3D assets in various industrial applications, and is widely supported by professional software.
We propose a highly controllable generative model, GetMesh, for mesh generation and manipulation across different categories.
arXiv Detail & Related papers (2024-03-18T17:25:36Z)
- MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers [32.169007676811404]
MeshGPT is a new approach for generating triangle meshes that reflects the compactness typical of artist-created meshes.
Inspired by recent advances in powerful large language models, we adopt a sequence-based approach to autoregressively generate triangle meshes as sequences of triangles.
arXiv Detail & Related papers (2023-11-27T01:20:11Z)
- MeshDiffusion: Score-based Generative 3D Mesh Modeling [68.40770889259143]
We consider the task of generating realistic 3D shapes for automatic scene generation and physical simulation.
We take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes.
Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization.
arXiv Detail & Related papers (2023-03-14T17:59:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.