MeshPad: Interactive Sketch-Conditioned Artist-Designed Mesh Generation and Editing
- URL: http://arxiv.org/abs/2503.01425v2
- Date: Mon, 17 Mar 2025 04:08:48 GMT
- Title: MeshPad: Interactive Sketch-Conditioned Artist-Designed Mesh Generation and Editing
- Authors: Haoxuan Li, Ziya Erkoc, Lei Li, Daniele Sirigatti, Vladyslav Rozov, Angela Dai, Matthias Nießner
- Abstract summary: MeshPad is a generative approach that creates 3D meshes from sketch inputs. We focus on enabling consistent edits by decomposing editing into 'deletion' of regions of a mesh, followed by 'addition' of new mesh geometry. Our approach is based on a triangle sequence-based mesh representation, exploiting a large Transformer model for mesh triangle addition and deletion.
- Score: 64.84885028248395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce MeshPad, a generative approach that creates 3D meshes from sketch inputs. Building on recent advances in artist-designed triangle mesh generation, our approach addresses the need for interactive mesh creation. To this end, we focus on enabling consistent edits by decomposing editing into 'deletion' of regions of a mesh, followed by 'addition' of new mesh geometry. Both operations are invoked by simple user edits of a sketch image, facilitating an iterative content creation process and enabling the construction of complex 3D meshes. Our approach is based on a triangle sequence-based mesh representation, exploiting a large Transformer model for mesh triangle addition and deletion. In order to perform edits interactively, we introduce a vertex-aligned speculative prediction strategy on top of our additive mesh generator. This speculator predicts multiple output tokens corresponding to a vertex, thus significantly reducing the computational cost of inference and accelerating the editing process, making it possible to execute each editing step in only a few seconds. Comprehensive experiments demonstrate that MeshPad outperforms state-of-the-art sketch-conditioned mesh generation methods, achieving more than 22% mesh quality improvement in Chamfer distance, and being preferred by 90% of participants in perceptual evaluations.
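The abstract describes two key mechanisms: flattening a mesh into a triangle token sequence for a Transformer, and a vertex-aligned speculator that emits all tokens of a vertex in one step to speed up decoding. The sketch below illustrates both ideas in minimal form; the function names, the coordinate quantization scheme, and the 3-tokens-per-vertex layout are illustrative assumptions, not MeshPad's actual implementation.

```python
def quantize(coord, bins=128):
    """Map a coordinate in [-1, 1] to a discrete token id (assumed scheme)."""
    t = int((coord + 1.0) / 2.0 * (bins - 1))
    return max(0, min(bins - 1, t))

def mesh_to_tokens(triangles, bins=128):
    """Flatten a list of triangles (each 3 vertices of 3 floats) into a
    single token sequence: 9 coordinate tokens per triangle."""
    tokens = []
    for tri in triangles:
        for vx, vy, vz in tri:
            tokens.extend([quantize(vx, bins),
                           quantize(vy, bins),
                           quantize(vz, bins)])
    return tokens

def generate_with_speculator(predict_vertex, n_vertices):
    """Plain autoregressive decoding emits one token per forward pass; a
    vertex-aligned speculator instead proposes all 3 coordinate tokens of
    the next vertex in one step, cutting the number of decoding steps by
    roughly the tokens-per-vertex factor. `predict_vertex` stands in for
    the speculator head conditioned on the tokens generated so far."""
    tokens = []
    for _ in range(n_vertices):
        tokens.extend(predict_vertex(tokens))  # 3 tokens per step
    return tokens
```

For example, a single triangle yields a 9-token sequence, and decoding its 3 vertices takes 3 speculator steps rather than 9 token-level steps.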
Related papers
- MeshCraft: Exploring Efficient and Controllable Mesh Generation with Flow-based DiTs [79.45006864728893]
MeshCraft is a framework for efficient and controllable mesh generation.
It uses continuous spatial diffusion to generate discrete triangle faces.
It can generate an 800-face mesh in just 3.2 seconds.
arXiv Detail & Related papers (2025-03-29T09:21:50Z)
- LEMON: Localized Editing with Mesh Optimization and Neural Shaders [0.5499187928849248]
We propose LEMON, a mesh editing pipeline that combines neural deferred shading with localized mesh optimization.
We evaluate our pipeline using the DTU dataset, demonstrating that it generates finely-edited meshes more rapidly than the current state-of-the-art methods.
arXiv Detail & Related papers (2024-09-18T14:34:06Z)
- MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization [65.15226276553891]
MeshAnything V2 is an advanced mesh generation model designed to create Artist-Created Meshes. A key innovation behind MeshAnything V2 is our novel Adjacent Mesh Tokenization (AMT) method.
arXiv Detail & Related papers (2024-08-05T15:33:45Z)
- Text-guided Controllable Mesh Refinement for Interactive 3D Modeling [48.226234898333]
We propose a novel technique for adding geometric details to an input coarse 3D mesh guided by a text prompt.
First, we generate a single-view RGB image conditioned on the input coarse geometry and the input text prompt.
Second, we use our novel multi-view normal generation architecture to jointly generate six different views of the normal images.
Third, we optimize our mesh with respect to all views and generate a fine, detailed geometry as output.
arXiv Detail & Related papers (2024-06-03T17:59:43Z)
- SplatMesh: Interactive 3D Segmentation and Editing Using Mesh-Based Gaussian Splatting [86.50200613220674]
A key challenge in 3D-based interactive editing is the absence of an efficient representation that balances diverse modifications with high-quality view synthesis under a given memory constraint.
We introduce SplatMesh, a novel fine-grained interactive 3D segmentation and editing algorithm that integrates 3D Gaussian Splatting with a precomputed mesh.
By segmenting and editing the simplified mesh, we can effectively edit the Gaussian splats as well, as extensive experiments on real and synthetic datasets demonstrate.
arXiv Detail & Related papers (2023-12-26T02:50:42Z)
- MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers [32.169007676811404]
MeshGPT is a new approach for generating triangle meshes that reflects the compactness typical of artist-created meshes.
Inspired by recent advances in powerful large language models, we adopt a sequence-based approach to autoregressively generate meshes as sequences of triangles.
arXiv Detail & Related papers (2023-11-27T01:20:11Z)
- 3D Neural Sculpting (3DNS): Editing Neural Signed Distance Functions [34.39282814876276]
In this work, we propose the first method for efficient interactive editing of signed distance functions expressed through neural networks.
Inspired by 3D sculpting software for meshes, we use a brush-based framework that is intuitive and can in the future be used by sculptors and digital artists.
arXiv Detail & Related papers (2022-09-28T10:05:16Z)
- NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing [39.71252429542249]
We present a novel mesh-based representation by encoding the neural implicit field with disentangled geometry and texture codes on mesh vertices.
We develop several techniques, including learnable sign indicators, to magnify the spatial distinguishability of the mesh-based representation.
Experiments and editing examples on both real and synthetic data demonstrate the superiority of our method on representation quality and editing ability.
arXiv Detail & Related papers (2022-07-25T05:30:50Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We propose applying a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.