Mesh RAG: Retrieval Augmentation for Autoregressive Mesh Generation
- URL: http://arxiv.org/abs/2511.16807v1
- Date: Thu, 20 Nov 2025 21:13:56 GMT
- Title: Mesh RAG: Retrieval Augmentation for Autoregressive Mesh Generation
- Authors: Xiatao Sun, Chen Liang, Qian Wang, Daniel Rakita
- Abstract summary: Mesh RAG is a training-free, plug-and-play framework for autoregressive mesh generation models. Inspired by RAG for language models, our approach augments the generation process by leveraging point cloud segmentation. We show it significantly enhances mesh quality, accelerates generation speed compared to sequential part prediction, and enables incremental editing.
- Score: 11.723535704837266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D meshes are a critical building block for applications ranging from industrial design and gaming to simulation and robotics. Traditionally, meshes are crafted manually by artists, a process that is time-intensive and difficult to scale. To automate and accelerate this asset creation, autoregressive models have emerged as a powerful paradigm for artistic mesh generation. However, current methods to enhance quality typically rely on larger models or longer sequences that result in longer generation time, and their inherent sequential nature imposes a severe quality-speed trade-off. This sequential dependency also significantly complicates incremental editing. To overcome these limitations, we propose Mesh RAG, a novel, training-free, plug-and-play framework for autoregressive mesh generation models. Inspired by RAG for language models, our approach augments the generation process by leveraging point cloud segmentation, spatial transformation, and point cloud registration to retrieve, generate, and integrate mesh components. This retrieval-based approach decouples generation from its strict sequential dependency, facilitating efficient and parallelizable inference. We demonstrate the wide applicability of Mesh RAG across various foundational autoregressive mesh generation models, showing it significantly enhances mesh quality, accelerates generation speed compared to sequential part prediction, and enables incremental editing, all without model retraining.
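The abstract describes a pipeline that retrieves mesh components and integrates them via spatial transformation and point cloud registration. As a minimal sketch of the registration-and-integration step only, the snippet below rigidly aligns a retrieved component's point cloud to a segmented target region using the Kabsch algorithm. All function and variable names are illustrative assumptions, not the paper's actual API, and the paper's segmentation and retrieval stages are not modeled here.

```python
# Hypothetical sketch of the registration step in a Mesh-RAG-style pipeline:
# a retrieved component is rigidly aligned to a segmented target region
# before being merged into the partial mesh. Names are illustrative only.
import numpy as np

def kabsch_align(source: np.ndarray, target: np.ndarray):
    """Return rotation R and translation t mapping source onto target.

    source, target: corresponding points, each of shape (N, 3).
    Minimizes RMSD via SVD of the cross-covariance matrix (Kabsch)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

def integrate_component(partial_vertices: np.ndarray,
                        component_vertices: np.ndarray,
                        target_region: np.ndarray) -> np.ndarray:
    """Align a retrieved component to the target region and append it
    to the partial mesh's vertex set."""
    R, t = kabsch_align(component_vertices, target_region)
    aligned = component_vertices @ R.T + t
    return np.vstack([partial_vertices, aligned])
```

Because each retrieved component is aligned independently of the token sequence, steps like this are what allow a retrieval-based approach to sidestep strict sequential dependency and run in parallel.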
Related papers
- HiFi-Mesh: High-Fidelity Efficient 3D Mesh Generation via Compact Autoregressive Dependence [36.403921772528236]
We introduce the Latent Autoregressive Network (LANE), which incorporates compact autoregressive dependencies in the generation process. LANE achieves a $6\times$ improvement in maximum sequence length compared to existing methods.
arXiv Detail & Related papers (2026-01-29T06:22:26Z)
- GriDiT: Factorized Grid-Based Diffusion for Efficient Long Image Sequence Generation [77.13582457917418]
We train a generative model solely on grid images comprising subsampled frames. We learn to generate image sequences, using the strong self-attention mechanism of the Diffusion Transformer (DiT) to capture correlations between frames. Our method consistently outperforms SoTA in quality and inference speed (at least twice as fast) across datasets.
arXiv Detail & Related papers (2025-12-24T16:46:04Z)
- FlashMesh: Faster and Better Autoregressive Mesh Synthesis via Structured Speculation [65.3277633028397]
FlashMesh is a fast and high-fidelity mesh generation framework. We show that FlashMesh achieves up to a $2\times$ speedup over standard autoregressive models.
arXiv Detail & Related papers (2025-11-19T17:03:49Z)
- ARMesh: Autoregressive Mesh Generation via Next-Level-of-Detail Prediction [45.699110709239996]
We propose generating 3D meshes auto-regressively in a progressive coarse-to-fine manner. Specifically, we take inspiration from mesh simplification algorithms, which gradually merge mesh faces to build simpler meshes. Our experiments show that this progressive mesh generation approach provides intuitive control over generation quality and time consumption.
arXiv Detail & Related papers (2025-09-25T07:12:02Z)
- FastMesh: Efficient Artistic Mesh Generation via Component Decoupling [27.21354509059262]
Mesh generation approaches typically tokenize triangle meshes into sequences of tokens and train autoregressive models to generate these tokens sequentially. This redundancy leads to excessively long token sequences and inefficient generation processes. We propose an efficient framework that generates artistic meshes by treating vertices and faces separately.
arXiv Detail & Related papers (2025-08-26T16:51:02Z)
- Fast Autoregressive Models for Continuous Latent Generation [49.079819389916764]
Autoregressive models have demonstrated remarkable success in sequential data generation, particularly in NLP. Recent work, the masked autoregressive model (MAR), bypasses quantization by modeling per-token distributions in continuous spaces using a diffusion head. We propose the Fast AutoRegressive model (FAR), a novel framework that replaces MAR's diffusion head with a lightweight shortcut head.
arXiv Detail & Related papers (2025-04-24T13:57:08Z)
- MeshCraft: Exploring Efficient and Controllable Mesh Generation with Flow-based DiTs [79.45006864728893]
MeshCraft is a framework for efficient and controllable mesh generation. It uses continuous spatial diffusion to generate discrete triangle faces. It can generate an 800-face mesh in just 3.2 seconds.
arXiv Detail & Related papers (2025-03-29T09:21:50Z)
- Parallelized Autoregressive Visual Generation [65.9579525736345]
We propose a simple yet effective approach for parallelized autoregressive visual generation. Our method achieves a 3.6x speedup with comparable quality and up to 9.5x speedup with minimal quality degradation across both image and video generation tasks.
arXiv Detail & Related papers (2024-12-19T17:59:54Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.