InstaInpaint: Instant 3D-Scene Inpainting with Masked Large Reconstruction Model
- URL: http://arxiv.org/abs/2506.10980v1
- Date: Thu, 12 Jun 2025 17:59:55 GMT
- Title: InstaInpaint: Instant 3D-Scene Inpainting with Masked Large Reconstruction Model
- Authors: Junqi You, Chieh Hubert Lin, Weijie Lyu, Zhengbo Zhang, Ming-Hsuan Yang
- Abstract summary: InstaInpaint is a framework that produces 3D-scene inpainting from a 2D inpainting proposal within 0.4 seconds. We analyze and identify several key designs that improve generalization, textural consistency, and geometric correctness. InstaInpaint achieves a 1000x speed-up over prior methods while maintaining state-of-the-art performance across two standard benchmarks.
- Score: 46.67494008720215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in 3D scene reconstruction enable real-time viewing in virtual and augmented reality. To support interactive operations for better immersiveness, such as moving or editing objects, 3D scene inpainting methods are proposed to repair or complete the altered geometry. However, current approaches rely on lengthy and computationally intensive optimization, making them impractical for real-time or online applications. We propose InstaInpaint, a reference-based feed-forward framework that produces 3D-scene inpainting from a 2D inpainting proposal within 0.4 seconds. We develop a self-supervised masked-finetuning strategy to enable training of our custom large reconstruction model (LRM) on a large-scale dataset. Through extensive experiments, we analyze and identify several key designs that improve generalization, textural consistency, and geometric correctness. InstaInpaint achieves a 1000x speed-up over prior methods while maintaining state-of-the-art performance across two standard benchmarks. Moreover, we show that InstaInpaint generalizes well to flexible downstream applications such as object insertion and multi-region inpainting. More video results are available at our project page: https://dhmbb2.github.io/InstaInpaint_page/.
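As a rough illustration of the self-supervised masked-finetuning idea described in the abstract, the sketch below hides random regions of the input views and supervises renderings of the predicted scene against the original, unmasked images. All names (`masked_finetune_step`, `random_region_mask`, the model and renderer callables) are hypothetical placeholders for illustration, not the authors' released code.

```python
# Minimal sketch of one self-supervised masked-finetuning step for a
# feed-forward reconstruction model. Hypothetical helper names; the actual
# InstaInpaint training code may differ substantially.
import torch
import torch.nn.functional as F

def random_region_mask(shape, mask_ratio=0.3):
    """Mask one random square region covering roughly `mask_ratio` of each image."""
    b, _, h, w = shape
    masks = torch.zeros(b, 1, h, w)
    side = max(1, min(int((mask_ratio * h * w) ** 0.5), h, w))
    for i in range(b):
        top = torch.randint(0, h - side + 1, (1,)).item()
        left = torch.randint(0, w - side + 1, (1,)).item()
        masks[i, :, top:top + side, left:left + side] = 1.0
    return masks

def masked_finetune_step(model, render_views, images, cameras, optimizer, mask_ratio=0.3):
    """model: maps (masked views, cameras) -> a 3D scene representation.
    render_views: renders that representation back at the input cameras."""
    masks = random_region_mask(images.shape, mask_ratio)
    masked_inputs = images * (1.0 - masks)          # hide regions of the input views

    scene = model(masked_inputs, cameras)           # feed-forward 3D prediction
    renders = render_views(scene, cameras)          # re-render at the input viewpoints

    # Supervise against the full, unmasked images so the model learns to
    # complete ("inpaint") the hidden regions in 3D.
    loss = F.l1_loss(renders, images)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```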
Related papers
- WonderTurbo: Generating Interactive 3D World in 0.72 Seconds [29.61066704266084]
We introduce WonderTurbo, the first real-time interactive 3D scene generation framework capable of generating novel perspectives of 3D scenes within 0.72 seconds. Specifically, WonderTurbo accelerates both geometric and appearance modeling in 3D scene generation.
arXiv Detail & Related papers (2025-04-03T04:10:47Z) - IMFine: 3D Inpainting via Geometry-guided Multi-view Refinement [15.206470606085341]
We introduce a novel approach that produces inpainted 3D scenes with consistent visual quality and coherent underlying geometry. Specifically, we propose a robust 3D inpainting pipeline that incorporates geometric priors and a multi-view refinement network trained via test-time adaptation. We develop a novel inpainting mask detection technique to derive targeted inpainting masks from object masks, boosting the performance in handling unconstrained scenes.
arXiv Detail & Related papers (2025-03-06T14:50:17Z) - Instant3dit: Multiview Inpainting for Fast Editing of 3D Objects [34.021032306348324]
We propose a generative technique to edit 3D shapes, represented as meshes, NeRFs, or Gaussian Splats, in approximately 3 seconds. Our approach takes 3D generative editing from hours to seconds and produces higher-quality results compared to previous works.
arXiv Detail & Related papers (2024-11-30T15:58:33Z) - MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D [63.9188712646076]
Texturing is a key step in 3D asset production that enhances visual appeal.
Despite recent advancements, existing methods often yield subpar results, primarily due to local discontinuities.
We propose a novel framework called MVPaint, which can generate high-resolution, seamless textures with multi-view consistency.
arXiv Detail & Related papers (2024-11-04T17:59:39Z) - MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing [90.30646271720919]
Novel View Synthesis (NVS) and 3D generation have recently achieved prominent improvements.
We propose MVInpainter, re-formulating the 3D editing as a multi-view 2D inpainting task.
MVInpainter partially inpaints multi-view images with the reference guidance rather than intractably generating an entirely novel view from scratch.
arXiv Detail & Related papers (2024-08-15T07:57:28Z) - NeRFiller: Completing Scenes via Generative 3D Inpainting [113.18181179986172]
We propose NeRFiller, an approach that completes missing portions of a 3D capture via generative 3D inpainting.
In contrast to related works, we focus on completing scenes rather than deleting foreground objects.
arXiv Detail & Related papers (2023-12-07T18:59:41Z) - EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction [53.28220984270622]
3D reconstruction methods should generate high-fidelity results with 3D consistency in real-time. Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets. Our method can be trained in just 1-2 hours using a single GPU and run on mobile devices at over 40 FPS (Frames Per Second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z) - RIC: Rotate-Inpaint-Complete for Generalizable Scene Reconstruction [43.63574200858472]
General scene reconstruction refers to the task of estimating the full 3D geometry and texture of a scene containing previously unseen objects.
In this paper, we present a method for scene reconstruction by structurally breaking the problem into two steps: rendering novel views via inpainting and 2D to 3D scene lifting.
arXiv Detail & Related papers (2023-07-21T22:39:41Z) - Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatic segmentation of inpainting perceptual artifacts.
We train advanced segmentation networks on a dataset to reliably localize inpainting artifacts within inpainted images.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), which is the ratio of objectionable inpainted regions to the entire inpainted area (a minimal computation sketch follows after this list).
arXiv Detail & Related papers (2022-08-05T18:50:51Z)
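For context on the Perceptual Artifact Ratio (PAR) mentioned in the last entry, the metric is simply the fraction of the inpainted area that an artifact-segmentation model flags as objectionable. Below is a minimal sketch of that computation; the function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np

def perceptual_artifact_ratio(artifact_mask: np.ndarray, inpainted_mask: np.ndarray) -> float:
    """PAR = (artifact pixels inside the inpainted region) / (inpainted region area).

    Both inputs are boolean masks of the same shape: `artifact_mask` comes from an
    artifact-segmentation model, `inpainted_mask` marks the hole that was filled.
    """
    artifacts_in_hole = np.logical_and(artifact_mask, inpainted_mask).sum()
    hole_area = inpainted_mask.sum()
    return float(artifacts_in_hole) / float(hole_area) if hole_area > 0 else 0.0

# Tiny worked example: a 4x4 inpainted hole with 2 flagged pixels -> PAR = 2/16 = 0.125.
hole = np.zeros((8, 8), dtype=bool)
hole[2:6, 2:6] = True
artifacts = np.zeros((8, 8), dtype=bool)
artifacts[2, 2:4] = True
print(perceptual_artifact_ratio(artifacts, hole))  # 0.125
```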