Mirage: One-Step Video Diffusion for Photorealistic and Coherent Asset Editing in Driving Scenes
- URL: http://arxiv.org/abs/2512.24227v1
- Date: Tue, 30 Dec 2025 13:40:23 GMT
- Title: Mirage: One-Step Video Diffusion for Photorealistic and Coherent Asset Editing in Driving Scenes
- Authors: Shuyun Wang, Haiyang Sun, Bing Wang, Hangjun Ye, Xin Yu
- Abstract summary: Vision-centric autonomous driving systems rely on diverse and scalable training data to achieve robust performance. Video object editing offers a promising path for data augmentation. Existing methods often struggle to maintain both high visual fidelity and temporal coherence. Mirage builds upon a text-to-video diffusion prior to ensure temporal consistency across frames.
- Score: 11.72942978943414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision-centric autonomous driving systems rely on diverse and scalable training data to achieve robust performance. While video object editing offers a promising path for data augmentation, existing methods often struggle to maintain both high visual fidelity and temporal coherence. In this work, we propose **Mirage**, a one-step video diffusion model for photorealistic and coherent asset editing in driving scenes. Mirage builds upon a text-to-video diffusion prior to ensure temporal consistency across frames. However, 3D causal variational autoencoders often suffer from degraded spatial fidelity due to compression, and directly passing 3D encoder features to decoder layers breaks temporal causality. To address this, we inject temporally agnostic latents from a pretrained 2D encoder into the 3D decoder to restore detail while preserving causal structures. Furthermore, because scene objects and inserted assets are optimized under different objectives, their Gaussians exhibit a distribution mismatch that leads to pose misalignment. To mitigate this, we introduce a two-stage data alignment strategy combining coarse 3D alignment and fine 2D refinement, thereby improving alignment and providing cleaner supervision. Extensive experiments demonstrate that Mirage achieves high realism and temporal consistency across diverse editing scenarios. Beyond asset editing, Mirage can also generalize to other video-to-video translation tasks, serving as a reliable baseline for future research. Our code is available at https://github.com/wm-research/mirage.
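
A minimal sketch may help make the abstract's detail-restoration idea concrete: per-frame latents from a pretrained 2D encoder are fused into the 3D causal decoder, so spatial detail returns without information flowing backward in time. Everything below (module names, shapes, the residual fusion) is an assumption for illustration, not the authors' released code.

```python
# Hedged sketch of frame-wise 2D-latent injection into a 3D causal
# VAE decoder. `DetailInjection` and its wiring are hypothetical.
import torch
import torch.nn as nn

class DetailInjection(nn.Module):
    """Project per-frame 2D features and add them to 3D decoder features.

    The 2D path is applied independently per frame (temporally agnostic),
    so the fusion cannot leak future frames into past ones and the
    decoder's causal structure is preserved.
    """
    def __init__(self, c2d: int, c3d: int):
        super().__init__()
        self.proj = nn.Conv2d(c2d, c3d, kernel_size=1)

    def forward(self, feat3d: torch.Tensor, feat2d: torch.Tensor) -> torch.Tensor:
        # feat3d: (B, C3d, T, H, W) features inside the 3D causal decoder
        # feat2d: (B*T, C2d, H, W) per-frame features from a frozen 2D encoder
        b, c, t, h, w = feat3d.shape
        inj = self.proj(feat2d).view(b, t, c, h, w).permute(0, 2, 1, 3, 4)
        return feat3d + inj  # residual injection restores spatial detail
```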
Related papers
- PatchAlign3D: Local Feature Alignment for Dense 3D Shape understanding [67.15800065888887]
Current foundation models for 3D shapes excel at global tasks (retrieval, classification) but transfer poorly to local part-level reasoning. We introduce an encoder-only 3D model that produces language-aligned patch-level features directly from point clouds. Our 3D encoder achieves zero-shot 3D part segmentation with fast single-pass inference, without any test-time multi-view rendering.
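
As a rough picture of how language-aligned patch features yield zero-shot part segmentation in a single pass, consider the sketch below. The inputs `patch_feats` and `text_feats`, and the cosine-similarity readout, are assumptions for illustration rather than PatchAlign3D's documented interface.

```python
# Hedged sketch: label each point-cloud patch with its nearest part
# name in a shared embedding space. Encoders are assumed, not shown.
import torch
import torch.nn.functional as F

def zero_shot_part_labels(patch_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
    # patch_feats: (P, D) one feature per point-cloud patch
    # text_feats:  (K, D) one embedding per part name (e.g. CLIP-style text encoder)
    sim = F.normalize(patch_feats, dim=-1) @ F.normalize(text_feats, dim=-1).T
    return sim.argmax(dim=-1)  # (P,) part index per patch, single forward pass
```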
arXiv Detail & Related papers (2026-01-05T18:55:45Z) - DragMesh: Interactive 3D Generation Made Easy [12.832539752284466]
DragMesh is a robust framework for real-time interactive 3D articulation. Our core contribution is a novel decoupled kinematic reasoning and motion generation framework.
arXiv Detail & Related papers (2025-12-06T13:10:44Z) - DisCo3D: Distilling Multi-View Consistency for 3D Scene Editing [12.383291424229448]
We propose **DisCo3D**, a novel framework that distills 3D consistency priors into a 2D editor. Our method first fine-tunes a 3D generator using multi-view inputs for scene adaptation, then trains a 2D editor through consistency distillation. Experimental results show DisCo3D achieves stable multi-view consistency and outperforms state-of-the-art methods in editing quality.
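
The distillation step can be pictured as a student-teacher loop: the scene-adapted 3D generator supplies multi-view-consistent targets, and the 2D editor is regressed onto them. The sketch below is an assumption-laden illustration (the function names, MSE objective, and loop shape are all assumed); DisCo3D's actual losses may differ.

```python
# Hedged sketch of consistency distillation for a 2D editor.
import torch
import torch.nn.functional as F

def distill_step(editor_2d, views, teacher_views, optimizer):
    # views:         (V, 3, H, W) rendered input views of the scene
    # teacher_views: (V, 3, H, W) multi-view-consistent edits produced by
    #                the scene-adapted 3D generator (the teacher)
    preds = editor_2d(views)                 # per-view edits from the student
    loss = F.mse_loss(preds, teacher_views)  # pull student toward consistent targets
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```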
arXiv Detail & Related papers (2025-08-03T09:27:41Z) - From Gallery to Wrist: Realistic 3D Bracelet Insertion in Videos [8.444819892052958]
2D diffusion models have shown promise for producing photorealistic edits. Traditional 3D rendering methods excel in spatial and temporal consistency but fall short in achieving photorealistic lighting. This is the first approach to synergize 3D rendering and 2D diffusion for video object insertion.
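
The division of labor this summary describes can be sketched as a two-stage per-frame pipeline: a 3D render pins down geometry and temporal stability, then a 2D diffusion pass restores photorealistic lighting. Both helpers below (`render_asset`, `diffusion_harmonize`) are hypothetical stand-ins, not the paper's API.

```python
# Hedged sketch of hybrid 3D-render + 2D-diffusion object insertion.
def insert_object(frames, poses, asset, render_asset, diffusion_harmonize):
    """Composite `asset` into each frame at its tracked pose, then
    harmonize lighting with a diffusion model. All callables assumed."""
    out = []
    for frame, pose in zip(frames, poses):
        raw = render_asset(asset, pose, background=frame)  # consistent geometry
        out.append(diffusion_harmonize(raw))               # photorealistic lighting
    return out
```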
arXiv Detail & Related papers (2025-07-27T15:49:07Z) - GaussVideoDreamer: 3D Scene Generation with Video Diffusion and Inconsistency-Aware Gaussian Splatting [17.17292309504131]
GaussVideoDreamer advances generative multimedia approaches by bridging the gap between image, video, and 3D generation. Our approach achieves 32% higher LLaVA-IQA scores and at least 2x speedup compared to existing methods.
arXiv Detail & Related papers (2025-04-14T09:04:01Z) - VideoLifter: Lifting Videos to 3D with Fast Hierarchical Stereo Alignment [54.66217340264935]
VideoLifter is a novel video-to-3D pipeline that leverages a local-to-global strategy on a fragment basis. It significantly accelerates the reconstruction process, reducing training time by over 82% while achieving better visual quality than current SOTA methods.
arXiv Detail & Related papers (2025-01-03T18:52:36Z) - Large Motion Video Autoencoding with Cross-modal Video VAE [52.13379965800485]
The Video Variational Autoencoder (VAE) is essential for reducing video redundancy and facilitating efficient video generation. Existing Video VAEs have begun to address temporal compression; however, they often suffer from inadequate reconstruction performance. We present a novel and powerful video autoencoder capable of high-fidelity video encoding.
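
A causal 3D convolution is the standard building block behind such temporally compressing video VAEs, so a textbook-style sketch (not this paper's code) may make the "temporal compression" claim concrete: the time axis is padded only on the past side, so frame t never sees frame t+1, and a temporal stride above 1 performs the compression.

```python
# Textbook-style causal 3D convolution; assumed machinery, not the
# paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv3d(nn.Module):
    """3D convolution padded only on the past side of the time axis."""
    def __init__(self, c_in: int, c_out: int, kt: int = 3, ks: int = 3, t_stride: int = 1):
        super().__init__()
        self.kt = kt
        self.conv = nn.Conv3d(c_in, c_out, (kt, ks, ks),
                              stride=(t_stride, 1, 1),
                              padding=(0, ks // 2, ks // 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W); pad (kt - 1) zero frames before t = 0 only
        x = F.pad(x, (0, 0, 0, 0, self.kt - 1, 0))
        return self.conv(x)

# Stacking such blocks with t_stride=2 halves the frame count per stage
# while keeping every output frame a function of past inputs only.
```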
arXiv Detail & Related papers (2024-12-23T18:58:24Z) - LiftImage3D: Lifting Any Single Image to 3D Gaussians with Video Generation Priors [107.83398512719981]
Single-image 3D reconstruction remains a fundamental challenge in computer vision. Recent advances in Latent Video Diffusion Models offer promising 3D priors learned from large-scale video data. We propose LiftImage3D, a framework that effectively releases LVDMs' generative priors while ensuring 3D consistency.
arXiv Detail & Related papers (2024-12-12T18:58:42Z) - Enhancing Temporal Consistency in Video Editing by Reconstructing Videos with 3D Gaussian Splatting [94.84688557937123]
Video-3DGS is a 3D Gaussian Splatting (3DGS)-based video refiner designed to enhance temporal consistency in zero-shot video editors. Our approach utilizes a two-stage 3D Gaussian optimizing process tailored for editing dynamic monocular videos. It enhances video editing by ensuring temporal consistency across 58 dynamic monocular videos.
arXiv Detail & Related papers (2024-06-04T17:57:37Z) - Human Mesh Recovery from Multiple Shots [85.18244937708356]
We propose a framework for improved 3D reconstruction and mining of long sequences with pseudo ground truth 3D human mesh.
We show that the resulting data is beneficial in the training of various human mesh recovery models.
The tools we develop open the door to processing and analyzing, in 3D, content from a large library of edited media.
arXiv Detail & Related papers (2020-12-17T18:58:02Z)