X2Video: Adapting Diffusion Models for Multimodal Controllable Neural Video Rendering
- URL: http://arxiv.org/abs/2510.08530v1
- Date: Thu, 09 Oct 2025 17:50:31 GMT
- Title: X2Video: Adapting Diffusion Models for Multimodal Controllable Neural Video Rendering
- Authors: Zhitong Huang, Mohan Zhang, Renhan Wang, Rui Tang, Hao Zhu, Jing Liao
- Abstract summary: X2Video is the first diffusion model for rendering videos guided by intrinsic channels including albedo, normal, roughness, metallicity, and irradiance. It supports intuitive multi-modal controls with reference images and text prompts for both global and local regions. X2Video can produce long, temporally consistent, and photorealistic videos guided by intrinsic conditions.
- Score: 25.939894201559426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present X2Video, the first diffusion model for rendering photorealistic videos guided by intrinsic channels including albedo, normal, roughness, metallicity, and irradiance, while supporting intuitive multi-modal controls with reference images and text prompts for both global and local regions. The intrinsic guidance allows accurate manipulation of color, material, geometry, and lighting, while reference images and text prompts provide intuitive adjustments in the absence of intrinsic information. To enable these functionalities, we extend the intrinsic-guided image generation model XRGB to video generation by employing a novel and efficient Hybrid Self-Attention, which ensures temporal consistency across video frames and also enhances fidelity to reference images. We further develop a Masked Cross-Attention to disentangle global and local text prompts, applying them effectively onto respective local and global regions. For generating long videos, our novel Recursive Sampling method incorporates progressive frame sampling, combining keyframe prediction and frame interpolation to maintain long-range temporal consistency while preventing error accumulation. To support the training of X2Video, we assembled a video dataset named InteriorVideo, featuring 1,154 rooms from 295 interior scenes, complete with reliable ground-truth intrinsic channel sequences and smooth camera trajectories. Both qualitative and quantitative evaluations demonstrate that X2Video can produce long, temporally consistent, and photorealistic videos guided by intrinsic conditions. Additionally, X2Video effectively accommodates multi-modal controls with reference images, global and local text prompts, and simultaneously supports editing on color, material, geometry, and lighting through parametric tuning. Project page: https://luckyhzt.github.io/x2video
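The Recursive Sampling idea described in the abstract (sparse keyframes first, then interpolation between already-generated anchors) can be illustrated with a short sketch. This is a minimal illustration, not the paper's code: `predict_keyframes` and `interpolate` are hypothetical stand-ins for the diffusion model's keyframe-prediction and frame-interpolation passes.

```python
import torch

def recursive_sample(n_frames, stride, predict_keyframes, interpolate):
    # Stage 1: sparse keyframes (including the final frame) anchor
    # long-range temporal consistency across the whole clip.
    key_idx = sorted(set(range(0, n_frames, stride)) | {n_frames - 1})
    frames = dict(zip(key_idx, predict_keyframes(key_idx)))
    # Stage 2: recursively fill the midpoint between each pair of
    # adjacent generated frames, so every frame stays close to an
    # anchor and errors cannot accumulate along one long chain.
    while len(frames) < n_frames:
        idx = sorted(frames)
        for a, b in zip(idx, idx[1:]):
            if b - a > 1:
                frames[(a + b) // 2] = interpolate(frames[a], frames[b])
    return [frames[i] for i in range(n_frames)]

# Toy demo with stand-in callables; real use would call the diffusion model.
predict = lambda idx: [torch.randn(3, 64, 64) for _ in idx]
interp = lambda a, b: 0.5 * (a + b)
assert len(recursive_sample(17, 8, predict, interp)) == 17
```

The recursive fill-in is what distinguishes this from plain autoregressive extension: no generated frame depends on a long chain of previous generations, which is how the abstract's claim about preventing error accumulation would be realized.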
Related papers
- V-RGBX: Video Editing with Accurate Controls over Intrinsic Properties [31.579053991884845]
We present V-RGBX, the first end-to-end framework for video editing with accurate control over intrinsic properties. V-RGBX unifies three key capabilities: video inverse rendering into intrinsic channels, video synthesis from these intrinsic representations, and video editing conditioned on intrinsic channels. We show that V-RGBX produces temporally consistent, photorealistic videos while propagating intrinsic appearance edits across sequences in a physically plausible manner.
arXiv Detail & Related papers (2025-12-12T18:59:54Z) - CtrlVDiff: Controllable Video Generation via Unified Multimodal Video Diffusion [62.04833878126661]
We tackle the dual challenges of video understanding and controllable video generation within a unified diffusion framework. We propose CtrlVDiff, a unified diffusion model trained with a Hybrid Modality Control Strategy (HMCS) that routes and fuses features from depth, normals, segmentation, edges, and graphics-based intrinsics (albedo, roughness, metallic). Across understanding and generation benchmarks, CtrlVDiff delivers superior controllability and fidelity, enabling layer-wise edits (relighting, material adjustment, object insertion) and surpassing state-of-the-art baselines while remaining robust when some modalities are unavailable.
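As a rough illustration of the routing-and-fusion idea, and of how a model can stay robust when modalities are missing, the sketch below projects each available control modality and averages the streams. Shapes, names, and the averaging choice are our assumptions, not CtrlVDiff's actual architecture.

```python
import torch
import torch.nn as nn

class ModalityFusion(nn.Module):
    # One projection per control modality; absent modalities are skipped.
    def __init__(self, modalities, in_ch, dim):
        super().__init__()
        self.proj = nn.ModuleDict(
            {m: nn.Conv2d(in_ch, dim, kernel_size=3, padding=1) for m in modalities}
        )

    def forward(self, controls):
        # controls: {modality_name: (B, in_ch, H, W) tensor or None}
        feats = [self.proj[m](x) for m, x in controls.items() if x is not None]
        # Averaging keeps the fused feature scale independent of how
        # many modalities happen to be present at inference time.
        return torch.stack(feats, dim=0).mean(dim=0)

fuse = ModalityFusion(["depth", "normal", "albedo"], in_ch=3, dim=64)
out = fuse({"depth": torch.randn(1, 3, 32, 32),
            "normal": None,
            "albedo": torch.randn(1, 3, 32, 32)})  # -> (1, 64, 32, 32)
```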
arXiv Detail & Related papers (2025-11-26T07:27:11Z) - VideoCanvas: Unified Video Completion from Arbitrary Spatiotemporal Patches via In-Context Conditioning [38.89828994130979]
We introduce the task of arbitrary spatiotemporal video completion, where a video is generated from arbitrary, user-specified patches placed at any location and timestamp, akin to painting on a video canvas. This flexible formulation unifies many existing controllable video generation tasks, including first-frame image-to-video, inpainting, extension, and interpolation, under a single cohesive paradigm. We develop VideoCanvas, a novel framework that adapts the In-Context Conditioning (ICC) paradigm to this fine-grained control task with zero new parameters.
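A minimal sketch of the canvas-style conditioning described above: user patches are pasted into an otherwise empty tensor and a binary mask records which pixels are constraints. This layout and the function name are our illustration under assumed latent shapes, not VideoCanvas's actual interface.

```python
import torch

def build_canvas(patches, n_frames, height, width, channels=4):
    # patches: list of (frame_index, top, left, tensor of shape (C, h, w))
    canvas = torch.zeros(n_frames, channels, height, width)
    mask = torch.zeros(n_frames, 1, height, width)  # 1 = user constraint
    for t, top, left, patch in patches:
        h, w = patch.shape[-2:]
        canvas[t, :, top:top + h, left:left + w] = patch
        mask[t, :, top:top + h, left:left + w] = 1.0
    # The (canvas, mask) pair would then be supplied to the model as
    # in-context conditioning alongside the noisy latents.
    return canvas, mask

canvas, mask = build_canvas(
    [(0, 0, 0, torch.randn(4, 16, 16)), (7, 20, 20, torch.randn(4, 8, 8))],
    n_frames=16, height=64, width=64)
```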
arXiv Detail & Related papers (2025-10-09T17:58:59Z) - SketchVideo: Sketch-based Video Generation and Editing [51.99066098393491]
We aim to achieve sketch-based spatial and motion control for video generation and support fine-grained editing of real or synthetic videos. Based on the DiT video generation model, we propose a memory-efficient control structure with sketch control blocks that predict residual features of skipped DiT blocks. For sketch-based video editing, we design an additional video insertion module that maintains consistency between the newly edited content and the original video's spatial features and dynamic motion.
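The "residual features of skipped DiT blocks" design reads like a ControlNet-style injection; a hedged sketch of that pattern follows, with dimensions and names ours rather than SketchVideo's.

```python
import torch
import torch.nn as nn

class SketchControlBlock(nn.Module):
    # A small block maps sketch tokens to a residual added to the hidden
    # states of a frozen DiT block. Attaching such blocks to only a
    # subset of ("skipped") blocks is what would keep memory low.
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        nn.init.zeros_(self.mlp[-1].weight)  # zero-init: starts as identity,
        nn.init.zeros_(self.mlp[-1].bias)    # so control is learned gradually

    def forward(self, dit_hidden, sketch_tokens):
        return dit_hidden + self.mlp(sketch_tokens)

block = SketchControlBlock(dim=128)
h = torch.randn(1, 256, 128)           # (batch, tokens, dim) DiT hidden states
s = torch.randn(1, 256, 128)           # encoded sketch features, same layout
assert torch.allclose(block(h, s), h)  # identity at initialization
```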
arXiv Detail & Related papers (2025-03-30T02:44:09Z) - VideoPainter: Any-length Video Inpainting and Editing with Plug-and-Play Context Control [47.34885131252508]
Video inpainting aims to restore corrupted video content. We propose VideoPainter, a novel dual-stream paradigm for processing masked videos. We also introduce a novel target region ID resampling technique that enables any-length video inpainting.
arXiv Detail & Related papers (2025-03-07T17:59:46Z) - World-consistent Video Diffusion with Explicit 3D Modeling [67.39618291644673]
World-consistent Video Diffusion (WVD) is a novel framework that incorporates explicit 3D supervision using XYZ images. We train a diffusion transformer to learn the joint distribution of RGB and XYZ frames. WVD unifies tasks like single-image-to-3D generation, multi-view stereo, and camera-controlled video generation.
arXiv Detail & Related papers (2024-12-02T18:58:23Z) - LoopAnimate: Loopable Salient Object Animation [19.761865029125524]
LoopAnimate is a novel method for generating videos with consistent start and end frames.
It achieves state-of-the-art performance in both objective metrics, such as fidelity and temporal consistency, and subjective evaluation results.
arXiv Detail & Related papers (2024-04-14T07:36:18Z) - Lumiere: A Space-Time Diffusion Model for Video Generation [75.54967294846686]
We introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once.
This is in contrast to existing video models, which synthesize distant keyframes followed by temporal super-resolution.
By deploying both spatial and (importantly) temporal down- and up-sampling, our model learns to directly generate a full-frame-rate, low-resolution video.
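The key ingredient this summary highlights, downsampling in time as well as space, can be sketched with a single strided 3D convolution; this is a deliberate simplification of Lumiere's actual space-time U-Net blocks.

```python
import torch
import torch.nn as nn

class SpaceTimeDown(nn.Module):
    # Stride 2 along T, H, and W halves the temporal *and* spatial
    # resolution in one step, so deeper layers see the whole clip cheaply.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, stride=2, padding=1)

    def forward(self, x):  # x: (B, C, T, H, W)
        return self.conv(x)

x = torch.randn(1, 8, 16, 64, 64)
print(SpaceTimeDown(8)(x).shape)  # torch.Size([1, 8, 8, 32, 32])
```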
arXiv Detail & Related papers (2024-01-23T18:05:25Z) - Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [93.18163456287164]
This paper proposes a novel text-guided video-to-video translation framework to adapt image models to videos.
Our framework achieves global style and local texture temporal consistency at a low cost.
arXiv Detail & Related papers (2023-06-13T17:52:23Z) - Make-A-Video: Text-to-Video Generation without Text-Video Data [69.20996352229422]
Make-A-Video is an approach for translating the tremendous recent progress in Text-to-Image (T2I) generation to Text-to-Video (T2V).
We design a simple yet effective way to build on T2I models with novel and effective spatial-temporal modules.
In all aspects, spatial and temporal resolution, faithfulness to text, and quality, Make-A-Video sets the new state-of-the-art in text-to-video generation.
arXiv Detail & Related papers (2022-09-29T13:59:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.