ReLumix: Extending Image Relighting to Video via Video Diffusion Models
- URL: http://arxiv.org/abs/2509.23769v1
- Date: Sun, 28 Sep 2025 09:35:33 GMT
- Title: ReLumix: Extending Image Relighting to Video via Video Diffusion Models
- Authors: Lezhong Wang, Shutong Jin, Ruiqi Cui, Anders Bjorholm Dahl, Jeppe Revall Frisvad, Siavash Bigdeli
- Abstract summary: Controlling illumination during video post-production is a crucial yet elusive goal in computational photography. This paper introduces ReLumix, a novel framework that decouples the relighting algorithm from temporal synthesis. Although trained on synthetic data, ReLumix shows competitive generalization to real-world videos.
- Score: 5.890782804843724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Controlling illumination during video post-production is a crucial yet elusive goal in computational photography. Existing methods often lack flexibility, restricting users to certain relighting models. This paper introduces ReLumix, a novel framework that decouples the relighting algorithm from temporal synthesis, thereby enabling any image relighting technique to be seamlessly applied to video. Our approach reformulates video relighting into a simple yet effective two-stage process: (1) an artist relights a single reference frame using any preferred image-based technique (e.g., Diffusion Models, physics-based renderers); and (2) a fine-tuned Stable Video Diffusion (SVD) model seamlessly propagates this target illumination throughout the sequence. To ensure temporal coherence and prevent artifacts, we introduce a gated cross-attention mechanism for smooth feature blending and a temporal bootstrapping strategy that harnesses SVD's powerful motion priors. Although trained on synthetic data, ReLumix shows competitive generalization to real-world videos. The method demonstrates significant improvements in visual fidelity, offering a scalable and versatile solution for dynamic lighting control.
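The gated cross-attention mentioned in the abstract can be made concrete with a short sketch. The PyTorch module below is a hypothetical reconstruction, not the authors' released code: the class name GatedCrossAttention, the token shapes, and the zero-initialized tanh gate are all assumptions, chosen because a zero-initialized gate is a common way to inject a new condition (here, features of the relit reference frame) into a pretrained backbone without disturbing its initial behavior.

```python
import torch
import torch.nn as nn


class GatedCrossAttention(nn.Module):
    """Blend video-denoiser features with features of the relit reference frame.

    Hypothetical sketch: a tanh gate initialized at zero keeps the pretrained
    backbone's behavior intact at the start of fine-tuning, and the blend
    strength is then learned.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: block starts as identity

    def forward(self, video_tokens: torch.Tensor, ref_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (B, N_video, dim) features inside the video diffusion denoiser
        # ref_tokens:   (B, N_ref, dim) features of the relit reference frame
        attended, _ = self.attn(
            self.norm(video_tokens),  # queries from the video stream
            ref_tokens,               # keys from the reference frame
            ref_tokens,               # values from the reference frame
            need_weights=False,
        )
        # Gated residual: output equals input when gate == 0
        return video_tokens + torch.tanh(self.gate) * attended


if __name__ == "__main__":
    # Smoke test with toy shapes
    block = GatedCrossAttention(dim=64)
    video = torch.randn(2, 16, 64)  # 2 clips, 16 tokens each, 64 channels
    ref = torch.randn(2, 8, 64)     # 8 reference-frame tokens
    assert block(video, ref).shape == video.shape
```

Under this reading, the block acts as an identity mapping at the start of fine-tuning and gradually learns how strongly to blend the relit reference features into the video stream, which is one plausible way to realize the "smooth feature blending" the abstract describes.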
Related papers
- 3One2: One-step Regression Plus One-step Diffusion for One-hot Modulation in Dual-path Video Snapshot Compressive Imaging [15.082139132074294]
Video snapshot compressive imaging (SCI) captures dynamic scene sequences through a two-dimensional (2D) snapshot. One-hot modulation, activating only one sub-frame per pixel, provides a promising solution for achieving perfect temporal decoupling. We propose an algorithm specifically designed for one-hot masks.
arXiv Detail & Related papers (2025-12-19T13:44:36Z)
- Light-X: Generative 4D Video Rendering with Camera and Illumination Control [52.87059646145144]
Light-X is a video generation framework that enables controllable rendering from monocular videos with both viewpoint and illumination control. To address the lack of paired multi-view and multi-illumination videos, we introduce Light-Syn, a degradation-based pipeline with inverse-mapping.
arXiv Detail & Related papers (2025-12-04T18:59:57Z)
- TC-Light: Temporally Coherent Generative Rendering for Realistic World Transfer [47.22201704648345]
Illumination and texture editing are critical dimensions for world-to-world transfer. Existing techniques, such as video relighting models and conditioned world generation models, generatively re-render the input video to realize the transfer. We propose TC-Light, a novel generative approach that overcomes the limitations of these techniques.
arXiv Detail & Related papers (2025-06-23T17:59:58Z)
- UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting [85.27994475113056]
We introduce a general-purpose approach that jointly estimates albedo and synthesizes relit outputs in a single pass. Our model demonstrates strong generalization across diverse domains and surpasses previous methods in both visual fidelity and temporal consistency.
arXiv Detail & Related papers (2025-06-18T17:56:45Z)
- Light-A-Video: Training-free Video Relighting via Progressive Light Fusion [52.420894727186216]
Light-A-Video is a training-free approach to achieve temporally smooth video relighting. Adapted from image relighting models, Light-A-Video introduces two key techniques to enhance lighting consistency.
arXiv Detail & Related papers (2025-02-12T17:24:19Z)
- RelightVid: Temporal-Consistent Diffusion Model for Video Relighting [95.10341081549129]
RelightVid is a flexible framework for video relighting. It can accept background video, text prompts, or environment maps as relighting conditions. It achieves arbitrary video relighting with high temporal consistency without intrinsic decomposition.
arXiv Detail & Related papers (2025-01-27T18:59:57Z)
- Real-time 3D-aware Portrait Video Relighting [89.41078798641732]
We present the first real-time 3D-aware method for relighting in-the-wild videos of talking faces based on Neural Radiance Fields (NeRF).
For each video frame, we use fast dual encoders to infer an albedo tri-plane and a shading tri-plane based on a desired lighting condition.
Our method runs at 32.98 fps on consumer-level hardware and achieves state-of-the-art results in terms of reconstruction quality, lighting error, lighting instability, temporal consistency and inference speed.
arXiv Detail & Related papers (2024-10-24T01:34:11Z)
- Low-Light Video Enhancement with Synthetic Event Guidance [188.7256236851872]
We use synthetic events from multiple frames to guide the enhancement and restoration of low-light videos.
Our method outperforms existing low-light video or single image enhancement approaches on both synthetic and real LLVE datasets.
arXiv Detail & Related papers (2022-08-23T14:58:29Z)
- Neural Video Portrait Relighting in Real-time via Consistency Modeling [41.04622998356025]
We propose a neural approach for real-time, high-quality and coherent video portrait relighting.
We propose a hybrid structure and lighting disentanglement in an encoder-decoder architecture.
We also propose a lighting sampling strategy to model the illumination consistency and mutation for natural portrait light manipulation in the real world.
arXiv Detail & Related papers (2021-04-01T14:13:28Z)