RelightMaster: Precise Video Relighting with Multi-plane Light Images
- URL: http://arxiv.org/abs/2511.06271v1
- Date: Sun, 09 Nov 2025 08:12:09 GMT
- Title: RelightMaster: Precise Video Relighting with Multi-plane Light Images
- Authors: Weikang Bian, Xiaoyu Shi, Zhaoyang Huang, Jianhong Bai, Qinghe Wang, Xintao Wang, Pengfei Wan, Kun Gai, Hongsheng Li
- Abstract summary: RelightMaster is a novel framework for accurate and controllable video relighting. It generates physically plausible lighting and shadows and preserves original scene content.
- Score: 59.56389629981934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in diffusion models enable high-quality video generation and editing, but precise relighting with consistent video contents, which is critical for shaping scene atmosphere and viewer attention, remains unexplored. Mainstream text-to-video (T2V) models lack fine-grained lighting control due to text's inherent limitation in describing lighting details and insufficient pre-training on lighting-related prompts. Additionally, constructing high-quality relighting training data is challenging, as real-world controllable lighting data is scarce. To address these issues, we propose RelightMaster, a novel framework for accurate and controllable video relighting. First, we build RelightVideo, the first dataset with identical dynamic content under varying precise lighting conditions based on the Unreal Engine. Then, we introduce Multi-plane Light Image (MPLI), a novel visual prompt inspired by Multi-Plane Image (MPI). MPLI models lighting via K depth-aligned planes, representing 3D light source positions, intensities, and colors while supporting multi-source scenarios and generalizing to unseen light setups. Third, we design a Light Image Adapter that seamlessly injects MPLI into pre-trained Video Diffusion Transformers (DiT): it compresses MPLI via a pre-trained Video VAE and injects latent light features into DiT blocks, leveraging the base model's generative prior without catastrophic forgetting. Experiments show that RelightMaster generates physically plausible lighting and shadows and preserves original scene content. Demos are available at https://wkbian.github.io/Projects/RelightMaster/.
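The abstract describes MPLI as K depth-aligned planes that encode 3D light source positions, intensities, and colors, analogous to a Multi-Plane Image. The paper's actual construction is not given here, so the following is only a minimal illustrative sketch under assumed conventions: lights are dicts with camera-space `pos`, RGB `color`, and scalar `intensity` (all hypothetical field names), each light is assigned to its nearest depth plane, and plane depths are spaced linearly in disparity as in classic MPI work.

```python
import numpy as np

def build_mpli(lights, K=8, H=64, W=64, depth_range=(0.5, 10.0)):
    """Hypothetical MPLI construction: K depth-aligned RGB planes.

    This is an illustrative sketch, not the paper's method: each light
    is splatted onto the plane nearest to its depth, weighted by its
    intensity-scaled color.
    """
    near, far = depth_range
    # K plane depths, linearly spaced in disparity (classic MPI spacing)
    depths = 1.0 / np.linspace(1.0 / near, 1.0 / far, K)
    planes = np.zeros((K, H, W, 3), dtype=np.float32)
    for light in lights:
        x, y, z = light["pos"]
        # assign the light to the nearest depth plane
        k = int(np.argmin(np.abs(depths - z)))
        # naive pinhole projection onto the image plane (unit focal length)
        u = int(np.clip((x / z * 0.5 + 0.5) * (W - 1), 0, W - 1))
        v = int(np.clip((y / z * 0.5 + 0.5) * (H - 1), 0, H - 1))
        planes[k, v, u] += light["intensity"] * np.asarray(light["color"], np.float32)
    return planes  # shape (K, H, W, 3)

mpli = build_mpli([{"pos": (0.2, -0.1, 2.0),
                    "color": (1.0, 0.9, 0.8),
                    "intensity": 5.0}])
print(mpli.shape)  # (8, 64, 64, 3)
```

In the paper's pipeline, such a stack of light planes would then be compressed by the pre-trained Video VAE before the Light Image Adapter injects the latent features into the DiT blocks; that stage is not sketched here.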
Related papers
- RelightAnyone: A Generalized Relightable 3D Gaussian Head Model [60.590427852071805]
3D Gaussian Splatting (3DGS) has become a standard approach to reconstruct and render photorealistic 3D head avatars. Existing methods require subjects to be captured under complex time-multiplexed illumination, such as one-light-at-a-time (OLAT).
arXiv Detail & Related papers (2026-01-06T19:01:07Z) - UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback [31.03901228901908]
We present UniLumos, a unified relighting framework for both images and videos. We explicitly align lighting effects with the scene structure, enhancing physical plausibility. Experiments demonstrate that UniLumos achieves state-of-the-art relighting with significantly improved physical consistency.
arXiv Detail & Related papers (2025-11-03T15:41:41Z) - Lumen: Consistent Video Relighting and Harmonious Background Replacement with Video Generative Models [18.008901495139717]
We propose Lumen, an end-to-end video relighting framework developed on large-scale video generative models. For the synthetic domain, we leverage an advanced 3D rendering engine to curate video pairs in diverse environments. For the realistic domain, we adapt HDR-based lighting simulation to compensate for the lack of paired in-the-wild videos.
arXiv Detail & Related papers (2025-08-18T14:21:22Z) - LightSwitch: Multi-view Relighting with Material-guided Diffusion [73.5965603000002]
LightSwitch is a novel finetuned material-relighting diffusion framework. We show that our 2D relighting prediction quality exceeds that of previous state-of-the-art relighting priors that directly relight from images.
arXiv Detail & Related papers (2025-08-08T17:59:52Z) - Light-A-Video: Training-free Video Relighting via Progressive Light Fusion [52.420894727186216]
Light-A-Video is a training-free approach to achieve temporally smooth video relighting. Adapted from image relighting models, Light-A-Video introduces two key techniques to enhance lighting consistency.
arXiv Detail & Related papers (2025-02-12T17:24:19Z) - GenLit: Reformulating Single-Image Relighting as Video Generation [42.0880277180892]
We introduce GenLit, a framework that distills the ability of a graphics engine to perform light manipulation into a video-generation model. We find that a model fine-tuned on only a small synthetic dataset generalizes to real-world scenes.
arXiv Detail & Related papers (2024-12-15T15:40:40Z) - LumiSculpt: Enabling Consistent Portrait Lighting in Video Generation [87.95655555555264]
Lighting plays a pivotal role in ensuring the naturalness and aesthetic quality of video generation. LumiSculpt enables precise and consistent lighting control in T2V generation models. LumiHuman is a new dataset for portrait lighting of images and videos.
arXiv Detail & Related papers (2024-10-30T12:44:08Z) - Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z) - Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)