UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting
- URL: http://arxiv.org/abs/2506.15673v1
- Date: Wed, 18 Jun 2025 17:56:45 GMT
- Title: UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting
- Authors: Kai He, Ruofan Liang, Jacob Munkberg, Jon Hasselgren, Nandita Vijaykumar, Alexander Keller, Sanja Fidler, Igor Gilitschenski, Zan Gojcic, Zian Wang
- Abstract summary: We introduce a general-purpose approach that jointly estimates albedo and synthesizes relit outputs in a single pass. Our model demonstrates strong generalization across diverse domains and surpasses previous methods in both visual fidelity and temporal consistency.
- Score: 85.27994475113056
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the challenge of relighting a single image or video, a task that demands precise scene intrinsic understanding and high-quality light transport synthesis. Existing end-to-end relighting models are often limited by the scarcity of paired multi-illumination data, restricting their ability to generalize across diverse scenes. Conversely, two-stage pipelines that combine inverse and forward rendering can mitigate data requirements but are susceptible to error accumulation and often fail to produce realistic outputs under complex lighting conditions or with sophisticated materials. In this work, we introduce a general-purpose approach that jointly estimates albedo and synthesizes relit outputs in a single pass, harnessing the generative capabilities of video diffusion models. This joint formulation enhances implicit scene comprehension and facilitates the creation of realistic lighting effects and intricate material interactions, such as shadows, reflections, and transparency. Trained on synthetic multi-illumination data and extensive automatically labeled real-world videos, our model demonstrates strong generalization across diverse domains and surpasses previous methods in both visual fidelity and temporal consistency.
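To make the joint formulation concrete, below is a minimal PyTorch sketch of how a single denoising pass could be conditioned on a lighting signal and produce both an albedo estimate and relit frames. The module names, tensor shapes, and conditioning scheme are illustrative assumptions for this sketch, not the paper's architecture.

```python
# Conceptual sketch only: one forward pass jointly yields albedo and relit-video
# latents, rather than chaining separate inverse- and forward-rendering stages.
import torch
import torch.nn as nn


class JointRelightingDenoiser(nn.Module):
    """Toy stand-in for a video diffusion denoiser that predicts albedo and
    relit frames jointly from noisy latents plus lighting conditioning."""

    def __init__(self, channels: int = 8, cond_channels: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(channels + cond_channels, 32, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv3d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, noisy_latents, lighting_cond):
        # Concatenate lighting conditioning along channels; split the output
        # latent into an albedo branch and a relit-video branch.
        x = torch.cat([noisy_latents, lighting_cond], dim=1)
        out = self.backbone(x)
        albedo_latent, relit_latent = out.chunk(2, dim=1)
        return albedo_latent, relit_latent


if __name__ == "__main__":
    # One clip: 8 latent channels, 4 frames, 32x32 latent resolution (assumed).
    noisy = torch.randn(1, 8, 4, 32, 32)
    light = torch.randn(1, 4, 4, 32, 32)  # e.g. an encoded environment map
    model = JointRelightingDenoiser()
    albedo, relit = model(noisy, light)
    print(albedo.shape, relit.shape)  # both (1, 4, 4, 32, 32)
```

In this reading, sharing one backbone for decomposition and synthesis is what lets the model exploit implicit scene understanding (shadows, reflections, transparency) instead of accumulating errors across two separate stages.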
Related papers
- After the Party: Navigating the Mapping From Color to Ambient Lighting [48.01497878412971]
We introduce CL3AN, the first large-scale, high-resolution dataset of its kind. We find that leading approaches often produce artifacts, such as illumination inconsistencies, texture leakage, and color distortion. We achieve such a desired decomposition through a novel learning framework.
arXiv Detail & Related papers (2025-08-04T08:07:03Z) - MV-CoLight: Efficient Object Compositing with Consistent Lighting and Shadow Generation [19.46962637673285]
MV-CoLight is a framework for illumination-consistent object compositing in 2D and 3D scenes. We employ a Hilbert curve-based mapping to align 2D image inputs with 3D Gaussian scene representations seamlessly. Experiments demonstrate state-of-the-art harmonized results across standard benchmarks and our dataset.
arXiv Detail & Related papers (2025-05-27T17:53:02Z) - Comprehensive Relighting: Generalizable and Consistent Monocular Human Relighting and Harmonization [43.02033340663918]
Comprehensive Relighting is the first all-in-one approach that can both control and harmonize the lighting of an image or video of humans with arbitrary body parts from any scene. In the experiments, Comprehensive Relighting shows strong generalizability and temporal lighting coherence, outperforming existing image-based human relighting and harmonization methods.
arXiv Detail & Related papers (2025-04-03T20:10:50Z) - GroomLight: Hybrid Inverse Rendering for Relightable Human Hair Appearance Modeling [56.94251484447597]
GroomLight is a novel method for relightable hair appearance modeling from multi-view images. We propose a hybrid inverse rendering pipeline to optimize both components, enabling high-fidelity relighting, view synthesis, and material editing.
arXiv Detail & Related papers (2025-03-13T17:43:12Z) - RelightVid: Temporal-Consistent Diffusion Model for Video Relighting [95.10341081549129]
RelightVid is a flexible framework for video relighting. It can accept background video, text prompts, or environment maps as relighting conditions. It achieves arbitrary video relighting with high temporal consistency without intrinsic decomposition.
arXiv Detail & Related papers (2025-01-27T18:59:57Z) - ReCap: Better Gaussian Relighting with Cross-Environment Captures [51.2614945509044]
We present ReCap, a multi-task system for accurate 3D object relighting in unseen environments. Specifically, ReCap jointly optimizes multiple lighting representations that share a common set of material attributes. This naturally harmonizes a coherent set of lighting representations around the mutual material attributes, exploiting commonalities and differences across varied object appearances. Together with a streamlined shading function and effective post-processing, ReCap outperforms all leading competitors on an expanded relighting benchmark.
arXiv Detail & Related papers (2024-12-10T14:15:32Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - Self-supervised Light Field View Synthesis Using Cycle Consistency [22.116100469958436]
We propose a self-supervised light field view synthesis framework with cycle consistency.
A cycle consistency constraint is used to build a mapping that enforces the generated views to be consistent with the input views.
Results show it outperforms state-of-the-art light field view synthesis methods, especially when generating multiple intermediate views.
arXiv Detail & Related papers (2020-08-12T03:20:19Z)