ReCap: Better Gaussian Relighting with Cross-Environment Captures
- URL: http://arxiv.org/abs/2412.07534v3
- Date: Thu, 27 Mar 2025 09:50:26 GMT
- Title: ReCap: Better Gaussian Relighting with Cross-Environment Captures
- Authors: Jingzhi Li, Zongwei Wu, Eduard Zamfir, Radu Timofte
- Abstract summary: We present ReCap, a multi-task system for accurate 3D object relighting in unseen environments. Specifically, ReCap jointly optimizes multiple lighting representations that share a common set of material attributes. This naturally harmonizes a coherent set of lighting representations around the mutual material attributes, exploiting commonalities and differences across varied object appearances. Together with a streamlined shading function and effective post-processing, ReCap outperforms all leading competitors on an expanded relighting benchmark.
- Score: 51.2614945509044
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate 3D object relighting in diverse unseen environments is crucial for realistic virtual object placement. Due to the albedo-lighting ambiguity, existing methods often fall short in producing faithful relights. Without proper constraints, observed training views can be explained by numerous combinations of lighting and material attributes, lacking physical correspondence with the actual environment maps used for relighting. In this work, we present ReCap, treating cross-environment captures as a multi-task target to provide the missing supervision that cuts through the entanglement. Specifically, ReCap jointly optimizes multiple lighting representations that share a common set of material attributes. This naturally harmonizes a coherent set of lighting representations around the mutual material attributes, exploiting commonalities and differences across varied object appearances. Such coherence enables physically sound lighting reconstruction and robust material estimation - both essential for accurate relighting. Together with a streamlined shading function and effective post-processing, ReCap outperforms all leading competitors on an expanded relighting benchmark.
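For intuition, below is a minimal, self-contained sketch of the cross-environment idea described in the abstract: several captures of the same object under different unknown environments are fit jointly, with one shared set of material attributes and one learnable lighting representation per environment. The toy diffuse shading, the spherical-harmonics lighting, and all variable names are illustrative assumptions, not ReCap's actual shading function or code.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
K, N = 3, 4096                                   # environments, surface samples
normals = F.normalize(torch.randn(N, 3), dim=1)

def sh_basis(n):
    # First four real spherical-harmonic basis functions of the normal (bands 0-1).
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return torch.stack([0.2821 * torch.ones_like(x), 0.4886 * y, 0.4886 * z, 0.4886 * x], dim=1)

B = sh_basis(normals)                            # (N, 4), fixed by geometry

# Synthesize "captures": one shared ground-truth albedo, K distinct lightings.
gt_albedo = torch.rand(N, 3)
gt_lights = torch.rand(K, 4, 3)
observed = [gt_albedo * (B @ gt_lights[k]).clamp(min=0) for k in range(K)]

# Learnable shared material plus one lighting representation per environment;
# the shared albedo is what couples the K reconstruction tasks together.
albedo = torch.rand(N, 3, requires_grad=True)
lights = [(0.3 * torch.rand(4, 3)).requires_grad_() for _ in range(K)]
opt = torch.optim.Adam([albedo, *lights], lr=5e-2)

for step in range(2000):
    loss = sum(
        (albedo * (B @ lights[k]).clamp(min=0) - observed[k]).abs().mean()
        for k in range(K)
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because every environment must explain its observations with the same albedo, a lighting estimate that wrongly absorbs texture into one environment's map contradicts the other captures; that is the ambiguity-breaking effect the abstract describes.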
Related papers
- After the Party: Navigating the Mapping From Color to Ambient Lighting [48.01497878412971]
We introduce CL3AN, the first large-scale, high-resolution dataset of its kind. We find that leading approaches often produce artifacts, such as illumination inconsistencies, texture leakage, and color distortion. We achieve the desired decomposition through a novel learning framework.
arXiv Detail & Related papers (2025-08-04T08:07:03Z)
- UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting [85.27994475113056]
We introduce a general-purpose approach that jointly estimates albedo and synthesizes relit outputs in a single pass. Our model demonstrates strong generalization across diverse domains and surpasses previous methods in both visual fidelity and temporal consistency.
arXiv Detail & Related papers (2025-06-18T17:56:45Z)
- RRM: Relightable assets using Radiance guided Material extraction [5.175522626712229]
We propose a method that can extract materials, geometry, and environment lighting of a scene even in the presence of highly reflective objects.
Our method consists of a physically-aware radiance field representation that informs physically-based parameters, and an expressive environment light structure based on a Laplacian Pyramid.
We demonstrate that our contributions outperform the state-of-the-art on parameter retrieval tasks, leading to high-fidelity relighting and novel view synthesis on surfacic scenes.
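As a reference for the light structure this entry mentions, the following is a textbook Laplacian pyramid over an equirectangular environment map: coarse levels capture smooth, diffuse-like lighting while fine levels carry the high-frequency detail needed for sharp reflections. This is a generic sketch of the data structure, not RRM's implementation, and the map size is arbitrary.

```python
import torch
import torch.nn.functional as F

def laplacian_pyramid(env, levels=4):
    # env: (C, H, W) environment map; returns band-pass levels plus a low-pass base.
    pyr, cur = [], env.unsqueeze(0)              # (1, C, H, W) for pooling ops
    for _ in range(levels - 1):
        down = F.avg_pool2d(cur, 2)
        up = F.interpolate(down, size=cur.shape[-2:], mode="bilinear", align_corners=False)
        pyr.append((cur - up).squeeze(0))        # band-pass residual at this scale
        cur = down
    pyr.append(cur.squeeze(0))                   # coarsest level: smooth lighting
    return pyr

env = torch.rand(3, 64, 128)                     # toy equirectangular map
bands = laplacian_pyramid(env)

# Exact reconstruction by upsample-and-add, coarse to fine.
recon = bands[-1]
for band in reversed(bands[:-1]):
    recon = F.interpolate(recon.unsqueeze(0), size=band.shape[-2:],
                          mode="bilinear", align_corners=False).squeeze(0) + band
assert torch.allclose(recon, env, atol=1e-5)
```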
arXiv Detail & Related papers (2024-07-08T21:10:31Z)
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
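For reference, the ambient/diffuse/specular decomposition this summary refers to is the classic Blinn-Phong model, sketched below; this is the textbook formulation, not GS-Phong's code, and the toy material values are arbitrary.

```python
import torch
import torch.nn.functional as F

def blinn_phong(normal, light_dir, view_dir, ambient, diffuse, specular, shininess):
    # Classic Blinn-Phong shading split into its three components.
    n = F.normalize(normal, dim=-1)
    l = F.normalize(light_dir, dim=-1)
    v = F.normalize(view_dir, dim=-1)
    h = F.normalize(l + v, dim=-1)                              # half-vector
    amb = ambient                                               # lighting-independent
    diff = diffuse * (n * l).sum(-1, keepdim=True).clamp(min=0) # Lambertian term
    spec = specular * (n * h).sum(-1, keepdim=True).clamp(min=0) ** shininess
    return amb + diff + spec                                    # per-point RGB

# One shaded point under a white point light, viewed head-on (toy values).
color = blinn_phong(
    normal=torch.tensor([[0.0, 0.0, 1.0]]), light_dir=torch.tensor([[0.0, 1.0, 1.0]]),
    view_dir=torch.tensor([[0.0, 0.0, 1.0]]), ambient=torch.tensor([[0.05, 0.05, 0.05]]),
    diffuse=torch.tensor([[0.6, 0.3, 0.2]]), specular=torch.tensor([[0.4, 0.4, 0.4]]),
    shininess=32.0,
)
```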
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
- DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading [50.331929164207324]
We introduce DeferredGS, a method for decoupling and editing the Gaussian splatting representation using deferred shading.
Both qualitative and quantitative experiments demonstrate the superior performance of DeferredGS in novel view and editing tasks.
arXiv Detail & Related papers (2024-04-15T01:58:54Z)
- SIR: Multi-view Inverse Rendering with Decomposable Shadow for Indoor Scenes [0.88756501225368]
We propose SIR, an efficient method to decompose differentiable shadows for inverse rendering on indoor scenes using multi-view data.
SIR explicitly learns shadows for enhanced realism in material estimation under unknown light positions.
The strong decomposition ability of SIR enables sophisticated editing capabilities such as free-view relighting, object insertion, and material replacement.
arXiv Detail & Related papers (2024-02-09T01:48:44Z)
- Towards Practical Capture of High-Fidelity Relightable Avatars [60.25823986199208]
TRAvatar is trained with dynamic image sequences captured in a Light Stage under varying lighting conditions.
It can predict the appearance in real-time with a single forward pass, achieving high-quality relighting effects.
Our framework achieves superior performance for photorealistic avatar animation and relighting.
arXiv Detail & Related papers (2023-09-08T10:26:29Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a quick and accurate method for estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- LitAR: Visually Coherent Lighting for Mobile Augmented Reality [24.466149552743516]
We present the design and implementation of a lighting reconstruction framework called LitAR.
LitAR addresses several challenges of supporting lighting information for mobile AR.
arXiv Detail & Related papers (2023-01-15T20:47:38Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Illumination Normalization by Partially Impossible Encoder-Decoder Cost Function [13.618797548020462]
We introduce a new strategy for the cost function formulation of encoder-decoder networks to average out all the unimportant information in the input images.
Our method exploits the availability of identical sceneries under different illumination and environmental conditions.
Its applicability is assessed on three publicly available datasets.
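One plausible reading of this strategy is sketched below: reconstruct a scene captured under one illumination against the same scene under a different, randomly chosen illumination, so that pixel-exact reconstruction is partially impossible and the network is pushed toward illumination-invariant content. The loss shape and the dummy encoder-decoder are assumptions for illustration, not the paper's exact cost function.

```python
import torch
import torch.nn as nn

def cross_illumination_loss(model, scenes):
    # scenes: list of tensors (M, C, H, W), M captures of one scenery
    # under M different illumination/environmental conditions.
    loss = 0.0
    for captures in scenes:
        i, j = torch.randint(len(captures), (2,))    # i may coincide with j here
        pred = model(captures[i].unsqueeze(0))       # input: condition i
        # Target is the same scenery under condition j; the illumination
        # mismatch makes exact reconstruction partially impossible, so the
        # network averages out illumination-specific content.
        loss = loss + (pred - captures[j].unsqueeze(0)).abs().mean()
    return loss / len(scenes)

# Toy usage with a dummy encoder-decoder.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 3, 3, padding=1))
scenes = [torch.rand(4, 3, 32, 32) for _ in range(2)]   # 2 sceneries x 4 conditions
print(cross_illumination_loss(model, scenes).item())
```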
arXiv Detail & Related papers (2020-11-06T15:25:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.