ReCap: Better Gaussian Relighting with Cross-Environment Captures
- URL: http://arxiv.org/abs/2412.07534v1
- Date: Tue, 10 Dec 2024 14:15:32 GMT
- Title: ReCap: Better Gaussian Relighting with Cross-Environment Captures
- Authors: Jingzhi Li, Zongwei Wu, Eduard Zamfir, Radu Timofte
- Abstract summary: In this work, we present ReCap, treating cross-environment captures as multi-task targets to provide the missing supervision that cuts through the entanglement.
Specifically, ReCap jointly optimizes multiple lighting representations that share a common set of material attributes.
This naturally harmonizes a coherent set of lighting representations around the mutual material attributes, exploiting commonalities and differences across varied object appearances.
Together with a streamlined shading function and effective post-processing, ReCap outperforms the leading competitor by 3.4 dB in PSNR on an expanded relighting benchmark.
- Abstract: Accurate relighting of 3D objects in diverse unseen environments is crucial for realistic virtual object placement. Due to the albedo-lighting ambiguity, existing methods often fall short in producing faithful relights. Without proper constraints, observed training views can be explained by numerous combinations of lighting and material attributes, lacking physical correspondence with the actual environment maps used for relighting. In this work, we present ReCap, treating cross-environment captures as multi-task targets to provide the missing supervision that cuts through the entanglement. Specifically, ReCap jointly optimizes multiple lighting representations that share a common set of material attributes. This naturally harmonizes a coherent set of lighting representations around the mutual material attributes, exploiting commonalities and differences across varied object appearances. Such coherence enables physically sound lighting reconstruction and robust material estimation - both essential for accurate relighting. Together with a streamlined shading function and effective post-processing, ReCap outperforms the leading competitor by 3.4 dB in PSNR on an expanded relighting benchmark.
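The core idea above - one shared set of material attributes fit jointly against several per-environment lighting representations - can be illustrated with a deliberately simplified sketch. All names here are hypothetical and the shading is reduced to a toy Lambertian-style product; the paper's actual Gaussian representation, shading function, and optimizer differ in detail.

```python
# Toy sketch of joint cross-environment optimization: captures of the same
# object under several environments constrain a single shared "material"
# vector, while each environment gets its own "lighting" parameter.
import numpy as np

rng = np.random.default_rng(0)

n_points, n_envs = 64, 3
albedo = rng.random(n_points)                 # ground-truth shared material (toy: scalar albedo)
lights = rng.random(n_envs) + 0.1             # ground-truth lighting, one scalar per environment
targets = np.outer(lights, albedo)            # toy "captures": light * albedo per environment

mat = np.full(n_points, 0.5)                  # optimized material, shared across all environments
env = np.full(n_envs, 0.5)                    # optimized per-environment lighting
lr = 0.5

for _ in range(2000):
    pred = np.outer(env, mat)                 # Lambertian-style shading: lighting * albedo
    err = pred - targets                      # residual over all environments jointly
    mat -= lr * (env[:, None] * err).mean(axis=0)  # gradient step on the shared material
    env -= lr * (mat[None, :] * err).mean(axis=1)  # gradient step on each lighting
```

Even in this toy setting, a single environment would leave the albedo-lighting split ambiguous (any rescaling of one factor can be absorbed by the other); the multiple environments tie the shared material down up to a single global scale, which is the kind of disentangling supervision the abstract describes.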
Related papers
- IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations [64.07859467542664]
Capturing geometric and material information from images remains a fundamental challenge in computer vision and graphics.
Traditional optimization-based methods often require hours of computational time to reconstruct geometry, material properties, and environmental lighting from dense multi-view inputs.
We introduce IDArb, a diffusion-based model designed to perform intrinsic decomposition on an arbitrary number of images under varying illuminations.
arXiv Detail & Related papers (2024-12-16T18:52:56Z) - PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z) - RRM: Relightable assets using Radiance guided Material extraction [5.175522626712229]
We propose a method that can extract materials, geometry, and environment lighting of a scene even in the presence of highly reflective objects.
Our method consists of a physically-aware radiance field representation that informs physically-based parameters, and an expressive environment light structure based on a Laplacian Pyramid.
We demonstrate that our contributions outperform the state-of-the-art on parameter retrieval tasks, leading to high-fidelity relighting and novel view synthesis on surfacic scenes.
arXiv Detail & Related papers (2024-07-08T21:10:31Z) - GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
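The ambient/diffuse/specular decomposition that GS-Phong borrows is the classic Blinn-Phong model; a minimal sketch (coefficient names and values chosen here for illustration, not taken from the paper):

```python
# Classic Blinn-Phong shading split into its three components.
import numpy as np

def blinn_phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.2, shininess=32.0):
    """Return (ambient, diffuse, specular) intensities for unit-length vectors."""
    ambient = ka                                              # lighting-independent base term
    diffuse = kd * max(float(np.dot(normal, light_dir)), 0.0) # Lambertian cosine term
    half = light_dir + view_dir                               # Blinn's half-vector
    half = half / np.linalg.norm(half)
    specular = ks * max(float(np.dot(normal, half)), 0.0) ** shininess
    return ambient, diffuse, specular
```

Because the ambient term is independent of the light and view directions while the diffuse and specular terms are not, separating the three components gives the decomposition of geometry-dependent and lighting-dependent appearance that the meta-learning framework above exploits.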
arXiv Detail & Related papers (2024-05-31T13:48:54Z) - DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading [50.331929164207324]
We introduce DeferredGS, a method for decoupling and editing the Gaussian splatting representation using deferred shading.
Both qualitative and quantitative experiments demonstrate the superior performance of DeferredGS in novel view and editing tasks.
arXiv Detail & Related papers (2024-04-15T01:58:54Z) - SIR: Multi-view Inverse Rendering with Decomposable Shadow for Indoor Scenes [0.88756501225368]
We propose SIR, an efficient method to decompose differentiable shadows for inverse rendering on indoor scenes using multi-view data.
SIR explicitly learns shadows for enhanced realism in material estimation under unknown light positions.
SIR's strong decomposition ability enables sophisticated editing such as free-view relighting, object insertion, and material replacement.
arXiv Detail & Related papers (2024-02-09T01:48:44Z) - Towards Practical Capture of High-Fidelity Relightable Avatars [60.25823986199208]
TRAvatar is trained with dynamic image sequences captured in a Light Stage under varying lighting conditions.
It can predict the appearance in real-time with a single forward pass, achieving high-quality relighting effects.
Our framework achieves superior performance for photorealistic avatar animation and relighting.
arXiv Detail & Related papers (2023-09-08T10:26:29Z) - LitAR: Visually Coherent Lighting for Mobile Augmented Reality [24.466149552743516]
We present the design and implementation of a lighting reconstruction framework called LitAR.
LitAR addresses several challenges of supporting lighting information for mobile AR.
arXiv Detail & Related papers (2023-01-15T20:47:38Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIB-R++, a hybrid differentiable renderer that supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - Illumination Normalization by Partially Impossible Encoder-Decoder Cost Function [13.618797548020462]
We introduce a new strategy for the cost function formulation of encoder-decoder networks to average out all the unimportant information in the input images.
Our method exploits the availability of identical sceneries under different illumination and environmental conditions.
Its applicability is assessed on three publicly available datasets.
arXiv Detail & Related papers (2020-11-06T15:25:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.