Relative Illumination Fields: Learning Medium and Light Independent Underwater Scenes
- URL: http://arxiv.org/abs/2504.10024v1
- Date: Mon, 14 Apr 2025 09:28:04 GMT
- Title: Relative Illumination Fields: Learning Medium and Light Independent Underwater Scenes
- Authors: Mengkun She, Felix Seegräber, David Nakath, Patricia Schöntag, Kevin Köser
- Abstract summary: We address the challenge of constructing a consistent and photorealistic Neural Radiance Field in inhomogeneously illuminated, scattering environments. We propose a novel illumination field locally attached to the camera, enabling the capture of uneven lighting effects within the viewing frustum. We combine this with a volumetric medium representation to an overall method that effectively handles interaction between dynamic illumination field and static scattering medium.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the challenge of constructing a consistent and photorealistic Neural Radiance Field in inhomogeneously illuminated, scattering environments with unknown, co-moving light sources. While most existing works on underwater scene representation focus on a static homogeneous illumination, limited attention has been paid to scenarios such as when a robot explores water deeper than a few tens of meters, where sunlight becomes insufficient. To address this, we propose a novel illumination field locally attached to the camera, enabling the capture of uneven lighting effects within the viewing frustum. We combine this with a volumetric medium representation to an overall method that effectively handles interaction between dynamic illumination field and static scattering medium. Evaluation results demonstrate the effectiveness and flexibility of our approach.
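The interaction between attenuation and scattering that the abstract describes can be illustrated with a simplified underwater image-formation model (a minimal sketch only, not the paper's actual formulation; the coefficient values `beta_d`, `beta_b`, and the veiling light `B_inf` are illustrative assumptions):

```python
import numpy as np

def render_pixel(J, z, beta_d=0.4, beta_b=0.3, B_inf=0.2):
    """Simplified underwater image formation (Beer-Lambert style):
    the direct signal attenuates exponentially with range z, while
    backscatter from the medium saturates toward the veiling light B_inf."""
    direct = J * np.exp(-beta_d * z)                    # attenuated scene radiance
    backscatter = B_inf * (1.0 - np.exp(-beta_b * z))   # medium contribution
    return direct + backscatter

# At zero range the pixel equals the scene radiance;
# at large range it approaches the veiling light.
print(render_pixel(J=1.0, z=0.0))    # 1.0
print(render_pixel(J=1.0, z=100.0))  # ~0.2
```

With a co-moving artificial light, `J` itself would additionally depend on the camera-attached illumination field, which is the coupling the proposed method models.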
Related papers
- Illuminant and light direction estimation using Wasserstein distance method [0.0]
This study introduces a novel method utilizing the Wasserstein distance to estimate illuminant and light direction in images. Experiments on diverse images demonstrate the method's efficacy in detecting dominant light sources and estimating their directions. The approach shows promise for applications in light source localization, image quality assessment, and object detection enhancement.
arXiv Detail & Related papers (2025-03-03T19:20:09Z) - SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an implicit neural rendering inverse approach that decomposes the scene into environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z) - Advanced Underwater Image Restoration in Complex Illumination Conditions [12.270546709771926]
Most solutions focus on shallow water scenarios, where the scene is uniformly illuminated by the sunlight.
The vast majority of uncharted underwater terrain lies beyond a few tens of meters depth, where natural light is scarce and artificial illumination is needed.
We conduct extensive experiments on a simulated seafloor and demonstrate our approach's ability to restore lighting and medium effects.
arXiv Detail & Related papers (2023-09-05T13:22:16Z) - Non-line-of-sight imaging in the presence of scattering media using phasor fields [0.7999703756441756]
Non-line-of-sight (NLOS) imaging aims to reconstruct partially or completely occluded scenes.
We investigate current state-of-the-art NLOS imaging methods based on phasor fields to reconstruct scenes submerged in scattering media.
arXiv Detail & Related papers (2023-08-25T13:05:36Z) - TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
TensoRF is a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z) - WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z) - Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z) - Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z) - Neural Relightable Participating Media Rendering [26.431106015677]
We learn neural representations for participating media with a complete simulation of global illumination.
Our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-10-25T14:36:15Z) - Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
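For context, the naive baseline that such learned relighting improves on is linear blending of the captured images from the nearest light directions (a hedged sketch under assumed inputs; the neighborhood size and cosine weighting are illustrative choices, not the paper's method):

```python
import numpy as np

def blend_neighbors(images, light_dirs, query_dir, k=3):
    """Naive relighting baseline: blend the k captured images whose
    light directions are closest (by cosine similarity) to the query."""
    dirs = np.asarray(light_dirs, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    q = np.asarray(query_dir, dtype=float)
    q /= np.linalg.norm(q)
    sims = dirs @ q                         # cosine similarity per captured light
    idx = np.argsort(-sims)[:k]             # indices of the k nearest lights
    w = np.clip(sims[idx], 0.0, None)
    w /= w.sum()                            # normalized blend weights
    return sum(wi * images[i] for wi, i in zip(w, idx))
```

Such blending ghosts hard shadows and smears specular highlights between samples, which is precisely the failure mode a learned synthesis network is meant to avoid.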
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.