Advanced Underwater Image Restoration in Complex Illumination Conditions
- URL: http://arxiv.org/abs/2309.02217v1
- Date: Tue, 5 Sep 2023 13:22:16 GMT
- Title: Advanced Underwater Image Restoration in Complex Illumination Conditions
- Authors: Yifan Song, Mengkun She, Kevin Köser
- Abstract summary: Most solutions focus on shallow water scenarios, where the scene is uniformly illuminated by sunlight.
The vast majority of uncharted underwater terrain is located beyond 200 meters depth, where natural light is scarce and artificial illumination is needed.
We conduct extensive experiments on a simulated seafloor and demonstrate the effectiveness of our approach in removing lighting and medium effects.
- Score: 12.270546709771926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater image restoration has been a challenging problem for decades since
the advent of underwater photography. Most solutions focus on shallow water
scenarios, where the scene is uniformly illuminated by sunlight. However, the
vast majority of uncharted underwater terrain lies beyond 200 meters depth,
where natural light is scarce and artificial illumination is needed. In such
cases, light sources co-moving with the camera dynamically change the scene
appearance, which makes shallow water restoration methods inadequate. In
particular, for multi-light-source systems (nowadays composed of dozens of
LEDs), calibrating each light is time-consuming, error-prone, and tedious, and
we observe that only the integrated illumination within the viewing volume of
the camera is critical, rather than the individual light sources. The key idea
of this paper is therefore to exploit the appearance changes of objects or the
seafloor as they traverse the viewing frustum of the camera. Through new
constraints assuming Lambertian surfaces, corresponding image pixels constrain
the light field in front of the camera. For each voxel, a signal factor and a
backscatter value are stored in a volumetric grid, enabling very efficient
restoration of images from camera-light platforms and facilitating consistent
texturing of large 3D models and maps that would otherwise be dominated by
lighting and medium artifacts. To validate the effectiveness of our approach,
we conducted extensive experiments on simulated and real-world datasets. The
results demonstrate the robustness of our approach in restoring the true
albedo of objects while mitigating the influence of lighting and medium
effects. Furthermore, we demonstrate that our approach can be readily extended
to other scenarios, including in-air imaging with artificial illumination.
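For concreteness, here is a minimal Python sketch (not from the paper) of the restoration step the abstract describes: each pixel, given its scene depth, indexes a voxel of a precomputed frustum grid, and the stored signal factor s and backscatter value b invert the image formation model I = albedo * s + b. The grid resolution, helper names, and nearest-voxel indexing are illustrative assumptions; the paper's estimation of the grid from multi-view correspondences under the Lambertian constraints is not shown.

```python
import numpy as np

# Hypothetical grid resolution over the camera's viewing frustum (x, y, depth).
GRID_SHAPE = (64, 64, 32)
signal = np.ones(GRID_SHAPE, dtype=np.float32)        # per-voxel signal factor s
backscatter = np.zeros(GRID_SHAPE, dtype=np.float32)  # per-voxel backscatter b

def voxel_index(u, v, depth, image_shape, max_depth):
    """Map pixel (u, v) with metric depth to a frustum voxel (nearest lookup)."""
    h, w = image_shape
    i = min(int(u / w * GRID_SHAPE[0]), GRID_SHAPE[0] - 1)
    j = min(int(v / h * GRID_SHAPE[1]), GRID_SHAPE[1] - 1)
    k = min(int(depth / max_depth * GRID_SHAPE[2]), GRID_SHAPE[2] - 1)
    return i, j, k

def restore_pixel(intensity, u, v, depth, image_shape, max_depth=10.0):
    """Invert I = albedo * s + b for one pixel: albedo = (I - b) / s."""
    i, j, k = voxel_index(u, v, depth, image_shape, max_depth)
    s = max(float(signal[i, j, k]), 1e-6)  # guard against unlit (near-zero) voxels
    return (intensity - float(backscatter[i, j, k])) / s

# Example: restore one pixel of a 480x640 image observed at 3.2 m range.
albedo = restore_pixel(0.42, u=320, v=240, depth=3.2, image_shape=(480, 640))
```

Because the grid depends only on the rigid camera-light geometry and the water properties, the same lookup table can restore every image of a survey, which is what makes the approach efficient enough for consistently texturing large 3D maps.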
Related papers
- Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering [56.68286440268329]
Correct insertion of virtual objects in images of real-world scenes requires a deep understanding of the scene's lighting, geometry and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z) - Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z) - Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can affect image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z) - WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z) - SUCRe: Leveraging Scene Structure for Underwater Color Restoration [1.9490160607392462]
We introduce SUCRe, a novel method that exploits the scene's 3D structure for underwater color restoration.
We conduct extensive quantitative and qualitative analyses of our approach in a variety of scenarios ranging from natural light to deep-sea environments.
arXiv Detail & Related papers (2022-12-18T16:53:13Z) - Robustly Removing Deep Sea Lighting Effects for Visual Mapping of Abyssal Plains [3.566117940176302]
The majority of Earth's surface lies deep in the oceans, where no surface light reaches.
Visual mapping, including image matching and surface albedo estimation, severely suffers from the effects that co-moving light sources produce.
We present a practical approach to estimating and compensating for these lighting effects on predominantly homogeneous, flat seafloor regions.
arXiv Detail & Related papers (2021-10-01T15:28:07Z) - Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z) - Towards Geometry Guided Neural Relighting with Flash Photography [26.511476565209026]
We propose a framework for image relighting from a single flash photograph with its corresponding depth map using deep learning.
We experimentally validate the advantage of our geometry guided approach over state-of-the-art image-based approaches in intrinsic image decomposition and image relighting.
arXiv Detail & Related papers (2020-08-12T08:03:28Z) - Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images [59.53382863519189]
We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting.
At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids.
We show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
arXiv Detail & Related papers (2020-07-20T05:38:11Z)