Robustly Removing Deep Sea Lighting Effects for Visual Mapping of
Abyssal Plains
- URL: http://arxiv.org/abs/2110.00480v1
- Date: Fri, 1 Oct 2021 15:28:07 GMT
- Title: Robustly Removing Deep Sea Lighting Effects for Visual Mapping of
Abyssal Plains
- Authors: Kevin Köser, Yifan Song, Lasse Petersen, Emanuel Wenzlaff, Felix Woelk
- Abstract summary: The majority of Earth's surface lies deep in the oceans, where no surface light reaches.
Visual mapping, including image matching and surface albedo estimation, severely suffers from the effects that co-moving light sources produce.
We present a practical approach to estimating and compensating these lighting effects on predominantly homogeneous, flat seafloor regions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The majority of Earth's surface lies deep in the oceans, where no surface
light reaches. Robots diving down to great depths must bring light sources that
create moving illumination patterns in the darkness, such that the same 3D
point appears with a different color in each image. On top of that, scattering and
attenuation of light in the water makes images appear foggy and typically
blueish, the degradation depending on each pixel's distance to its observed
seafloor patch, on the local composition of the water and the relative poses
and cones of the light sources. Consequently, visual mapping, including image
matching and surface albedo estimation, severely suffers from the effects that
co-moving light sources produce, and larger mosaic maps from photos are often
dominated by lighting effects that obscure the actual seafloor structure. In
this contribution, a practical approach to estimating and compensating these
lighting effects on predominantly homogeneous, flat seafloor regions, as can be
found in the abyssal plains of our oceans, is presented. The method is
essentially parameter-free and intended as a preprocessing step to facilitate
visual mapping, but already produces convincing lighting artefact compensation
up to a global white balance factor. It does not need to be trained
beforehand on huge sets of annotated images, which are not available for the
deep sea. Rather, we motivate our work by physical models of light propagation,
perform robust statistics-based estimates of additive and multiplicative
nuisances that avoid explicit parameters for light, camera, water or scene,
discuss the breakdown point of the algorithms and show results on imagery
captured by robots at water depths of several kilometers.
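To make the robust-statistics idea concrete, here is a minimal numpy sketch (an illustration only, not the authors' exact algorithm): with the lights rigidly attached to the camera, the lighting pattern is approximately static in image coordinates, so a per-pixel median over many frames of a homogeneous seafloor isolates the multiplicative nuisance. The function name and the choice of the median are assumptions; the additive scattering term and the breakdown-point analysis from the paper are omitted here.

```python
import numpy as np

def compensate_lighting(frames, eps=1e-6):
    """Flatten co-moving lighting over a sequence of seafloor images.

    frames: (N, H, W) or (N, H, W, 3) float array from a survey of a
    predominantly homogeneous, flat seafloor with the lights rigidly
    attached to the camera (lighting pattern static in image coordinates).
    Returns the compensated frames (up to a global white-balance factor)
    and the estimated per-pixel pattern.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Robust per-pixel statistic across frames: over a homogeneous seafloor
    # the median is dominated by the static lighting/attenuation pattern,
    # while passing objects and texture act as outliers.
    pattern = np.median(frames, axis=0)
    # Dividing out the pattern removes the multiplicative nuisance; the
    # result is the scene albedo up to one global scale factor.
    compensated = frames / np.maximum(pattern, eps)
    # Rescale so mid-grey stays mid-grey (the global factor is unobservable).
    compensated *= np.median(pattern)
    return compensated, pattern
```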
Related papers
- Enhancing Underwater Imaging with 4-D Light Fields: Dataset and Method [77.80712860663886]
4-D light fields (LFs) can enhance underwater imaging, which is plagued by light absorption, scattering, and other challenges.
We propose a progressive framework for underwater 4-D LF image enhancement and depth estimation.
We construct the first 4-D LF-based underwater image dataset for quantitative evaluation and supervised training of learning-based methods.
arXiv Detail & Related papers (2024-08-30T15:06:45Z)
- Dual High-Order Total Variation Model for Underwater Image Restoration [13.789310785350484]
Underwater image enhancement and restoration (UIER) is a crucial means of improving the visual quality of underwater images.
We propose an effective variational framework based on an extended underwater image formation model (UIFM).
In our proposed framework, weight-factor-based color compensation is combined with color balance to compensate for the attenuated color channels and remove the color cast.
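As a rough illustration of channel compensation plus color balance: the sketch below uses the common Ancuti-style red-channel compensation followed by gray-world balancing; the paper's weight factors are more elaborate, and the function name and `alpha` parameter are assumptions.

```python
import numpy as np

def compensate_and_balance(img, alpha=1.0):
    """Compensate the attenuated red channel from the better-preserved
    green channel, then apply a gray-world color balance.

    img: (H, W, 3) float RGB in [0, 1].
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Ancuti-style compensation: add to red where it is weak, borrowing
    # from green, scaled by the gap between the channel means.
    r_comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    out = np.stack([r_comp, g, b], axis=-1)
    # Gray-world balance: scale each channel so its mean matches the
    # overall mean, removing the remaining color cast.
    means = out.reshape(-1, 3).mean(axis=0)
    out = out * (means.mean() / np.maximum(means, 1e-6))
    return np.clip(out, 0.0, 1.0)
```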
arXiv Detail & Related papers (2024-07-20T13:06:37Z)
- Advanced Underwater Image Restoration in Complex Illumination Conditions [12.270546709771926]
Most solutions focus on shallow water scenarios, where the scene is uniformly illuminated by sunlight.
The vast majority of uncharted underwater terrain lies at depths where natural light is scarce and artificial illumination is needed.
We conduct extensive experiments on a simulated seafloor and demonstrate our approach in restoring lighting and medium effects.
arXiv Detail & Related papers (2023-09-05T13:22:16Z)
- Nighttime Smartphone Reflective Flare Removal Using Optical Center Symmetry Prior [81.64647648269889]
Reflective flare is a phenomenon that occurs when light reflects inside lenses, causing bright spots or a "ghosting effect" in photos.
We propose an optical center symmetry prior, which suggests that the reflective flare and light source are always symmetrical around the lens's optical center.
We create the first reflective flare removal dataset called BracketFlare, which contains diverse and realistic reflective flare patterns.
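The prior itself is simple geometry: the flare sits at the point reflection of the light source through the optical center. A hypothetical helper illustrating this (names assumed):

```python
def flare_location(light_xy, optical_center_xy):
    """Predict where a reflective flare appears, assuming (as the prior
    states) that flare and light source are point-symmetric about the
    lens's optical center. Coordinates are in pixels."""
    lx, ly = light_xy
    cx, cy = optical_center_xy
    # Point reflection of the light source through the optical center.
    return (2 * cx - lx, 2 * cy - ly)
```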
arXiv Detail & Related papers (2023-03-27T09:44:40Z)
- Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN [60.257791714663725]
We propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging of images with overwater scenes.
The proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
arXiv Detail & Related papers (2022-12-23T03:00:28Z)
- Seafloor-Invariant Caustics Removal from Underwater Imagery [0.0]
Caustics are complex physical phenomena resulting from the projection of light rays refracted by the wavy water surface.
In this work, we propose a novel method for correcting the effects of caustics on shallow underwater imagery.
In particular, the developed method employs deep learning architectures to classify image pixels into "non-caustics" and "caustics".
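A hedged sketch of the classify-then-correct idea, assuming a binary caustics mask is already available (the paper obtains it with a deep network); the paper's learned correction step is replaced here by a simple local-median substitution:

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_caustics(img, caustic_mask, size=15):
    """Replace pixels flagged as caustics by a local median so the
    seafloor intensity becomes spatially consistent.

    img: (H, W) float grayscale image; caustic_mask: (H, W) bool array.
    Only a stand-in for the paper's correction; the filter also sees
    caustic pixels, which a more careful implementation would exclude.
    """
    smoothed = median_filter(img, size=size)
    out = img.copy()
    out[caustic_mask] = smoothed[caustic_mask]
    return out
```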
arXiv Detail & Related papers (2022-12-20T11:11:02Z)
- SUCRe: Leveraging Scene Structure for Underwater Color Restoration [1.9490160607392462]
We introduce SUCRe, a novel method that exploits the scene's 3D structure for underwater color restoration.
We conduct extensive quantitative and qualitative analyses of our approach in a variety of scenarios ranging from natural light to deep-sea environments.
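One hedged way to exploit multi-view structure: a scene point observed at several camera distances constrains both the medium parameters and its true color. The sketch below assumes a standard attenuation-plus-backscatter formation model and scipy's curve_fit; SUCRe's actual model and optimization differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def restore_point_color(intensities, distances):
    """Fit a simple underwater formation model to repeated observations of
    one scene point at different camera distances (one color channel).

    intensities, distances: 1-D float arrays, intensities in [0, 1].
    Returns the estimated in-air intensity J, attenuation coefficient
    beta, and backscatter B.
    """
    def model(z, J, beta, B):
        t = np.exp(-beta * z)          # transmission along the ray
        return J * t + B * (1.0 - t)   # attenuated signal + backscatter

    p0 = (intensities.max(), 0.1, intensities.min())
    (J, beta, B), _ = curve_fit(model, distances, intensities, p0=p0,
                                bounds=([0, 0, 0], [2.0, 5.0, 1.0]))
    return J, beta, B
```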
arXiv Detail & Related papers (2022-12-18T16:53:13Z)
- Progressive Depth Learning for Single Image Dehazing [56.71963910162241]
Existing dehazing methods often ignore depth cues and fail in distant areas where heavier haze disturbs visibility.
We propose a deep end-to-end model that iteratively estimates image depths and transmission maps.
Our approach benefits from explicitly modeling the inner relationship of image depth and transmission map, which is especially effective for distant hazy areas.
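For reference, a minimal sketch of the underlying depth-transmission relation, assuming depth, the scattering coefficient, and the airlight are given (the paper estimates depth and transmission jointly instead of assuming them known):

```python
import numpy as np

def dehaze(img, depth, beta, airlight):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t) with
    transmission t = exp(-beta * depth).

    img: (H, W, 3) float RGB in [0, 1]; depth: (H, W) float; airlight:
    scalar or (3,) array. Distant pixels have small t, which is why depth
    errors hurt most there.
    """
    t = np.exp(-beta * depth)          # transmission map from depth
    t = np.maximum(t, 0.1)             # avoid amplifying noise far away
    J = (img - airlight) / t[..., None] + airlight
    return np.clip(J, 0.0, 1.0)
```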
arXiv Detail & Related papers (2021-02-21T05:24:18Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Deep Sea Robotic Imaging Simulator [6.2122699483618]
The largest portion of the ocean - the deep sea - remains mostly unexplored.
Deep sea images differ greatly from images taken in shallow water, and this area has received little attention from the community.
This paper presents a physical model-based image simulation solution, which uses an in-air texture and depth information as inputs.
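A simplified forward model in that spirit, assuming per-channel attenuation plus backscatter applied to an in-air texture and depth map; the paper's simulator additionally models the artificial light cones and their falloff, which are omitted here.

```python
import numpy as np

def simulate_deep_sea_image(texture, depth, beta, backscatter):
    """Forward-simulate a deep sea image from an in-air texture and depth.

    texture: (H, W, 3) float RGB in [0, 1]; depth: (H, W) metres;
    beta, backscatter: per-channel (3,) arrays (e.g. red attenuates fastest).
    """
    # Per-channel transmission along the water column.
    t = np.exp(-np.asarray(beta)[None, None, :] * depth[..., None])
    # Attenuated signal plus distance-dependent backscatter.
    img = texture * t + np.asarray(backscatter)[None, None, :] * (1.0 - t)
    return np.clip(img, 0.0, 1.0)
```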
arXiv Detail & Related papers (2020-06-27T16:18:32Z)
- L^2UWE: A Framework for the Efficient Enhancement of Low-Light Underwater Images Using Local Contrast and Multi-Scale Fusion [84.11514688735183]
We present a novel single-image low-light underwater image enhancer, L2UWE, that builds on our observation that an efficient model of atmospheric lighting can be derived from local contrast information.
A multi-scale fusion process is employed to combine these images while emphasizing regions of higher luminance, saliency and local contrast.
We demonstrate the performance of L2UWE by using seven metrics to test it against seven state-of-the-art enhancement methods specific to underwater and low-light scenes.
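A single-scale, hedged stand-in for the fusion step, weighting input versions by local contrast only (L2UWE fuses across multiple scales and also uses luminance and saliency maps; all names here are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_by_local_contrast(inputs, sigma=5.0, eps=1e-6):
    """Fuse several enhanced versions of one grayscale image, weighting
    each pixel by local contrast, then normalizing the weights per pixel.

    inputs: list of (H, W) float arrays (the candidate enhancements).
    """
    weights = []
    for im in inputs:
        local_mean = gaussian_filter(im, sigma)
        # Local contrast: deviation from the local mean.
        weights.append(np.abs(im - local_mean) + eps)
    weights = np.stack(weights)          # (N, H, W)
    weights /= weights.sum(axis=0)       # per-pixel normalization
    return (np.stack(inputs) * weights).sum(axis=0)
```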
arXiv Detail & Related papers (2020-05-28T01:57:32Z)