Spectrum-inspired Low-light Image Translation for Saliency Detection
- URL: http://arxiv.org/abs/2303.10145v1
- Date: Fri, 17 Mar 2023 17:30:42 GMT
- Title: Spectrum-inspired Low-light Image Translation for Saliency Detection
- Authors: Kitty Varghese, Sudarshan Rajagopalan, Mohit Lamba, Kaushik Mitra
- Score: 23.368690302292563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Saliency detection methods are central to several real-world applications
such as robot navigation and satellite imagery. However, the performance of
existing methods deteriorates under low-light conditions because training
datasets mostly comprise well-lit images. One possible solution is to
collect a new dataset for low-light conditions. This requires pixel-level
annotation, which is not only tedious and time-consuming but also infeasible
if a huge training corpus is required. We propose a technique that performs
classical band-pass filtering in the Fourier space to transform well-lit images
to low-light images and use them as a proxy for real low-light images. Unlike
popular deep learning approaches which require learning thousands of parameters
and enormous amounts of training data, the proposed transformation is fast,
simple, and easy to extend to other tasks such as low-light depth estimation.
Our experiments show that the state-of-the-art saliency detection and depth
estimation networks trained on our proxy low-light images perform significantly
better on real low-light images than networks trained using existing
strategies.
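The abstract's core idea, band-pass filtering in the Fourier domain followed by an intensity reduction to produce a proxy low-light image, can be sketched as below. This is a minimal illustration, not the authors' implementation: the cutoff frequencies and gain are hypothetical placeholders, as the paper's actual filter parameters are not given here.

```python
import numpy as np

def synthesize_low_light(image, low_cut=0.02, high_cut=0.5, gain=0.25):
    """Proxy low-light transform: band-pass filter each channel in the
    Fourier domain, then apply a global gain reduction.

    low_cut / high_cut are normalized radial frequencies in [0, 0.5];
    gain < 1 darkens the result. All three values are illustrative.
    """
    img = image.astype(np.float64)
    if img.ndim == 2:                      # promote grayscale to H x W x 1
        img = img[..., None]
    h, w, c = img.shape

    # Normalized radial-frequency grid, DC at the center after fftshift.
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = ((radius >= low_cut) & (radius <= high_cut)).astype(np.float64)

    out = np.empty_like(img)
    for ch in range(c):
        spec = np.fft.fftshift(np.fft.fft2(img[..., ch]))
        dc = spec[h // 2, w // 2]          # preserve mean brightness ...
        spec = spec * mask                 # ... while band-passing the rest
        spec[h // 2, w // 2] = dc
        out[..., ch] = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

    # Global gain reduction mimics low illumination; clip to valid range.
    out = np.clip(out * gain, 0.0, 255.0)
    return out.squeeze()
```

Because the transform is a per-channel FFT, a mask multiply, and a scalar gain, it has no learned parameters, which is the point of contrast the abstract draws against deep-learning-based darkening approaches.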
Related papers
- Simplifying Low-Light Image Enhancement Networks with Relative Loss Functions [14.63586364951471]
We introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions to make learning easier in low-light image enhancement.
We first identify the challenge that a large receptive field is needed to obtain global contrast.
Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges.
arXiv Detail & Related papers (2023-04-06T10:05:54Z)
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- NoiSER: Noise is All You Need for Enhancing Low-Light Images Without Task-Related Data [103.04999391668753]
We show that it is possible to enhance a low-light image without any task-related training data.
Technically, we propose a new, effective and efficient method, termed Noise SElf-Regression (NoiSER).
Our NoiSER is highly competitive with current LLIE models trained on task-related data, in terms of both quantitative and visual results.
arXiv Detail & Related papers (2022-11-09T06:18:18Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Towards Geometry Guided Neural Relighting with Flash Photography [26.511476565209026]
We propose a framework for image relighting from a single flash photograph with its corresponding depth map using deep learning.
We experimentally validate the advantage of our geometry guided approach over state-of-the-art image-based approaches in intrinsic image decomposition and image relighting.
arXiv Detail & Related papers (2020-08-12T08:03:28Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is highly competitive with state-of-the-art methods and has a significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
- Syn2Real Transfer Learning for Image Deraining using Gaussian Processes [92.15895515035795]
CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality.
Due to challenges in obtaining real-world, fully labeled image deraining datasets, existing methods are trained only on synthetically generated data.
We propose a Gaussian Process-based semi-supervised learning framework which enables the network to learn to derain using a synthetic dataset.
arXiv Detail & Related papers (2020-06-10T00:33:18Z)
- A Comparison of Few-Shot Learning Methods for Underwater Optical and Sonar Image Classification [10.448481847860705]
Deep convolutional neural networks generally perform well in underwater object recognition tasks.
Few-Shot Learning efforts have produced many promising methods to deal with low data availability.
This is the first paper to evaluate and compare several supervised and semi-supervised Few-Shot Learning methods using underwater optical and side-scan sonar imagery.
arXiv Detail & Related papers (2020-05-10T10:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.