NoiSER: Noise is All You Need for Enhancing Low-Light Images Without
Task-Related Data
- URL: http://arxiv.org/abs/2211.04700v1
- Date: Wed, 9 Nov 2022 06:18:18 GMT
- Title: NoiSER: Noise is All You Need for Enhancing Low-Light Images Without
Task-Related Data
- Authors: Zhao Zhang, Suiyi Zhao, Xiaojie Jin, Mingliang Xu, Yi Yang, Shuicheng
Yan
- Abstract summary: We show that it is possible to enhance a low-light image without any task-related training data.
Technically, we propose a new, magical, effective and efficient method, termed Noise SElf-Regression (NoiSER).
Our NoiSER is highly competitive with current LLIE models trained on task-related data in terms of quantitative and visual results.
- Score: 103.04999391668753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is about an extraordinary phenomenon. Suppose we don't use any
low-light images as training data, can we enhance a low-light image by deep
learning? Obviously, current methods cannot do this, since deep neural networks
must train their scads of parameters on copious amounts of training
data, especially task-related data. In this paper, we show that in the context
of fundamental deep learning, it is possible to enhance a low-light image
without any task-related training data. Technically, we propose a new, magical,
effective and efficient method, termed \underline{Noi}se
\underline{SE}lf-\underline{R}egression (NoiSER), which learns a gray-world
mapping from Gaussian distribution for low-light image enhancement (LLIE).
Specifically, a self-regression model is built as a carrier to learn a
gray-world mapping during training, which is performed by simply iteratively
feeding random noise. During inference, a low-light image is directly fed into
the learned mapping to yield a normal-light one. Extensive experiments show
that our NoiSER is highly competitive with current LLIE models trained on
task-related data in terms of quantitative and visual results, while outperforming them in
terms of the number of parameters, training time and inference speed. With only
about 1K parameters, NoiSER trains in about 1 minute and runs inference in
1.2 ms at 600$\times$400 resolution on an RTX 2080 Ti. Besides, NoiSER has an
inborn automated exposure suppression capability and can automatically correct
images that are too bright or too dark, without additional manipulation.
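The training recipe described above — regressing a tiny model on nothing but random noise — can be illustrated with a deliberately simplified sketch. The per-pixel linear map, the noise distribution, and all hyperparameters below are illustrative assumptions; the actual NoiSER network (a small conv net whose normalization layers induce the gray-world behavior) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Deliberately tiny model: a per-pixel linear map y = W x + b over RGB.
# The real NoiSER uses a small conv net; this only illustrates the loop.
W = rng.normal(0.0, 0.1, (3, 3))
b = np.zeros(3)

def forward(x):
    return x @ W.T + b

# Self-regression training: no images at all -- iteratively sample pure
# Gaussian noise and regress the model's output back onto its input (MSE).
lr = 0.1
for step in range(3000):
    noise = rng.normal(0.5, 0.2, (256, 3))   # random "pixels", no data
    err = forward(noise) - noise
    W -= lr * (err.T @ noise) / len(noise)   # MSE gradient w.r.t. W
    b -= lr * err.mean(axis=0)               # MSE gradient w.r.t. b

# The learned mapping reconstructs samples from the training distribution;
# at inference, an out-of-distribution (low-light) input is pulled toward
# that distribution.
test_noise = rng.normal(0.5, 0.2, (64, 3))
mse = float(np.mean((forward(test_noise) - test_noise) ** 2))
```

The point of the sketch is only that the "training data" is resampled noise on every step, which is why training finishes in about a minute.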
Related papers
- RSEND: Retinex-based Squeeze and Excitation Network with Dark Region Detection for Efficient Low Light Image Enhancement [1.7356500114422735]
We propose a more accurate, concise, and one-stage Retinex theory based framework, RSEND.
RSEND first divides the low-light image into the illumination map and reflectance map, then captures the important details in the illumination map and performs light enhancement.
Our Efficient Retinex model significantly outperforms other CNN-based models, achieving a PSNR improvement ranging from 0.44 dB to 4.2 dB in different datasets.
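As a rough illustration of the Retinex-style decomposition that RSEND builds on, here is a textbook single-image split into illumination and reflectance maps; the max-channel illumination estimate and the fixed gamma adjustment are common classical choices, not RSEND's learned modules:

```python
import numpy as np

def retinex_split(img, eps=1e-4):
    """Retinex-style split I = L * R: illumination L estimated as the
    per-pixel max over color channels, reflectance R recovered by
    division (a classical estimate, not RSEND's learned decomposition)."""
    L = img.max(axis=-1, keepdims=True)   # illumination map, shape (H, W, 1)
    R = img / (L + eps)                   # reflectance map, shape (H, W, 3)
    return L, R

def enhance(img, gamma=0.4, eps=1e-4):
    """Brighten the illumination map with gamma correction and recombine."""
    L, R = retinex_split(img, eps)
    return np.clip((L ** gamma) * R, 0.0, 1.0)

# Toy usage: a uniformly dark image gets brightened while reflectance
# (color ratios) is preserved.
dark = np.full((2, 2, 3), 0.05)
bright = enhance(dark)
```

RSEND's contribution sits on top of such a split: the captured details and the light enhancement happen on the illumination map, leaving reflectance largely intact.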
arXiv Detail & Related papers (2024-06-14T01:36:52Z) - Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z) - LVRNet: Lightweight Image Restoration for Aerial Images under Low
Visibility [6.785107765806355]
Preventing accidents under low-visibility conditions caused by heavy pollution/smoke, a poor air quality index, low light, atmospheric scattering, and haze during blizzards is increasingly important.
It is crucial to form a solution that can result in a high-quality image and is efficient enough to be deployed for everyday use.
We introduce a lightweight deep learning model called Low-Visibility Restoration Network (LVRNet)
It outperforms previous image restoration methods with low latency, achieving a PSNR value of 25.744 and an SSIM of 0.905, making our approach scalable and ready for practical use.
arXiv Detail & Related papers (2023-01-13T08:43:11Z) - EfficientTrain: Exploring Generalized Curriculum Learning for Training
Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers)
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z) - When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth
Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
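The first technique can be sketched as a per-pixel brightness compensation applied before the photometric loss is computed. The affine form and the fixed gain/offset maps below are illustrative stand-ins for the paper's learned per-pixel transformation:

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity_transform(frame, a, b):
    """Per-pixel affine brightness compensation I'(x) = a(x)*I(x) + b(x).
    Here a and b are fixed maps standing in for network predictions."""
    return a * frame + b

frame_t = rng.uniform(0.2, 0.8, (8, 8))   # frame at time t
a = np.full_like(frame_t, 0.6)            # per-pixel gain (assumed known)
b = np.full_like(frame_t, 0.05)           # per-pixel offset (assumed known)
frame_t1 = a * frame_t + b                # darker successive frame

# With the right compensation the photometric residual vanishes even
# though the raw brightness of the two frames differs.
residual = np.abs(intensity_transform(frame_t, a, b) - frame_t1)
```

Without such compensation, the raw photometric loss would penalize the brightness change itself rather than the geometry, which is exactly what breaks at night.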
arXiv Detail & Related papers (2022-06-28T09:29:55Z) - Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z) - Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced
Classification by Training on Random Noise Images [12.91269560135337]
We present a surprisingly simple yet highly effective method to mitigate this limitation.
Unlike the common use of additive noise or adversarial noise for data augmentation, we propose directly training on pure random noise images.
We present a new Distribution-Aware Routing Batch Normalization layer (DAR-BN), which enables training on pure noise images in addition to natural images within the same network.
arXiv Detail & Related papers (2021-12-16T11:51:35Z) - Learning to Enhance Low-Light Image via Zero-Reference Deep Curve
Estimation [91.93949787122818]
We present Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image.
We present an accelerated and light version of Zero-DCE, called Zero-DCE++, that takes advantage of a tiny network with just 10K parameters.
arXiv Detail & Related papers (2021-03-01T09:21:51Z) - Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement [156.18634427704583]
The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image.
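The curve family behind Zero-DCE is the quadratic LE(I(x); a) = I(x) + a(x) I(x)(1 - I(x)) with a(x) in [-1, 1], applied iteratively with one per-pixel alpha map per iteration to form a high-order curve. A minimal sketch with fixed alpha maps (in the real method, DCE-Net predicts them per image):

```python
import numpy as np

def dce_curves(img, alpha_maps):
    """Zero-DCE light-enhancement curve, applied iteratively:
    LE(I(x); a) = I(x) + a(x) * I(x) * (1 - I(x)), a(x) in [-1, 1].
    Each iteration uses its own per-pixel alpha map (high-order curve)."""
    out = img
    for a in alpha_maps:
        out = out + a * out * (1.0 - out)
    return out

# Toy usage: fixed positive alpha maps brighten a dark image; the curve
# form keeps every pixel inside [0, 1] by construction.
dark = np.full((4, 4, 3), 0.1)
alpha_maps = [np.full_like(dark, 0.9)] * 4   # DCE-Net would predict these
enhanced = dce_curves(dark, alpha_maps)
```

Because the enhancement is just this differentiable pixel-wise curve, the network that predicts the alpha maps can stay tiny — which is how Zero-DCE++ gets down to about 10K parameters.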
arXiv Detail & Related papers (2020-01-19T13:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.