NoiSER: Noise is All You Need for Enhancing Low-Light Images Without
Task-Related Data
- URL: http://arxiv.org/abs/2211.04700v1
- Date: Wed, 9 Nov 2022 06:18:18 GMT
- Title: NoiSER: Noise is All You Need for Enhancing Low-Light Images Without
Task-Related Data
- Authors: Zhao Zhang, Suiyi Zhao, Xiaojie Jin, Mingliang Xu, Yi Yang, Shuicheng
Yan
- Abstract summary: We show that it is possible to enhance a low-light image without any task-related training data.
Technically, we propose a new, magical, effective and efficient method, termed Noise SElf-Regression (NoiSER).
Our NoiSER is highly competitive with current LLIE models trained on task-related data in terms of quantitative and visual results.
- Score: 103.04999391668753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is about an extraordinary phenomenon. Suppose we don't use any
low-light images as training data, can we enhance a low-light image by deep
learning? Obviously, current methods cannot do this, since deep neural networks
require copious amounts of training data, especially task-related data, to train
their scads of parameters. In this paper, we show that in the context
of fundamental deep learning, it is possible to enhance a low-light image
without any task-related training data. Technically, we propose a new, magical,
effective and efficient method, termed \underline{Noi}se
\underline{SE}lf-\underline{R}egression (NoiSER), which learns a gray-world
mapping from Gaussian distribution for low-light image enhancement (LLIE).
Specifically, a self-regression model is built as a carrier to learn a
gray-world mapping during training, which is performed by simply iteratively
feeding random noise. During inference, a low-light image is directly fed into
the learned mapping to yield a normal-light one. Extensive experiments show
that our NoiSER is highly competitive with current LLIE models trained on
task-related data in terms of quantitative and visual results, while outperforming them in
terms of the number of parameters, training time and inference speed. With only
about 1K parameters, NoiSER needs only about 1 minute for training and 1.2 ms for
inference at 600$\times$400 resolution on an RTX 2080 Ti. Besides, NoiSER has an
inborn automated exposure suppression capability and can automatically correct
regions that are too bright or too dark, without additional manipulation.
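To make the training recipe above concrete, here is a minimal sketch of noise self-regression: a tiny convolutional network is trained to regress i.i.d. Gaussian noise onto itself, then applied directly to a low-light image at inference. The architecture, normalization, loss, and hyperparameters below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of noise self-regression (NoiSER-style) training and inference.
# Assumptions: a tiny conv net (a few hundred parameters), MSE self-regression on
# Gaussian noise, and direct inference on a low-light image. Not the exact setup.
import torch
import torch.nn as nn

class TinyRegressor(nn.Module):
    def __init__(self, width=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1),
            nn.InstanceNorm2d(width),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_noiser(steps=1000, size=64, device="cpu"):
    model = TinyRegressor().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        # Random Gaussian noise serves as both input and regression target.
        noise = torch.randn(1, 3, size, size, device=device)
        loss = nn.functional.mse_loss(model(noise), noise)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

@torch.no_grad()
def enhance(model, low_light):            # low_light: (1, 3, H, W), assumed in [-1, 1]
    return model(low_light).clamp(-1, 1)  # pass through the learned gray-world mapping
```

Under these assumptions, enhancement is a single forward pass through a very small network, which is consistent with the millisecond-scale inference reported above.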
Related papers
- RSEND: Retinex-based Squeeze and Excitation Network with Dark Region Detection for Efficient Low Light Image Enhancement [1.7356500114422735]
We propose a more accurate, concise, one-stage Retinex-theory-based framework, RSEND.
RSEND first divides the low-light image into the illumination map and reflectance map, then captures the important details in the illumination map and performs light enhancement.
Our Efficient Retinex model significantly outperforms other CNN-based models, achieving a PSNR improvement ranging from 0.44 dB to 4.2 dB across different datasets.
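For context on the Retinex decomposition RSEND builds on, the following is a generic decompose-enhance-recompose sketch; the max-channel illumination estimate and gamma adjustment are illustrative stand-ins for RSEND's learned modules, not its actual network.

```python
# Generic Retinex-style enhancement sketch: estimate an illumination map,
# brighten it, and recompose with the reflectance (I = R * L). Illustrative only.
import torch

def retinex_enhance(img, gamma=0.5, eps=1e-4):
    # img: (3, H, W) tensor in [0, 1]
    illum = img.max(dim=0, keepdim=True).values.clamp(min=eps)  # crude illumination estimate
    reflectance = img / illum                                   # R = I / L
    enhanced_illum = illum ** gamma                             # brighten dark regions
    return (reflectance * enhanced_illum).clamp(0, 1)
```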
arXiv Detail & Related papers (2024-06-14T01:36:52Z)
- Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z)
- Peer is Your Pillar: A Data-unbalanced Conditional GANs for Few-shot Image Generation [24.698516678703236]
Few-shot image generation aims to train generative models using a small number of training images.
We propose a novel pipeline called Peer is your Pillar (PIP), which combines a target few-shot dataset with a peer dataset to create a data-unbalanced conditional generation.
arXiv Detail & Related papers (2023-11-14T14:55:42Z)
- How do Minimum-Norm Shallow Denoisers Look in Function Space? [36.14517933550934]
Neural network (NN) denoisers are an essential building block in many common tasks.
We characterize the functions realized by shallow ReLU NN denoisers with a minimal representation cost.
arXiv Detail & Related papers (2023-11-12T06:20:21Z)
- A Semi-Paired Approach For Label-to-Image Translation [6.888253564585197]
We introduce the first semi-supervised (semi-paired) framework for label-to-image translation.
In the semi-paired setting, the model has access to a small set of paired data and a larger set of unpaired images and labels.
We propose a training algorithm for this shared network, and we present a rare-class sampling algorithm to focus on under-represented classes.
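As a hedged illustration of rare-class sampling (not necessarily the paper's exact algorithm), an inverse-frequency sampler draws under-represented classes more often:

```python
# Hedged sketch of rare-class (inverse-frequency) sampling: classes that appear
# less often are drawn more often during training. Generic re-balancing scheme.
import numpy as np

def rare_class_weights(class_counts, temperature=1.0):
    counts = np.asarray(class_counts, dtype=np.float64)
    weights = (1.0 / np.maximum(counts, 1.0)) ** temperature
    return weights / weights.sum()

def sample_class(class_counts, rng):
    probs = rare_class_weights(class_counts)
    return rng.choice(len(probs), p=probs)

# Example: class 2 is rare, so it gets a much larger sampling probability.
print(rare_class_weights([10000, 5000, 50]))
print(sample_class([10000, 5000, 50], np.random.default_rng(0)))
```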
arXiv Detail & Related papers (2023-06-23T16:13:43Z)
- LVRNet: Lightweight Image Restoration for Aerial Images under Low Visibility [6.785107765806355]
Addressing low-visibility conditions caused by heavy pollution/smoke, a poor air quality index, low light, atmospheric scattering, and haze during a blizzard is becoming more important for preventing accidents.
It is crucial to form a solution that can result in a high-quality image and is efficient enough to be deployed for everyday use.
We introduce a lightweight deep learning model called Low-Visibility Restoration Network (LVRNet).
It outperforms previous image restoration methods with low latency, achieving a PSNR value of 25.744 and an SSIM of 0.905, making our approach scalable and ready for practical use.
arXiv Detail & Related papers (2023-01-13T08:43:11Z)
- EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z)
- Enhance the Visual Representation via Discrete Adversarial Training [24.3040211834614]
Adversarial Training (AT) is commonly accepted as one of the most effective approaches defending against adversarial examples.
We propose Discrete Adversarial Training (DAT) to reform the image data into discrete text-like inputs, i.e., visual words.
As a plug-and-play technique for enhancing the visual representation, DAT achieves significant improvement on multiple tasks.
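To make the notion of "visual words" concrete, here is a hedged sketch of nearest-codebook vector quantization, which maps continuous features to discrete tokens; the codebook source and the adversarial training step are outside this sketch, and it is not claimed to match DAT's implementation.

```python
# Hedged sketch: map continuous image features to discrete "visual words" via
# nearest-neighbour vector quantization. DAT's actual discretizer and its
# adversarial training step are not reproduced here.
import torch

def quantize_to_visual_words(features, codebook):
    # features: (N, D) continuous vectors; codebook: (K, D) embedding table
    dists = torch.cdist(features, codebook)   # (N, K) pairwise distances
    tokens = dists.argmin(dim=1)              # discrete word index per vector
    return tokens, codebook[tokens]           # indices and quantized vectors

tokens, quantized = quantize_to_visual_words(torch.randn(16, 64), torch.randn(512, 64))
```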
arXiv Detail & Related papers (2022-09-16T06:25:06Z)
- When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
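A hedged sketch of the first technique: apply a predicted per-pixel intensity transformation to the warped frame before computing the photometric error, so brightness changes between frames are not penalized. The affine (scale plus offset) form and the plain L1 error are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: compensate per-pixel brightness changes between frames before
# the photometric loss. Affine transform and L1 error are illustrative choices.
import torch

def photometric_loss(warped, target, scale, offset):
    # warped, target: (B, 3, H, W); scale, offset: (B, 1, H, W) predicted per pixel
    adjusted = warped * scale + offset       # per-pixel intensity transformation
    return (adjusted - target).abs().mean()  # L1 photometric error
```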
arXiv Detail & Related papers (2022-06-28T09:29:55Z)
- Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference [74.80730361332711]
Few-shot learning is an important and topical problem in computer vision.
We show that a simple transformer-based pipeline yields surprisingly good performance on standard benchmarks.
arXiv Detail & Related papers (2022-04-15T02:55:58Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images [12.91269560135337]
We present a surprisingly simple yet highly effective method to mitigate the limitations of insufficient and imbalanced training data.
Unlike the common use of additive noise or adversarial noise for data augmentation, we propose directly training on pure random noise images.
We present a new Distribution-Aware Routing Batch Normalization layer (DAR-BN), which enables training on pure noise images in addition to natural images within the same network.
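A minimal sketch of the routing idea behind DAR-BN, assuming separate normalization statistics for natural and pure-noise samples within one network; details such as shared affine parameters are simplified away, so this is not the paper's exact layer.

```python
# Hedged sketch of distribution-aware routing batch norm: natural images and
# pure-noise images are normalized with separate statistics inside one network.
import torch
import torch.nn as nn

class RoutingBatchNorm2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn_natural = nn.BatchNorm2d(channels)
        self.bn_noise = nn.BatchNorm2d(channels)

    def forward(self, x, is_noise):
        # is_noise: (B,) boolean mask marking pure-noise samples in the batch
        out = torch.empty_like(x)
        if (~is_noise).any():
            out[~is_noise] = self.bn_natural(x[~is_noise])
        if is_noise.any():
            out[is_noise] = self.bn_noise(x[is_noise])
        return out
```

Routing noise samples through their own statistics keeps the running estimates used for natural images uncontaminated by the noise distribution.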
arXiv Detail & Related papers (2021-12-16T11:51:35Z)
- Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation [91.93949787122818]
We present Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image.
We present an accelerated and light version of Zero-DCE, called Zero-DCE++, that takes advantage of a tiny network with just 10K parameters.
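The curve-estimation step can be summarized as iteratively applying LE(x) = x + A * x * (1 - x) with a per-pixel curve map A. Below is a hedged sketch of applying such curves; DCE-Net, the network that predicts A, is omitted, and the example curve maps and iteration count are arbitrary.

```python
# Sketch of Zero-DCE-style curve application: iteratively apply the quadratic
# light-enhancement curve LE(x) = x + A * x * (1 - x) with per-pixel maps A.
import torch

def apply_curves(image, curve_maps):
    # image: (B, 3, H, W) in [0, 1]; curve_maps: list of (B, 3, H, W) maps A_n
    x = image
    for A in curve_maps:
        x = x + A * x * (1.0 - x)   # one curve-adjustment iteration
    return x.clamp(0, 1)

enhanced = apply_curves(torch.rand(1, 3, 64, 64), [torch.rand(1, 3, 64, 64) * 0.5] * 8)
```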
arXiv Detail & Related papers (2021-03-01T09:21:51Z)
- DeFlow: Learning Complex Image Degradations from Unpaired Data with Conditional Flows [145.83812019515818]
We propose DeFlow, a method for learning image degradations from unpaired data.
We model the degradation process in the latent space of a shared flow-decoder network.
We validate our DeFlow formulation on the task of joint image restoration and super-resolution.
arXiv Detail & Related papers (2021-01-14T18:58:01Z)
- Syn2Real Transfer Learning for Image Deraining using Gaussian Processes [92.15895515035795]
CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality.
Due to challenges in obtaining real world fully-labeled image deraining datasets, existing methods are trained only on synthetically generated data.
We propose a Gaussian Process-based semi-supervised learning framework which enables the network to learn to derain using a synthetic dataset.
arXiv Detail & Related papers (2020-06-10T00:33:18Z)
- Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement [156.18634427704583]
The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image.
arXiv Detail & Related papers (2020-01-19T13:49:15Z)