NoiSER: Noise is All You Need for Enhancing Low-Light Images Without
Task-Related Data
- URL: http://arxiv.org/abs/2211.04700v1
- Date: Wed, 9 Nov 2022 06:18:18 GMT
- Title: NoiSER: Noise is All You Need for Enhancing Low-Light Images Without
Task-Related Data
- Authors: Zhao Zhang, Suiyi Zhao, Xiaojie Jin, Mingliang Xu, Yi Yang, Shuicheng
Yan
- Abstract summary: We show that it is possible to enhance a low-light image without any task-related training data.
Technically, we propose a new, magical, effective and efficient method, termed Noise SElf-Regression (NoiSER).
Our NoiSER is highly competitive with current LLIE models trained on task-related data in terms of quantitative and visual results.
- Score: 103.04999391668753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is about an extraordinary phenomenon. Suppose we don't use any
low-light images as training data, can we enhance a low-light image by deep
learning? Obviously, current methods cannot do this, since deep neural networks
must train their scads of parameters on copious amounts of training
data, especially task-related data. In this paper, we show that in the context
of fundamental deep learning, it is possible to enhance a low-light image
without any task-related training data. Technically, we propose a new, magical,
effective and efficient method, termed \underline{Noi}se
\underline{SE}lf-\underline{R}egression (NoiSER), which learns a gray-world
mapping from Gaussian distribution for low-light image enhancement (LLIE).
Specifically, a self-regression model is built as a carrier to learn a
gray-world mapping during training, which is performed by simply iteratively
feeding random noise. During inference, a low-light image is directly fed into
the learned mapping to yield a normal-light one. Extensive experiments show
that our NoiSER is highly competitive with current LLIE models trained on
task-related data in terms of quantitative and visual results, while outperforming them in
terms of the number of parameters, training time and inference speed. With only
about 1K parameters, NoiSER realizes about 1 minute for training and 1.2 ms for
inference with 600$\times$400 resolution on RTX 2080 Ti. Besides, NoiSER has an
inborn automated exposure suppression capability and can automatically correct
images that are too bright or too dark, without additional manipulation.
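For background only: the gray-world assumption the abstract invokes has a classical, non-learned counterpart, the gray-world white-balance correction, which scales each color channel so its mean matches the global mean intensity. A minimal NumPy sketch of that classical correction (this is textbook gray-world balancing, not NoiSER's learned self-regression mapping):

```python
import numpy as np

def gray_world(img):
    """Classical gray-world white balance: scale each channel so its mean
    matches the global mean intensity. img: float array H x W x 3 in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)        # per-channel means
    global_mean = channel_means.mean()                     # target gray level
    gains = global_mean / np.maximum(channel_means, 1e-8)  # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

# Synthetic image with a strong red cast: red channel twice as bright.
rng = np.random.default_rng(0)
img = rng.uniform(0.1, 0.4, size=(8, 8, 3))
img[..., 0] *= 2.0

balanced = gray_world(img)
means = balanced.reshape(-1, 3).mean(axis=0)
print(np.allclose(means, means.mean(), atol=1e-6))  # → True: channel means equalized
```

After correction all three channel means coincide at the original global mean, which removes the cast; NoiSER's contribution is learning a mapping with this flavor from Gaussian noise alone rather than computing it per image.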
Related papers
- Peer is Your Pillar: A Data-unbalanced Conditional GANs for Few-shot
Image Generation [24.698516678703236]
Few-shot image generation aims to train generative models using a small number of training images.
We propose a novel pipeline called Peer is your Pillar (PIP), which combines a target few-shot dataset with a peer dataset to create a data-unbalanced conditional generation task.
arXiv Detail & Related papers (2023-11-14T14:55:42Z)
- How do Minimum-Norm Shallow Denoisers Look in Function Space? [36.14517933550934]
Neural network (NN) denoisers are an essential building block in many common tasks.
We characterize the functions realized by shallow ReLU NN denoisers with a minimal representation cost.
arXiv Detail & Related papers (2023-11-12T06:20:21Z)
- A Semi-Paired Approach For Label-to-Image Translation [6.888253564585197]
We introduce the first semi-supervised (semi-paired) framework for label-to-image translation.
In the semi-paired setting, the model has access to a small set of paired data and a larger set of unpaired images and labels.
We propose a training algorithm for this shared network, and we present a rare-class sampling algorithm to focus on under-represented classes.
arXiv Detail & Related papers (2023-06-23T16:13:43Z)
- EfficientTrain: Exploring Generalized Curriculum Learning for Training
Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z)
- Enhance the Visual Representation via Discrete Adversarial Training [24.3040211834614]
Adversarial Training (AT) is widely regarded as one of the most effective defenses against adversarial examples.
We propose Discrete Adversarial Training (DAT) to reform image data into discrete text-like inputs, i.e., visual words.
As a plug-and-play technique for enhancing the visual representation, DAT achieves significant improvement on multiple tasks.
arXiv Detail & Related papers (2022-09-16T06:25:06Z)
- Pushing the Limits of Simple Pipelines for Few-Shot Learning: External
Data and Fine-Tuning Make a Difference [74.80730361332711]
Few-shot learning is an important and topical problem in computer vision.
We show that a simple transformer-based pipeline yields surprisingly good performance on standard benchmarks.
arXiv Detail & Related papers (2022-04-15T02:55:58Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced
Classification by Training on Random Noise Images [12.91269560135337]
We present a surprisingly simple yet highly effective method to mitigate data scarcity in imbalanced classification.
Unlike the common use of additive noise or adversarial noise for data augmentation, we propose directly training on pure random noise images.
We present a new Distribution-Aware Routing Batch Normalization layer (DAR-BN), which enables training on pure noise images in addition to natural images within the same network.
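As illustration only, a minimal NumPy sketch of the routing idea behind DAR-BN, i.e., normalizing natural and pure-noise inputs with separate batch statistics; the actual DAR-BN layer, its learnable affine parameters, and the training procedure are defined in the paper, not here:

```python
import numpy as np

def dar_bn_sketch(x, is_noise, eps=1e-5):
    """Toy distribution-aware routing normalization: natural and pure-noise
    samples are standardized with their own group statistics, so noise inputs
    do not distort the statistics used for natural images (simplified sketch)."""
    out = np.empty_like(x)
    for flag in (False, True):
        mask = is_noise == flag
        if mask.any():
            group = x[mask]
            mu = group.mean(axis=0)
            var = group.var(axis=0)
            out[mask] = (group - mu) / np.sqrt(var + eps)
    return out

rng = np.random.default_rng(1)
natural = rng.normal(0.5, 0.1, size=(6, 4))  # stand-in "natural" features
noise = rng.normal(0.0, 1.0, size=(6, 4))    # stand-in pure-noise features
x = np.vstack([natural, noise])
is_noise = np.array([False] * 6 + [True] * 6)

y = dar_bn_sketch(x, is_noise)
```

Routing by input distribution keeps each group zero-mean after normalization, which is the property that lets noise images be mixed into the same batch without corrupting the natural-image statistics.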
arXiv Detail & Related papers (2021-12-16T11:51:35Z)
- DeFlow: Learning Complex Image Degradations from Unpaired Data with
Conditional Flows [145.83812019515818]
We propose DeFlow, a method for learning image degradations from unpaired data.
We model the degradation process in the latent space of a shared flow-decoder network.
We validate our DeFlow formulation on the task of joint image restoration and super-resolution.
arXiv Detail & Related papers (2021-01-14T18:58:01Z)
- Syn2Real Transfer Learning for Image Deraining using Gaussian Processes [92.15895515035795]
CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality.
Due to challenges in obtaining real world fully-labeled image deraining datasets, existing methods are trained only on synthetically generated data.
We propose a Gaussian Process-based semi-supervised learning framework which enables the network in learning to derain using synthetic dataset.
arXiv Detail & Related papers (2020-06-10T00:33:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.