Noise Self-Regression: A New Learning Paradigm to Enhance Low-Light Images Without Task-Related Data
- URL: http://arxiv.org/abs/2211.04700v3
- Date: Fri, 06 Dec 2024 09:46:09 GMT
- Title: Noise Self-Regression: A New Learning Paradigm to Enhance Low-Light Images Without Task-Related Data
- Authors: Zhao Zhang, Suiyi Zhao, Xiaojie Jin, Mingliang Xu, Yi Yang, Shuicheng Yan, Meng Wang
- Abstract summary: We propose Noise SElf-Regression (NoiSER) without access to any task-related data. NoiSER is highly competitive in enhancement quality, yet with a much smaller model size and much lower training and inference cost.
- Score: 86.68013790656762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based low-light image enhancement (LLIE) is the task of leveraging deep neural networks to enhance image illumination while keeping the image content unchanged. From the perspective of training data, existing methods complete the LLIE task driven by one of three data types: paired data, unpaired data, and zero-reference data. Each type of data-driven method has its own advantages; e.g., zero-reference data-based methods have very low requirements on training data and can meet human needs in many scenarios. In this paper, we leverage pure Gaussian noise to complete the LLIE task, which further reduces the requirements for training data and can serve as another alternative in practical use. Specifically, we propose Noise SElf-Regression (NoiSER), which, without access to any task-related data, simply learns a convolutional neural network equipped with an instance-normalization layer by taking a random noise image, drawn from $\mathcal{N}(0,\sigma^2)$ at each pixel, as both input and output of each training pair; the low-light image is then fed to the trained network to predict the normal-light image. Technically, an intuitive explanation for its effectiveness is as follows: 1) the self-regression reconstructs the contrast between adjacent pixels of the input image, 2) the instance-normalization layer naturally remediates the overall magnitude/lighting of the input image, and 3) the $\mathcal{N}(0,\sigma^2)$ assumption for each pixel enforces the output image to follow the well-known gray-world hypothesis when the image size is large enough. Compared to current state-of-the-art LLIE methods with access to different task-related data, NoiSER is highly competitive in enhancement quality, yet with a much smaller model size and much lower training and inference cost. Besides, NoiSER also excels in mitigating overexposure and handling joint tasks.
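The abstract's intuitions 2 and 3 can be illustrated with a minimal NumPy sketch. This is not the paper's code: the array shapes, the value of sigma, and the uniform "dim image" stand-in are illustrative assumptions; only the construction of noise-to-noise training pairs and the instance-normalization behavior follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def instance_norm(x, eps=1e-5):
    # Normalize each channel over its spatial dimensions to zero mean / unit variance.
    mean = x.mean(axis=(-2, -1), keepdims=True)
    std = x.std(axis=(-2, -1), keepdims=True)
    return (x - mean) / (std + eps)

# A NoiSER training pair: the same Gaussian noise image is both input and
# target, so no task-related (low-light or normal-light) data is needed.
sigma = 0.25
pair = rng.normal(0.0, sigma, size=(1, 64, 64))
inp, target = pair, pair

# Intuition 3 (gray-world): the per-channel mean of i.i.d. N(0, sigma^2)
# pixels shrinks toward 0 as the image grows.
big = rng.normal(0.0, sigma, size=(3, 512, 512))
channel_means = big.mean(axis=(1, 2))

# Intuition 2: instance normalization maps any input -- e.g. a dim image --
# to a canonical magnitude, which remediates overall lighting at test time.
dim = rng.uniform(0.0, 0.2, size=(1, 64, 64))  # stand-in for a low-light image
normed = instance_norm(dim)
```

In the full method these pieces would sit inside a small CNN trained by regression on the noise pairs; the sketch only checks the two statistical properties the explanation relies on.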
Related papers
- RSEND: Retinex-based Squeeze and Excitation Network with Dark Region Detection for Efficient Low Light Image Enhancement [1.7356500114422735]
We propose RSEND, a more accurate, concise, one-stage framework based on Retinex theory.
RSEND first divides the low-light image into the illumination map and reflectance map, then captures the important details in the illumination map and performs light enhancement.
Our Efficient Retinex model significantly outperforms other CNN-based models, achieving a PSNR improvement ranging from 0.44 dB to 4.2 dB in different datasets.
arXiv Detail & Related papers (2024-06-14T01:36:52Z) - Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z) - Peer is Your Pillar: A Data-unbalanced Conditional GANs for Few-shot Image Generation [24.698516678703236]
Few-shot image generation aims to train generative models using a small number of training images.
We propose a novel pipeline called Peer is your Pillar (PIP), which combines a target few-shot dataset with a peer dataset to create a data-unbalanced conditional generation.
arXiv Detail & Related papers (2023-11-14T14:55:42Z) - How do Minimum-Norm Shallow Denoisers Look in Function Space? [36.14517933550934]
Neural network (NN) denoisers are an essential building block in many common tasks.
We characterize the functions realized by shallow ReLU NN denoisers with a minimal representation cost.
arXiv Detail & Related papers (2023-11-12T06:20:21Z) - A Semi-Paired Approach For Label-to-Image Translation [6.888253564585197]
We introduce the first semi-supervised (semi-paired) framework for label-to-image translation.
In the semi-paired setting, the model has access to a small set of paired data and a larger set of unpaired images and labels.
We propose a training algorithm for this shared network, and we present a rare classes sampling algorithm to focus on under-represented classes.
arXiv Detail & Related papers (2023-06-23T16:13:43Z) - LVRNet: Lightweight Image Restoration for Aerial Images under Low Visibility [6.785107765806355]
Preventing accidents under low-visibility conditions caused by heavy pollution or smoke, a poor air quality index, low light, atmospheric scattering, and haze during a blizzard is increasingly important.
It is crucial to form a solution that can result in a high-quality image and is efficient enough to be deployed for everyday use.
We introduce a lightweight deep learning model called Low-Visibility Restoration Network (LVRNet).
It outperforms previous image restoration methods with low latency, achieving a PSNR value of 25.744 and an SSIM of 0.905, making our approach scalable and ready for practical use.
arXiv Detail & Related papers (2023-01-13T08:43:11Z) - EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z) - Enhance the Visual Representation via Discrete Adversarial Training [24.3040211834614]
Adversarial Training (AT) is commonly accepted as one of the most effective approaches defending against adversarial examples.
We propose Discrete Adversarial Training (DAT) to reform the image data to discrete text-like inputs, i.e. visual words.
As a plug-and-play technique for enhancing the visual representation, DAT achieves significant improvement on multiple tasks.
arXiv Detail & Related papers (2022-09-16T06:25:06Z) - When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
arXiv Detail & Related papers (2022-06-28T09:29:55Z) - Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference [74.80730361332711]
Few-shot learning is an important and topical problem in computer vision.
We show that a simple transformer-based pipeline yields surprisingly good performance on standard benchmarks.
arXiv Detail & Related papers (2022-04-15T02:55:58Z) - Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z) - Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images [12.91269560135337]
We present a surprisingly simple yet highly effective method to mitigate the limitation of insufficient training data.
Unlike the common use of additive noise or adversarial noise for data augmentation, we propose directly training on pure random noise images.
We present a new Distribution-Aware Routing Batch Normalization layer (DAR-BN), which enables training on pure noise images in addition to natural images within the same network.
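The routing idea described above can be sketched as follows. This is a rough illustration under stated assumptions, not the paper's DAR-BN layer: here each group (noise vs. natural) is simply normalized with its own batch statistics, and the function name, shapes, and the absence of affine parameters and running statistics are all simplifications.

```python
import numpy as np

def dar_bn(batch, is_noise, eps=1e-5):
    # Route each sample to statistics computed from its own group, so that
    # pure-noise images do not skew the statistics of the natural images.
    out = np.empty_like(batch)
    for flag in (True, False):
        idx = np.where(is_noise == flag)[0]
        if idx.size == 0:
            continue
        group = batch[idx]                               # (n, c, h, w)
        mean = group.mean(axis=(0, 2, 3), keepdims=True)
        var = group.var(axis=(0, 2, 3), keepdims=True)
        out[idx] = (group - mean) / np.sqrt(var + eps)
    return out

rng = np.random.default_rng(2)
natural = rng.uniform(0.0, 1.0, size=(4, 3, 8, 8))  # stand-in natural images
noise = rng.normal(0.0, 1.0, size=(4, 3, 8, 8))     # pure-noise images
batch = np.concatenate([natural, noise])
is_noise = np.array([False] * 4 + [True] * 4)
normed = dar_bn(batch, is_noise)
```

After routing, each group is normalized against its own distribution, which is what lets noise and natural images share the rest of the network.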
arXiv Detail & Related papers (2021-12-16T11:51:35Z) - Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation [91.93949787122818]
We present Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image.
We present an accelerated and light version of Zero-DCE, called Zero-DCE++, that takes advantage of a tiny network with just 10K parameters.
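The curve estimation described above can be sketched in a few lines. The iterative curve LE(x) = x + alpha * x * (1 - x) is the published Zero-DCE formulation; everything else here (constant alpha maps instead of DCE-Net predictions, the toy image size, the number of iterations) is an illustrative assumption.

```python
import numpy as np

def zero_dce_enhance(image, alpha_maps):
    # Apply the Zero-DCE light-enhancement curve iteratively:
    #   LE(x) = x + alpha * x * (1 - x),
    # with one per-pixel alpha map per iteration. For alpha in (0, 1] and
    # pixel values in (0, 1), each step brightens while staying below 1.
    x = image
    for alpha in alpha_maps:
        x = x + alpha * x * (1.0 - x)
    return x

rng = np.random.default_rng(1)
dark = rng.uniform(0.05, 0.3, size=(32, 32))  # simulated low-light image
# In the real method, DCE-Net predicts the per-pixel maps; constants stand in.
alpha_maps = [np.full_like(dark, 0.8) for _ in range(8)]
bright = zero_dce_enhance(dark, alpha_maps)
```

Because the curve is monotone and bounded on (0, 1), the output needs no clipping, which is part of what makes the tiny 10K-parameter Zero-DCE++ variant feasible.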
arXiv Detail & Related papers (2021-03-01T09:21:51Z) - DeFlow: Learning Complex Image Degradations from Unpaired Data with Conditional Flows [145.83812019515818]
We propose DeFlow, a method for learning image degradations from unpaired data.
We model the degradation process in the latent space of a shared flow-decoder network.
We validate our DeFlow formulation on the task of joint image restoration and super-resolution.
arXiv Detail & Related papers (2021-01-14T18:58:01Z) - Syn2Real Transfer Learning for Image Deraining using Gaussian Processes [92.15895515035795]
CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality.
Due to challenges in obtaining real world fully-labeled image deraining datasets, existing methods are trained only on synthetically generated data.
We propose a Gaussian Process-based semi-supervised learning framework that enables the network to learn deraining from a synthetic dataset.
arXiv Detail & Related papers (2020-06-10T00:33:18Z) - Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement [156.18634427704583]
The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image.
arXiv Detail & Related papers (2020-01-19T13:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.