Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced
Classification by Training on Random Noise Images
- URL: http://arxiv.org/abs/2112.08810v1
- Date: Thu, 16 Dec 2021 11:51:35 GMT
- Title: Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced
Classification by Training on Random Noise Images
- Authors: Shiran Zada, Itay Benou and Michal Irani
- Abstract summary: We present a surprisingly simple yet highly effective method to mitigate poor generalization when training data is scarce or highly imbalanced.
Unlike the common use of additive noise or adversarial noise for data augmentation, we propose directly training on pure random noise images.
We present a new Distribution-Aware Routing Batch Normalization layer (DAR-BN), which enables training on pure noise images in addition to natural images within the same network.
- Score: 12.91269560135337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite remarkable progress on visual recognition tasks, deep neural-nets
still struggle to generalize well when training data is scarce or highly
imbalanced, rendering them extremely vulnerable to real-world examples. In this
paper, we present a surprisingly simple yet highly effective method to mitigate
this limitation: using pure noise images as additional training data. Unlike
the common use of additive noise or adversarial noise for data augmentation, we
propose an entirely different perspective by directly training on pure random
noise images. We present a new Distribution-Aware Routing Batch Normalization
layer (DAR-BN), which enables training on pure noise images in addition to
natural images within the same network. This encourages generalization and
suppresses overfitting. Our proposed method significantly improves imbalanced
classification performance, obtaining state-of-the-art results on a large
variety of long-tailed image classification datasets (CIFAR-10-LT,
CIFAR-100-LT, ImageNet-LT, Places-LT, and CelebA-5). Furthermore, our method is
extremely simple and easy to use as a general new augmentation tool (on top of
existing augmentations), and can be incorporated in any training scheme. It
does not require any specialized data generation or training procedures, thus
keeping training fast and efficient.
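As a hedged illustration of the two ingredients the abstract names, a minimal PyTorch sketch follows. The names (DARBatchNorm2d, sample_pure_noise), the choice to share the natural branch's affine parameters with the noise branch, and the assumption that noise images are drawn from a per-channel Gaussian matching the dataset's mean/std are all illustrative readings of the abstract, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class DARBatchNorm2d(nn.Module):
    """Sketch of a Distribution-Aware Routing BN layer (DAR-BN).

    Natural images pass through a standard BN that tracks running
    statistics; pure-noise images are normalized with their own
    per-batch statistics so they never contaminate the statistics
    used for natural images. Reusing the natural branch's affine
    parameters for the noise branch is an assumption.
    """

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_natural = nn.BatchNorm2d(num_features)
        self.bn_noise = nn.BatchNorm2d(
            num_features, affine=False, track_running_stats=False
        )

    def forward(self, x: torch.Tensor, is_noise: torch.Tensor) -> torch.Tensor:
        # is_noise: boolean mask of shape (batch,) marking pure-noise samples.
        out = torch.empty_like(x)
        natural = ~is_noise
        if natural.any():
            out[natural] = self.bn_natural(x[natural])
        if is_noise.any():
            normed = self.bn_noise(x[is_noise])
            # Apply the affine parameters learned on natural images.
            w = self.bn_natural.weight.view(1, -1, 1, 1)
            b = self.bn_natural.bias.view(1, -1, 1, 1)
            out[is_noise] = normed * w + b
        return out


def sample_pure_noise(n: int, mean: torch.Tensor, std: torch.Tensor,
                      shape=(3, 32, 32)) -> torch.Tensor:
    """Draw pure-noise images from a per-channel Gaussian whose mean/std
    match the natural training images (one plausible reading of
    'pure random noise images'), clamped to the valid pixel range."""
    noise = torch.randn(n, *shape) * std.view(1, -1, 1, 1) + mean.view(1, -1, 1, 1)
    return noise.clamp(0.0, 1.0)
```

One natural way to use these pieces, consistent with the paper's oversampling framing (though not spelled out in the abstract), is to rebalance each batch by assigning minority-class labels to the sampled noise images and replacing each nn.BatchNorm2d in the backbone with DARBatchNorm2d, carrying the routing mask alongside each sample.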
Related papers
- Self-Calibrated Variance-Stabilizing Transformations for Real-World Image Denoising [19.08732222562782]
Supervised deep learning has become the method of choice for image denoising.
We show that, contrary to popular belief, denoising networks specialized in the removal of Gaussian noise can be efficiently leveraged in favor of real-world image denoising.
arXiv Detail & Related papers (2024-07-24T16:23:46Z)
- EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training [79.96741042766524]
We reformulate the training curriculum as a soft-selection function.
We show that gradually exposing the contents of natural images can be readily achieved by controlling the intensity of data augmentation.
The resulting method, EfficientTrain++, is simple, general, yet surprisingly effective.
arXiv Detail & Related papers (2024-05-14T17:00:43Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training (a minimal sketch of this training step appears after this list).
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
- Linear Combinations of Patches are Unreasonably Effective for Single-Image Denoising [5.893124686141782]
Deep neural networks have revolutionized image denoising by achieving significant accuracy improvements.
To alleviate the requirement to learn image priors externally, single-image methods perform denoising solely based on the analysis of the input noisy image.
This work investigates the effectiveness of linear combinations of patches for denoising under this constraint.
arXiv Detail & Related papers (2022-12-01T10:52:03Z)
- NoiSER: Noise is All You Need for Enhancing Low-Light Images Without Task-Related Data [103.04999391668753]
We show that it is possible to enhance a low-light image without any task-related training data.
Technically, we propose a new, magical, effective and efficient method, termed Noise SElf-Regression (NoiSER).
Our NoiSER is highly competitive with current LLIE models trained on task-related data, in terms of both quantitative and visual results.
arXiv Detail & Related papers (2022-11-09T06:18:18Z)
- Enhancing convolutional neural network generalizability via low-rank weight approximation [6.763245393373041]
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
arXiv Detail & Related papers (2022-09-26T14:11:05Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Fidelity Estimation Improves Noisy-Image Classification with Pretrained Networks [12.814135905559992]
We propose a method that can be applied on a pretrained classifier.
Our method exploits a fidelity map estimate that is fused into the internal representations of the feature extractor.
We show that when using our oracle fidelity map we even outperform the fully retrained methods, whether trained on noisy or restored images.
arXiv Detail & Related papers (2021-06-01T17:58:32Z)
- Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly [114.81028176850404]
Training generative adversarial networks (GANs) with limited data generally results in deteriorated performance and collapsed models.
We decompose the data-hungry GAN training into two sequential sub-problems.
Such a coordinated framework enables us to focus on lower-complexity and more data-efficient sub-problems.
arXiv Detail & Related papers (2021-02-28T05:20:29Z)
- Data-driven Meta-set Based Fine-Grained Visual Classification [61.083706396575295]
We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small amount of clean meta-set, we train a selection net in a meta-learning manner to distinguish in- and out-of-distribution noisy images.
arXiv Detail & Related papers (2020-08-06T03:04:16Z)
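As referenced in the Masked Image Training entry above, a minimal sketch of that training step might look as follows. The masking ratio, the channel-shared Bernoulli mask, and the MSE reconstruction loss are assumptions made for illustration, not details taken from that paper.

```python
import torch
import torch.nn.functional as F


def masked_training_step(model: torch.nn.Module, clean: torch.Tensor,
                         mask_ratio: float = 0.8) -> torch.Tensor:
    """One step of the masked-image idea: hide random pixels of the
    input and train the network to reconstruct the full image.
    `mask_ratio` is the fraction of pixels dropped (an assumed value)."""
    # Bernoulli keep-mask shared across channels, shape (B, 1, H, W).
    keep = (torch.rand_like(clean[:, :1]) > mask_ratio).float()
    pred = model(clean * keep)
    # Reconstruction loss against the unmasked target.
    return F.mse_loss(pred, clean)
```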