Robustness Enhancement in Neural Networks with Alpha-Stable Training Noise
- URL: http://arxiv.org/abs/2311.10803v1
- Date: Fri, 17 Nov 2023 10:00:47 GMT
- Title: Robustness Enhancement in Neural Networks with Alpha-Stable Training Noise
- Authors: Xueqiong Yuan, Jipeng Li, Ercan Engin Kuruoğlu
- Abstract summary: We explore the possibility of achieving stronger robustness with non-Gaussian impulsive noise, specifically alpha-stable noise.
By comparing the testing accuracy of models trained with Gaussian noise and with alpha-stable noise on data corrupted by different types of noise, we find that training with alpha-stable noise is more effective than training with Gaussian noise.
We propose a novel data augmentation method that replaces the Gaussian noise typically added to the training data with alpha-stable noise.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing use of deep learning on data collected by
imperfect sensors and in imperfect environments, the robustness of deep
learning systems has become an important issue. A common approach to obtaining
robustness to noise has been to train deep learning systems on data augmented
with Gaussian noise. In this work, we challenge this common choice of Gaussian
noise and explore the possibility of achieving stronger robustness with
non-Gaussian impulsive noise, specifically alpha-stable noise. Justified by the
Generalized Central Limit Theorem and evidenced by observations in various
application areas, alpha-stable noise is widely present in nature. By comparing
the testing accuracy of models trained with Gaussian noise and with
alpha-stable noise on data corrupted by different types of noise, we find that
training with alpha-stable noise is more effective than training with Gaussian
noise, especially when the dataset is corrupted by impulsive noise, thus
improving the robustness of the model. The generality of this conclusion is
validated through experiments conducted on various deep learning models with
image and time series datasets, as well as on other benchmark corrupted
datasets. Consequently, we propose a novel data augmentation method that
replaces the Gaussian noise typically added to the training data with
alpha-stable noise.
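As a concrete illustration of the proposed augmentation, the sketch below adds symmetric alpha-stable noise to a training batch using SciPy's levy_stable distribution. This is a minimal sketch, not the authors' implementation; the alpha and scale values are illustrative assumptions rather than settings reported in the paper.

```python
# Minimal sketch of alpha-stable data augmentation (not the authors' code):
# replace the Gaussian noise usually added to training batches with
# symmetric alpha-stable noise. alpha=1.5 and scale=0.05 are illustrative
# assumptions, not values taken from the paper.
import numpy as np
from scipy.stats import levy_stable

def augment_with_alpha_stable(x, alpha=1.5, scale=0.05, seed=None):
    """Add symmetric alpha-stable noise to a batch of training samples.

    alpha must lie in (0, 2]; alpha = 2 recovers Gaussian noise, while
    smaller alpha gives heavier, more impulsive tails.
    """
    noise = levy_stable.rvs(alpha, 0.0,  # beta = 0 gives symmetric noise
                            scale=scale, size=x.shape,
                            random_state=seed)
    return x + noise

# Usage: corrupt each batch before the forward pass during training.
batch = np.random.rand(32, 28, 28)  # e.g., a batch of grayscale images in [0, 1]
noisy_batch = augment_with_alpha_stable(batch, alpha=1.5, scale=0.05)
```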
Related papers
- NoiseBench: Benchmarking the Impact of Real Label Noise on Named Entity Recognition [3.726602636064681]
We present an analysis that shows that real noise is significantly more challenging than simulated noise.
We show that current state-of-the-art models for noise-robust learning fall far short of their theoretically achievable upper bound.
arXiv Detail & Related papers (2024-05-13T10:20:31Z) - SoftPatch: Unsupervised Anomaly Detection with Noisy Data [67.38948127630644]
This paper considers label-level noise in image sensory anomaly detection for the first time.
We propose a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level.
Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in coreset.
arXiv Detail & Related papers (2024-03-21T08:49:34Z) - Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Optimizing the Noise in Self-Supervised Learning: from Importance
Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z) - CFNet: Conditional Filter Learning with Dynamic Noise Estimation for
Real Image Denoising [37.29552796977652]
This paper considers real noise approximated by heteroscedastic Gaussian/Poisson Gaussian distributions with in-camera signal processing pipelines.
We propose a novel conditional filter in which the optimal kernels for different feature positions can be adaptively inferred by local features from the image and the noise map.
Also, we bring the idea of alternately performing noise estimation and non-blind denoising into the CNN structure, which continuously updates the noise prior to guide the iterative feature denoising.
arXiv Detail & Related papers (2022-11-26T14:28:54Z) - The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z) - Rethinking Noise Synthesis and Modeling in Raw Denoising [75.55136662685341]
We introduce a new perspective to synthesize noise by directly sampling from the sensor's real noise.
It inherently generates accurate raw image noise for different camera sensors.
arXiv Detail & Related papers (2021-10-10T10:45:24Z) - Removing Noise from Extracellular Neural Recordings Using Fully
Convolutional Denoising Autoencoders [62.997667081978825]
We propose a Fully Convolutional Denoising Autoencoder, which learns to produce a clean neuronal activity signal from a noisy multichannel input.
The experimental results on simulated data show that our proposed method can significantly improve the quality of noise-corrupted neural signals.
arXiv Detail & Related papers (2021-09-18T14:51:24Z) - Determining the origin of impulsive noise events using paired wireless
sound sensors [0.0]
This work investigates how to identify the source of impulsive noise events using a pair of wireless noise sensors.
One sensor is placed at a known noise source, and another sensor is placed at the noise receiver.
To avoid privacy issues, the approach uses on-edge preprocessing that converts the sound into privacy-compatible spectrograms.
arXiv Detail & Related papers (2021-08-23T14:19:42Z) - Denoising Distantly Supervised Named Entity Recognition via a
Hypergeometric Probabilistic Model [26.76830553508229]
Hypergeometric Learning (HGL) is a denoising algorithm for distantly supervised named entity recognition.
HGL takes both noise distribution and instance-level confidence into consideration.
Experiments show that HGL can effectively denoise the weakly-labeled data retrieved from distant supervision.
arXiv Detail & Related papers (2021-06-17T04:01:25Z) - Dynamic Layer Customization for Noise Robust Speech Emotion Recognition
in Heterogeneous Condition Training [16.807298318504156]
We show that we can improve performance by dynamically routing samples to specialized feature encoders for each noise condition.
We extend these improvements to the multimodal setting by dynamically routing samples to maintain temporal ordering.
arXiv Detail & Related papers (2020-10-21T18:07:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.