Robust Learning of Recurrent Neural Networks in Presence of Exogenous
Noise
- URL: http://arxiv.org/abs/2105.00996v2
- Date: Tue, 4 May 2021 15:08:48 GMT
- Title: Robust Learning of Recurrent Neural Networks in Presence of Exogenous
Noise
- Authors: Arash Amini, Guangyi Liu, Nader Motee
- Abstract summary: We propose a tractable robustness analysis for RNN models subject to input noise.
The robustness measure can be estimated efficiently using linearization techniques.
Our proposed methodology significantly improves the robustness of recurrent neural networks.
- Score: 22.690064709532873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recurrent neural networks (RNNs) have shown promising potential for learning
the dynamics of sequential data. However, artificial neural networks are known to
exhibit poor robustness in the presence of input noise, and the sequential
architecture of RNNs exacerbates the problem. In this paper, we use ideas
from control and estimation theory to propose a tractable robustness analysis
for RNN models that are subject to input noise. The variance of the output of
the noisy system is adopted as a robustness measure to quantify the impact of
noise on learning. We show that this robustness measure can be estimated
efficiently using linearization techniques. Building on these results, we propose a
learning method to enhance the robustness of an RNN with respect to exogenous
Gaussian noise with known statistics. Our extensive simulations on benchmark
problems reveal that the proposed methodology significantly improves the robustness
of recurrent neural networks.
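The variance-propagation idea described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' code: all names, dimensions, and weight values are hypothetical. It linearizes a one-layer tanh RNN around its noise-free trajectory and propagates the known input-noise covariance through the resulting time-varying linear system, taking the output variance as the robustness measure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, n_x, n_y, T = 8, 3, 2, 20
W = rng.normal(scale=0.3, size=(n_h, n_h))   # recurrent weights (hypothetical)
U = rng.normal(scale=0.3, size=(n_h, n_x))   # input weights
C = rng.normal(scale=0.3, size=(n_y, n_h))   # linear readout
Q = 0.01 * np.eye(n_x)                       # known input-noise covariance

x_seq = rng.normal(size=(T, n_x))            # nominal (noise-free) input sequence

h = np.zeros(n_h)                            # nominal hidden state
Sigma = np.zeros((n_h, n_h))                 # hidden-state covariance
for x in x_seq:
    pre = W @ h + U @ x
    D = np.diag(1.0 - np.tanh(pre) ** 2)     # Jacobian of tanh at the nominal point
    A, B = D @ W, D @ U                      # linearized state and input maps
    Sigma = A @ Sigma @ A.T + B @ Q @ B.T    # first-order covariance propagation
    h = np.tanh(pre)

robustness = np.trace(C @ Sigma @ C.T)       # output variance: smaller = more robust
```

A robust-training scheme along the lines of the abstract would add a term like `robustness` to the training loss, penalizing noise amplification.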
Related papers
- Hopfield-Enhanced Deep Neural Networks for Artifact-Resilient Brain
State Decoding [0.0]
We propose a two-stage computational framework combining Hopfield Networks for artifact data preprocessing with Convolutional Neural Networks (CNNs) for classification of brain states in rat neural recordings under different levels of anesthesia.
Performance across various levels of data compression and noise intensities showed that our framework can effectively mitigate artifacts, allowing the model to reach parity with the clean-data CNN at lower noise levels.
arXiv Detail & Related papers (2023-11-06T15:08:13Z)
- Learning Provably Robust Estimators for Inverse Problems via Jittering [51.467236126126366]
We investigate whether jittering, a simple regularization technique, is effective for learning worst-case robust estimators for inverse problems.
We show that jittering significantly enhances the worst-case robustness, but can be suboptimal for inverse problems beyond denoising.
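Jittering itself is easy to illustrate. The following sketch (a made-up linear setup, not from the paper) trains a linear estimator while adding fresh Gaussian noise to the inputs at every gradient step; in expectation this behaves like Tikhonov regularization, which is one way to see why it improves robustness.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))          # clean inputs (hypothetical data)
w_true = rng.normal(size=d)
y = X @ w_true                       # clean targets

w = np.zeros(d)
lr, sigma = 0.01, 0.5                # jitter level sigma is a tunable design choice
for _ in range(500):
    X_jit = X + sigma * rng.normal(size=X.shape)      # fresh jitter each step
    grad = X_jit.T @ (X_jit @ w - y) / n              # least-squares gradient on jittered data
    w -= lr * grad
# In expectation the fixed point solves (X^T X + n*sigma^2 I) w = X^T y,
# i.e. jittering shrinks w exactly like ridge regression would.
```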
arXiv Detail & Related papers (2023-07-24T14:19:36Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Training neural networks with structured noise improves classification and generalization [0.0]
We show how adding structure to noisy training data can substantially improve the algorithm performance.
We also prove that the so-called Hebbian Unlearning rule coincides with the training-with-noise algorithm when noise is maximal.
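A minimal illustration of the setting in which Hebbian unlearning operates (not the paper's experiments; all sizes and the unlearning rate are arbitrary choices): store binary patterns in a Hopfield network with the Hebbian rule, then subtract small contributions of attractors reached from random initial states, which tends to suppress spurious minima.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 64, 5
patterns = rng.choice([-1, 1], size=(P, N))   # binary patterns to store

W = patterns.T @ patterns / N                 # Hebbian learning rule
np.fill_diagonal(W, 0)                        # no self-connections

def relax(s, steps=50):
    """Run synchronous sign dynamics to (near) an attractor."""
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

eps = 0.01                                    # unlearning rate (arbitrary)
for _ in range(20):                           # unlearning pass
    s = relax(rng.choice([-1, 1], size=N))    # attractor found from a random start
    W -= eps * np.outer(s, s) / N             # subtract its Hebbian contribution
    np.fill_diagonal(W, 0)
```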
arXiv Detail & Related papers (2023-02-26T22:10:23Z)
- A Study of Deep CNN Model with Labeling Noise Based on Granular-ball Computing [0.0]
Granular-ball computing is an efficient, robust, and scalable learning method.
In this paper, we pioneer a granular-ball neural network model, which adopts a multi-granularity approach to filter label-noise samples during model training.
arXiv Detail & Related papers (2022-07-17T13:58:46Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
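For context, the interval bound propagation baseline mentioned above can be sketched in a few lines (a toy affine + ReLU layer with made-up weights, not the paper's reachability method): split the weight matrix into positive and negative parts, push an input box through the affine map, then exploit ReLU's monotonicity.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 3))                    # hypothetical layer weights
b = rng.normal(size=4)
l, u = -0.1 * np.ones(3), 0.1 * np.ones(3)     # input interval [l, u] around 0

W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
l_out = W_pos @ l + W_neg @ u + b              # lower pre-activation bound
u_out = W_pos @ u + W_neg @ l + b              # upper pre-activation bound
l_out = np.maximum(l_out, 0)                   # ReLU is monotone, so bounds
u_out = np.maximum(u_out, 0)                   # pass through directly
```

Repeating this layer by layer yields guaranteed (if conservative) output bounds; tighter methods such as the paper's interval reachability analysis aim to reduce that conservatism.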
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Image Denoising using Attention-Residual Convolutional Neural Networks [0.0]
We propose a new learning-based non-blind denoising technique named Attention Residual Convolutional Neural Network (ARCNN), and its extension to blind denoising named Flexible Attention Residual Convolutional Neural Network (FARCNN).
ARCNN achieved overall average PSNR results of around 0.44 dB and 0.96 dB for Gaussian and Poisson denoising, respectively. FARCNN presented very consistent results, even with slightly worse performance compared to ARCNN.
arXiv Detail & Related papers (2021-01-19T16:37:57Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.