Robust Processing-In-Memory Neural Networks via Noise-Aware
Normalization
- URL: http://arxiv.org/abs/2007.03230v2
- Date: Tue, 24 Nov 2020 05:35:46 GMT
- Title: Robust Processing-In-Memory Neural Networks via Noise-Aware
Normalization
- Authors: Li-Huang Tsai, Shih-Chieh Chang, Yu-Ting Chen, Jia-Yu Pan, Wei Wei and
Da-Cheng Juan
- Abstract summary: PIM accelerators often suffer from intrinsic noise in the physical components.
We propose a noise-agnostic method to achieve robust neural network performance against any noise setting.
- Score: 26.270754571140735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analog computing hardware, such as Processing-in-memory (PIM)
accelerators, has gradually received more attention for accelerating neural
network computations. However, PIM accelerators often suffer from intrinsic
noise in their physical components, making it challenging for neural network
models to match the performance they achieve on digital hardware. Previous
works on mitigating intrinsic noise assumed knowledge of the noise model and
required retraining the neural networks accordingly. In this paper, we
propose a noise-agnostic method to achieve robust neural network performance
against any noise setting. Our key observation is that the degradation of
performance is due to the distribution shifts in network activations, which are
caused by the noise. To properly track the shifts and calibrate the biased
distributions, we propose a "noise-aware" batch normalization layer, which is
able to align the distributions of the activations under variational noise
inherent in analog environments. Our method is simple, easy to implement,
general across various noise settings, and requires no retraining of the models. We
conduct experiments on several tasks in computer vision, including
classification, object detection and semantic segmentation. The results
demonstrate the effectiveness of our method: it achieves robust performance
under a wide range of noise settings and is more reliable than existing
methods. We believe
that our simple yet general method can facilitate the adoption of analog
computing devices for neural networks.
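
The recipe implied by the abstract, re-estimating batch normalization
statistics under the device noise instead of retraining any weights, can be
summarized in a short sketch. The snippet below is a minimal PyTorch
illustration, not the authors' implementation; the multiplicative Gaussian
weight noise (noise_std) and the calibration loop are assumptions made for
the example.

```python
import torch
import torch.nn as nn

# Hypothetical multiplicative Gaussian weight noise, standing in for the
# intrinsic noise of an analog PIM crossbar (an assumption for this sketch).
class NoisyLinear(nn.Linear):
    def __init__(self, in_features, out_features, noise_std=0.1):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        # Perturb the weights at inference time, as a noisy device would.
        noisy_w = self.weight * (1 + torch.randn_like(self.weight) * self.noise_std)
        return nn.functional.linear(x, noisy_w, self.bias)

@torch.no_grad()
def recalibrate_bn(model, calib_loader):
    """Refresh BatchNorm running statistics under the noisy forward pass.

    No weights are updated; only the BN running mean/variance are
    re-estimated, which realigns the shifted activation distributions
    without any retraining.
    """
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # None -> cumulative moving average
    model.train()  # BN updates its running stats only in train mode
    for x, _ in calib_loader:
        model(x)
    model.eval()
```

After recalibrate_bn(model, calib_loader), the model is evaluated with the
same noisy forward pass; because only the BN statistics changed, the
procedure stays noise-agnostic and needs no knowledge of the noise model's
parameters.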
Related papers
- Impact of white noise in artificial neural networks trained for classification: performance and noise mitigation strategies [0.0]
We consider how additive and multiplicative Gaussian white noise at the neuronal level can affect the accuracy of the network.
We adapt several noise reduction techniques to the essential setting of classification tasks.
arXiv Detail & Related papers (2024-11-07T01:21:12Z)
- sVAD: A Robust, Low-Power, and Light-Weight Voice Activity Detection with Spiking Neural Networks [51.516451451719654]
Spiking Neural Networks (SNNs) are known to be biologically plausible and power-efficient.
This paper introduces a novel SNN-based Voice Activity Detection model, referred to as sVAD.
It provides effective auditory feature representation through SincNet and 1D convolution, and improves noise robustness with attention mechanisms.
arXiv Detail & Related papers (2024-03-09T02:55:44Z)
- Degradation-Noise-Aware Deep Unfolding Transformer for Hyperspectral Image Denoising [9.119226249676501]
Hyperspectral images (HSIs) are often quite noisy because of narrow-band spectral filtering.
To reduce the noise in HSI data cubes, both model-driven and learning-based denoising algorithms have been proposed.
This paper proposes a Degradation-Noise-Aware Unfolding Network (DNA-Net) that addresses these issues.
arXiv Detail & Related papers (2023-05-06T13:28:20Z)
- Walking Noise: On Layer-Specific Robustness of Neural Architectures against Noisy Computations and Associated Characteristic Learning Dynamics [1.5184189132709105]
We discuss the implications of additive, multiplicative and mixed noise for different classification tasks and model architectures.
We propose a methodology called Walking Noise which injects layer-specific noise to measure the robustness.
We conclude with a discussion of this methodology in practice, including its use for tailored multi-execution in noisy environments.
arXiv Detail & Related papers (2022-12-20T17:09:08Z)
- Simple Pooling Front-ends For Efficient Audio Classification [56.59107110017436]
We show that eliminating the temporal redundancy in the input audio features could be an effective approach for efficient audio classification.
We propose a family of simple pooling front-ends (SimPFs) which use simple non-parametric pooling operations to reduce the redundant information.
SimPFs can reduce the number of floating-point operations of off-the-shelf audio neural networks by more than half.
arXiv Detail & Related papers (2022-10-03T14:00:41Z)
- Multi-stage image denoising with the wavelet transform [125.2251438120701]
Deep convolutional neural networks (CNNs) are used for image denoising by automatically mining accurate structure information.
We propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN) built from three stages: a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB).
arXiv Detail & Related papers (2022-09-26T03:28:23Z)
- Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis [148.16279746287452]
We propose a swin-conv block to incorporate the local modeling ability of the residual convolutional layer and the non-local modeling ability of the swin transformer block.
For the training data synthesis, we design a practical noise degradation model which takes into consideration different kinds of noise.
Experiments on AWGN removal and real image denoising demonstrate that the new network architecture design achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-03-24T18:11:31Z)
- Removing Noise from Extracellular Neural Recordings Using Fully Convolutional Denoising Autoencoders [62.997667081978825]
We propose a Fully Convolutional Denoising Autoencoder, which learns to produce a clean neuronal activity signal from a noisy multichannel input.
The experimental results on simulated data show that our proposed method can significantly improve the quality of noise-corrupted neural signals.
arXiv Detail & Related papers (2021-09-18T14:51:24Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Understanding and mitigating noise in trained deep neural networks [0.0]
We study the propagation of noise in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers.
We find that noise accumulation is generally bounded, and adding additional network layers does not worsen the signal-to-noise ratio beyond a limit.
We identify criteria that allow engineers to design novel noise-resilient neural network hardware.
arXiv Detail & Related papers (2021-03-12T17:16:26Z)
- Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation [12.30062870698165]
We show how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output.
We propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks.
Our method yields models with up to two times greater noise tolerance than the previous best attempts.
arXiv Detail & Related papers (2020-01-14T18:59:48Z)
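
The distillation-plus-noise-injection recipe from this last entry can also be
sketched briefly. The snippet below is a hedged PyTorch illustration, not the
paper's implementation; the input-noise model, loss weight alpha, and
temperature T are assumptions chosen for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_with_noise(student, teacher, loader, optimizer,
                       noise_std=0.1, alpha=0.5, T=4.0):
    """One epoch of knowledge distillation with noise injection.

    A clean teacher guides a student whose forward pass is perturbed, so the
    student learns representations that survive analog-hardware-style noise.
    Hyperparameters here are illustrative, not the paper's settings.
    """
    teacher.eval()
    student.train()
    for x, y in loader:
        with torch.no_grad():
            t_logits = teacher(x)  # clean teacher predictions
        # Inject Gaussian noise into the student's input as a stand-in for
        # noisy analog computation (real setups perturb weights/activations).
        s_logits = student(x + torch.randn_like(x) * noise_std)
        # Standard softened-label distillation loss plus hard-label loss.
        kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                      F.softmax(t_logits / T, dim=1),
                      reduction="batchmean") * T * T
        ce = F.cross_entropy(s_logits, y)
        loss = alpha * kd + (1 - alpha) * ce
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Training the student under the same kind of perturbation it will face at
inference, while matching the clean teacher's softened outputs, is what
drives the improved noise tolerance reported above.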