A Diffractive Neural Network with Weight-Noise-Injection Training
- URL: http://arxiv.org/abs/2006.04462v3
- Date: Sat, 20 Jun 2020 10:09:27 GMT
- Title: A Diffractive Neural Network with Weight-Noise-Injection Training
- Authors: Jiashuo Shi
- Abstract summary: We propose a diffractive neural network with strong robustness based on Weight Noise Injection training.
It achieves accurate and fast optical classification even when the diffraction layers have a certain amount of surface-shape error.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a diffractive neural network with strong robustness based on
Weight Noise Injection training, which achieves accurate and fast optical
classification even when the diffraction layers have a certain amount of
surface-shape error. To the best of our knowledge, this is the first time that
weight noise injection during training has been used to reduce the impact of
external interference on deep-learning inference results. In the proposed
method, the diffractive neural network learns the mapping between the input
image and its label in Weight Noise Injection mode, making the network's
weights insensitive to modest changes and thereby improving its noise
resistance at low cost. By comparing the network's accuracy under different
noise levels, we verify that the proposed network (SRNN) maintains high
accuracy even under severe noise.
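The abstract names the technique but not the training loop. As a concrete illustration, here is a minimal NumPy sketch of weight-noise-injection training on a toy logistic-regression stand-in; the toy data, the Gaussian noise model, and the hyperparameters `lr` and `sigma` are assumptions for illustration only, not details from the paper (whose actual network is an optical diffractive stack).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: two Gaussian blobs, binary classification.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
               rng.normal(+1.0, 1.0, (200, 2))])
y = np.repeat([0.0, 1.0], 200)

W = rng.normal(0.0, 0.1, 2)    # the "clean" weights that are actually stored
b = 0.0
lr, sigma = 0.1, 0.05          # learning rate and weight-noise std (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # Weight Noise Injection: every step, run the forward/backward pass
    # through a *noisy copy* of the weights, but update the clean weights.
    W_noisy = W + rng.normal(0.0, sigma, W.shape)
    b_noisy = b + rng.normal(0.0, sigma)
    p = sigmoid(X @ W_noisy + b_noisy)     # forward pass with noisy weights
    g = (p - y) / len(y)                   # d(cross-entropy)/d(logits)
    W -= lr * (X.T @ g)                    # apply update to clean parameters
    b -= lr * g.sum()

# Inference-time check: perturb the learned weights (mimicking surface-shape
# error in the fabricated layers) and see whether accuracy survives.
W_err = W + rng.normal(0.0, sigma, W.shape)
acc = ((sigmoid(X @ W_err + b) > 0.5).astype(float) == y).mean()
print(f"accuracy with perturbed weights: {acc:.3f}")
```

The key design choice is that the noise is resampled at every step, so the optimizer is pushed toward solutions whose accuracy survives modest weight perturbation.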
Related papers
- Internal noise in hardware deep and recurrent neural networks helps with learning [0.0]
Internal noise during training affects the final performance of recurrent and deep neural networks.
In most cases, both deep and echo state networks benefit from internal noise during training, as it enhances their resilience to noise.
arXiv Detail & Related papers (2025-04-18T16:26:46Z)
- A Tunable Despeckling Neural Network Stabilized via Diffusion Equation [15.996302571895045]
Multiplicative Gamma noise removal is a critical research area in synthetic aperture radar (SAR) imaging.
We propose a tunable, regularized neural network that unrolls a denoising unit and a regularization unit into a single network for end-to-end training.
arXiv Detail & Related papers (2024-11-24T17:08:43Z) - Impact of white noise in artificial neural networks trained for classification: performance and noise mitigation strategies [0.0]
We consider how additive and multiplicative Gaussian white noise at the neuronal level can affect the accuracy of the network (a toy sketch of this noise model appears after this list).
We adapt several noise reduction techniques to the essential setting of classification tasks.
arXiv Detail & Related papers (2024-11-07T01:21:12Z) - Learning Provably Robust Estimators for Inverse Problems via Jittering [51.467236126126366]
We investigate whether jittering, a simple regularization technique that adds noise to the training inputs, is effective for learning worst-case robust estimators for inverse problems (see the jittering sketch after this list).
We show that jittering significantly enhances the worst-case robustness, but can be suboptimal for inverse problems beyond denoising.
arXiv Detail & Related papers (2023-07-24T14:19:36Z)
- A Data-driven Loss Weighting Scheme across Heterogeneous Tasks for Image Denoising [67.02529586335473]
In variational denoising models, the weight in the data fidelity term plays the role of enhancing the noise-removal capability.
In this work, we propose a data-driven loss weighting (DLW) scheme to address these issues.
Numerical results verify the remarkable performance of DLW in improving the ability of various variational denoising models to handle complex noise.
arXiv Detail & Related papers (2022-12-09T03:28:07Z)
- Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration [62.4971588282174]
We propose a new post-processing calibration method called Neural Clamping.
Our empirical results show that Neural Clamping significantly outperforms state-of-the-art post-processing calibration methods.
arXiv Detail & Related papers (2022-09-23T14:18:39Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called the Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
- Robust Learning of Recurrent Neural Networks in Presence of Exogenous Noise [22.690064709532873]
We propose a tractable robustness analysis for RNN models subject to input noise.
The robustness measure can be estimated efficiently using linearization techniques.
Our proposed methodology significantly improves robustness of recurrent neural networks.
arXiv Detail & Related papers (2021-05-03T16:45:05Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained on multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Neural networks with late-phase weights [66.72777753269658]
We show that the solutions found by SGD can be further improved by ensembling a subset of the weights in late stages of learning.
At the end of learning, we obtain back a single model by taking a spatial average in weight space.
arXiv Detail & Related papers (2020-07-25T13:23:37Z)
- Robust Processing-In-Memory Neural Networks via Noise-Aware Normalization [26.270754571140735]
PIM accelerators often suffer from intrinsic noise in the physical components.
We propose a noise-agnostic method to achieve robust neural network performance against any noise setting.
arXiv Detail & Related papers (2020-07-07T06:51:28Z)
- Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation [12.30062870698165]
We show how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output.
We propose using knowledge distillation combined with noise injection during training to achieve more noise-robust networks (a sketch of such a combined loss follows this list).
Our method achieves models with as much as two times greater noise tolerance compared with the previous best attempts.
arXiv Detail & Related papers (2020-01-14T18:59:48Z)
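The "Noisy Machines" entry that closes the list combines knowledge distillation with noise injection during training. Below is a minimal sketch of such a combined objective, assuming the standard soft-target formulation with temperature T and mixing weight alpha; both are hypothetical hyperparameters, not values from that paper.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_with_noise_loss(student_logits_noisy, teacher_logits, labels,
                            alpha=0.5, T=4.0):
    """Combined loss: hard-label cross-entropy on the *noisy* student's
    outputs plus a soft-target KL term toward the clean teacher.
    student_logits_noisy should come from a forward pass with injected
    noise; alpha and T are assumed hyperparameters."""
    n = len(labels)
    p_s = softmax(student_logits_noisy)
    ce = -np.log(p_s[np.arange(n), labels] + 1e-12).mean()
    q_t = softmax(teacher_logits, T)        # teacher soft targets
    q_s = softmax(student_logits_noisy, T)  # student soft predictions
    kl = (q_t * (np.log(q_t + 1e-12) - np.log(q_s + 1e-12))).sum(-1).mean()
    return alpha * ce + (1.0 - alpha) * (T * T) * kl

# Example call with random logits and integer class labels.
rng = np.random.default_rng(0)
print(distill_with_noise_loss(rng.normal(size=(8, 10)),
                              rng.normal(size=(8, 10)),
                              rng.integers(0, 10, size=8)))
```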
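For the white-noise paper above, the additive and multiplicative noise model at the neuronal level can be made concrete. A toy sketch, in which the tanh nonlinearity and the sigma defaults are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_dense(x, W, sigma_add=0.05, sigma_mul=0.05):
    """Dense layer whose neuron outputs carry Gaussian white noise:
    a multiplicative part that scales each activation and an additive
    part that offsets it."""
    a = np.tanh(x @ W)                                    # clean activation
    a *= 1.0 + sigma_mul * rng.standard_normal(a.shape)   # multiplicative noise
    a += sigma_add * rng.standard_normal(a.shape)         # additive noise
    return a

# Accuracy degradation can be measured by running the same trained
# weights through noisy_dense at several sigma settings.
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 3))
print(noisy_dense(x, W).shape)  # (4, 3)
```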
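Finally, the jittering entry trains estimators on noise-corrupted inputs. A one-function sketch, assuming Gaussian corruption at a fixed level sigma; choosing that level well is precisely what the jittering paper analyzes for worst-case robustness:

```python
import numpy as np

def jittered_pair(x_clean, sigma, rng=np.random.default_rng(2)):
    """Jittering: corrupt the input with Gaussian noise during training,
    keeping the clean signal as the regression target."""
    x_noisy = x_clean + sigma * rng.standard_normal(x_clean.shape)
    return x_noisy, x_clean  # train estimator f so that f(x_noisy) ~ x_clean
```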
This list is automatically generated from the titles and abstracts of the papers on this site.