Boolean learning under noise-perturbations in hardware neural networks
- URL: http://arxiv.org/abs/2003.12319v2
- Date: Fri, 25 Jun 2021 09:41:21 GMT
- Title: Boolean learning under noise-perturbations in hardware neural networks
- Authors: Louis Andreoli, Xavier Porte, Stéphane Chrétien, Maxime Jacquot, Laurent Larger and Daniel Brunner
- Abstract summary: We find that noise strongly modifies the system's path during convergence, and surprisingly fully decorrelates the final readout weight matrices.
This highlights the importance of understanding architecture, noise and learning algorithm as interacting players.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A high efficiency hardware integration of neural networks benefits from
realizing nonlinearity, network connectivity and learning fully in a physical
substrate. Multiple systems have recently implemented some or all of these
operations, yet the focus was placed on addressing technological challenges.
Fundamental questions regarding learning in hardware neural networks remain
largely unexplored. Noise in particular is unavoidable in such architectures,
and here we investigate its interaction with a learning algorithm using an
opto-electronic recurrent neural network. We find that noise strongly modifies
the system's path during convergence, and surprisingly fully decorrelates the
final readout weight matrices. This highlights the importance of understanding
architecture, noise and learning algorithm as interacting players, and
therefore identifies the need for mathematical tools for noisy, analogue system
optimization.
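To make the learning setting concrete, here is a minimal sketch of greedy Boolean readout learning with a noisy error evaluation, in the spirit of the abstract. The stand-in network states, the noise level `sigma`, and all names are illustrative assumptions, not the authors' opto-electronic implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: recorded states of a fixed recurrent network are
# read out through Boolean (+1/-1) weights; only the readout is learned.
N, T = 50, 200                                   # neurons, time steps
states = np.tanh(rng.normal(size=(T, N)))        # stand-in for recorded network states
target = rng.normal(size=T)                      # stand-in for the target signal
sigma = 0.05                                     # readout noise amplitude (assumption)

def noisy_error(w_bool):
    """Mean-squared readout error, evaluated under additive noise."""
    y = states @ w_bool + sigma * rng.normal(size=T)   # noise perturbs every evaluation
    return np.mean((y - target) ** 2)

# Greedy Boolean learning: flip one readout weight at a time and keep the
# flip only if the (noisy) error estimate decreases.
w = rng.choice([-1.0, 1.0], size=N)
err = noisy_error(w)
for epoch in range(20):
    for i in rng.permutation(N):
        w[i] *= -1
        new_err = noisy_error(w)
        if new_err < err:
            err = new_err        # accept the flip
        else:
            w[i] *= -1           # reject: undo the flip
print("final noisy error:", err)
```

Because each error evaluation is noisy, repeated runs of this loop follow different convergence paths and end in different Boolean weight configurations, which is the kind of effect the abstract highlights.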
Related papers
- Deep Learning Meets Sparse Regularization: A Signal Processing
Perspective [17.12783792226575]
We present a mathematical framework that characterizes the functional properties of neural networks that are trained to fit to data.
Key mathematical tools which support this framework include transform-domain sparse regularization, the Radon transform of computed tomography, and approximation theory.
This framework explains the effect of weight decay regularization in neural network training, the use of skip connections and low-rank weight matrices in network architectures, and the role of sparsity in neural networks, and it shows why neural networks can perform well in high-dimensional problems.
arXiv Detail & Related papers (2023-01-23T17:16:21Z)
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, which corresponds to an increase in robustness.
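As an illustration of the dropout mechanism mentioned above, here is a minimal sketch of a Bernoulli dropout mask applied during training (inverted dropout); the activation shapes and rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, rate=0.5, training=True):
    """Inverted dropout: randomly zero units during training, rescale the rest."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate   # keep each unit with prob 1 - rate
    return activations * mask / (1.0 - rate)

h = np.tanh(rng.normal(size=(4, 8)))             # a batch of hidden activations
h_train = dropout(h, rate=0.5, training=True)    # units dropped at random each pass
h_eval = dropout(h, training=False)              # unchanged at evaluation time
```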
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Understanding and mitigating noise in trained deep neural networks [0.0]
We study the propagation of noise in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers.
We find that noise accumulation is generally bounded, and adding additional network layers does not worsen the signal-to-noise ratio beyond a limit.
We identify criteria allowing engineers to design noise-resilient novel neural network hardware.
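A toy numerical sketch of the noise-propagation question studied in that paper: propagate a signal through fully connected layers whose neurons add Gaussian noise and track the signal-to-noise ratio per layer. The layer width, depth, noise level, and nonlinearity are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
width, depth, sigma = 100, 10, 0.1   # illustrative layer width, depth, noise level

# Random fully connected layers with tanh neurons.
weights = [rng.normal(scale=1 / np.sqrt(width), size=(width, width)) for _ in range(depth)]

x = rng.normal(size=width)
clean, noisy = x.copy(), x.copy()
for layer, W in enumerate(weights, start=1):
    clean = np.tanh(W @ clean)                                   # noise-free reference
    noisy = np.tanh(W @ noisy) + sigma * rng.normal(size=width)  # noisy neurons
    snr = np.var(clean) / np.var(noisy - clean)                  # crude per-layer SNR
    print(f"layer {layer:2d}: SNR ~ {snr:.2f}")
```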
arXiv Detail & Related papers (2021-03-12T17:16:26Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- An SMT-Based Approach for Verifying Binarized Neural Networks [1.4394939014120451]
We propose an SMT-based technique for verifying Binarized Neural Networks.
One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components.
We implement our technique as an extension to the Marabou framework, and use it to evaluate the approach on popular binarized neural network architectures.
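For flavour, a minimal sketch of encoding a single binarized neuron as SMT constraints with the z3 solver (not the Marabou extension described in the paper); the weights and the query are arbitrary assumptions.

```python
from z3 import Bools, If, Sum, Solver, Not, sat

# Illustrative binarized neuron: inputs and weights are +1/-1, output = sign(w . x).
weights = [1, -1, 1, 1, -1]                       # assumed binarized weights
xs = Bools(" ".join(f"x{i}" for i in range(5)))   # Bool x_i encodes input +1 (True) / -1 (False)

pre_activation = Sum([If(x, w, -w) for x, w in zip(xs, weights)])
fires = pre_activation >= 0                       # sign activation: output +1 iff sum >= 0

# Query: can the neuron fire while the first two inputs are both -1?
s = Solver()
s.add(fires, Not(xs[0]), Not(xs[1]))
if s.check() == sat:
    print("satisfying input found:", s.model())
else:
    print("no such input: the neuron never fires when x0 and x1 are -1")
```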
arXiv Detail & Related papers (2020-11-05T16:21:26Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and offers adaptability to larger search spaces and different tasks.
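A compact sketch of the idea of making connectivity differentiable by placing learnable gates on the edges of a complete graph; the node count, feature size, and sigmoid gating are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LearnableGraphLayer(nn.Module):
    """Aggregates node features over a complete graph with learnable edge gates."""
    def __init__(self, num_nodes, dim):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))  # one learnable scalar per edge
        self.lin = nn.Linear(dim, dim)

    def forward(self, node_feats):                 # node_feats: (num_nodes, dim)
        gates = torch.sigmoid(self.edge_logits)    # edge strengths in (0, 1), differentiable
        aggregated = gates @ node_feats            # weighted sum over incoming edges
        return torch.relu(self.lin(aggregated))

# Usage: the edge gates receive gradients like any other parameter.
layer = LearnableGraphLayer(num_nodes=6, dim=16)
x = torch.randn(6, 16)
loss = layer(x).pow(2).mean()
loss.backward()
print(layer.edge_logits.grad.shape)   # (6, 6): connectivity is learned by gradient descent
```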
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
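To illustrate the neuron model behind such event-driven hardware, here is a minimal leaky integrate-and-fire neuron in discrete time; the time constant, threshold, and input current are illustrative assumptions rather than values from the survey.

```python
import numpy as np

rng = np.random.default_rng(3)

# Leaky integrate-and-fire neuron, discrete time (illustrative parameters).
tau, threshold, v_reset, dt = 20.0, 1.0, 0.0, 1.0
steps = 200
input_current = 0.06 + 0.02 * rng.normal(size=steps)

v, spikes = 0.0, []
for t in range(steps):
    v += dt * (-v / tau + input_current[t])   # leaky integration of the input
    if v >= threshold:                        # threshold crossing emits a spike
        spikes.append(t)
        v = v_reset                           # reset membrane potential
print(f"{len(spikes)} spikes, first at t = {spikes[0] if spikes else None}")
```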
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
Binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
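A small sketch of the "minimize the quantization error" idea: binarize weights to a sign pattern with a per-layer scaling factor alpha chosen to minimize the L2 quantization error (alpha = mean(|W|), as in XNOR-Net-style schemes); the weight matrix here is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(64, 128))        # example full-precision weight matrix

# Binarize to alpha * sign(W); alpha = mean(|W|) minimizes ||W - alpha * sign(W)||^2.
B = np.sign(W)
B[B == 0] = 1.0                       # avoid zero entries in the sign pattern
alpha = np.abs(W).mean()
W_bin = alpha * B

error = np.linalg.norm(W - W_bin) / np.linalg.norm(W)
print(f"relative quantization error: {error:.3f}")
```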
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
- Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation [12.30062870698165]
We show how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output.
We propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks.
Our method achieves models with as much as two times greater noise tolerance compared with the previous best attempts.
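A hedged sketch of the training recipe summarized above: inject noise into the student's activations and distil from a clean teacher with a combined soft/hard loss. The architectures, temperature, noise level, and loss weighting are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyMLP(nn.Module):
    """Student network with Gaussian noise injected into its hidden activations."""
    def __init__(self, sigma=0.1):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(784, 256), nn.Linear(256, 10)
        self.sigma = sigma

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        if self.training:
            h = h + self.sigma * torch.randn_like(h)   # simulated analog hardware noise
        return self.fc2(h)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of soft-target KL (teacher) and hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# One illustrative step; the teacher stands in for any pretrained noise-free network.
student, teacher = NoisyMLP(), NoisyMLP(sigma=0.0).eval()
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
with torch.no_grad():
    teacher_logits = teacher(x)
loss = distillation_loss(student(x), teacher_logits, y)
loss.backward()
```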
arXiv Detail & Related papers (2020-01-14T18:59:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.