Noise-injected analog Ising machines enable ultrafast statistical
sampling and machine learning
- URL: http://arxiv.org/abs/2112.11534v1
- Date: Tue, 21 Dec 2021 21:33:45 GMT
- Title: Noise-injected analog Ising machines enable ultrafast statistical
sampling and machine learning
- Authors: Fabian Böhm, Diego Alonso-Urquijo, Guy Verschaffelt, Guy Van der Sande
- Abstract summary: We introduce a universal concept to achieve ultrafast statistical sampling with Ising machines by injecting analog noise.
With an opto-electronic Ising machine, we demonstrate that this can be used for accurate sampling of Boltzmann distributions.
We find that Ising machines can perform statistical sampling orders-of-magnitude faster than software-based methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ising machines are a promising non-von-Neumann computational concept for
neural network training and combinatorial optimization. However, while various
neural networks can be implemented with Ising machines, their inability to
perform fast statistical sampling makes them inefficient for training these
neural networks compared to digital computers. Here, we introduce a universal
concept to achieve ultrafast statistical sampling with Ising machines by
injecting analog noise. With an opto-electronic Ising machine, we demonstrate
that this can be used for accurate sampling of Boltzmann distributions and
unsupervised training of neural networks, with the same accuracy as
software-based training. Through simulations, we find that Ising machines can
perform statistical sampling orders of magnitude faster than software-based methods.
This makes Ising machines into efficient tools for machine learning and other
applications beyond combinatorial optimization.
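As a rough, software-level sketch of the sampling concept (the paper's actual device
is an opto-electronic Ising machine, and calibrating the injected noise against the
target temperature is part of its contribution), the NumPy snippet below injects
Gaussian noise into continuous-valued Ising dynamics and compares the distribution of
the binarized spin states with the exact Boltzmann distribution of a tiny instance.
The drift term, step size, and noise amplitude are illustrative assumptions, not the
paper's dynamics.

```python
# Illustrative sketch only: emulate noise-injected analog Ising dynamics and
# compare the empirical distribution of binarized spin states against the exact
# Boltzmann distribution of a tiny Ising instance.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Small random Ising instance: N spins, symmetric couplings J, local fields h.
N = 4
J = rng.normal(scale=0.5, size=(N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.2, size=N)
T = 1.0  # temperature of the target Boltzmann distribution

def energy(s):
    return -0.5 * s @ J @ s - h @ s

# Exact Boltzmann distribution by enumeration (only feasible for tiny N).
states = np.array(list(itertools.product([-1, 1], repeat=N)))
weights = np.exp(-np.array([energy(s) for s in states]) / T)
boltzmann = weights / weights.sum()

# Noise-injected analog dynamics: continuous spin amplitudes x follow a
# mean-field-like drift, and Gaussian noise injected at every step turns the
# deterministic solver into a stochastic sampler. The noise amplitude here is
# an illustrative choice and would need tuning against T.
dt, noise_std, steps, burn_in = 0.05, 0.6, 100_000, 5_000
x = rng.normal(scale=0.1, size=N)
counts = np.zeros(len(states))
for t in range(steps):
    drift = -x + np.tanh((J @ np.sign(x) + h) / T)
    x += dt * drift + noise_std * np.sqrt(dt) * rng.normal(size=N)
    if t >= burn_in:
        idx = int("".join("1" if xi > 0 else "0" for xi in x), 2)
        counts[idx] += 1
empirical = counts / counts.sum()

# A small total-variation distance indicates that the binarized analog states
# approximate the target Boltzmann distribution.
print("total variation distance:", 0.5 * np.abs(empirical - boltzmann).sum())
```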
Related papers
- Training a multilayer dynamical spintronic network with standard machine learning tools to perform time series classification [0.9786690381850356]
We propose to implement a recurrent neural network in hardware using spintronic oscillators as dynamical neurons.
We solve the sequential digits classification task with $89.83\pm2.91\%$ accuracy, as good as the equivalent software network.
arXiv Detail & Related papers (2024-08-05T21:12:12Z)
- Synaptic Sampling of Neural Networks [0.14732811715354452]
This paper describes the scANN technique -- (by coinflips) artificial neural networks -- which enables neural networks to be sampled directly by treating the weights as Bernoulli coin flips.
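A loose, hypothetical illustration of the coin-flip idea as stated in this summary
(not the paper's actual scANN procedure): each weight magnitude is read as a
Bernoulli probability, binary weights are sampled by coin flips, and predictions are
averaged over the sampled networks.

```python
# Toy coin-flip sampling of a network's weights; all names and the one-layer
# architecture are assumptions made for this illustration.
import numpy as np

rng = np.random.default_rng(1)

def coinflip_forward(x, probs, signs, n_samples=64):
    """Average outputs over networks whose weights are Bernoulli coin flips."""
    outputs = []
    for _ in range(n_samples):
        w = signs * rng.binomial(1, probs)  # each weight is flipped on or off
        outputs.append(np.tanh(x @ w))      # hypothetical one-layer toy network
    return np.mean(outputs, axis=0)

# Toy usage: probs and signs stand in for trained parameters.
probs = rng.uniform(size=(8, 3))
signs = rng.choice([-1.0, 1.0], size=(8, 3))
x = rng.normal(size=(5, 8))
print(coinflip_forward(x, probs, signs).shape)  # (5, 3)
```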
arXiv Detail & Related papers (2023-11-21T22:56:13Z)
- Training an Ising Machine with Equilibrium Propagation [2.3848738964230023]
Ising machines are hardware implementations of the Ising model of coupled spins.
In this study, we demonstrate a novel approach to train Ising machines in a supervised way.
Our findings establish Ising machines as a promising trainable hardware platform for AI.
arXiv Detail & Related papers (2023-05-22T15:40:01Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Continual learning autoencoder training for a particle-in-cell simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of physics simulations with high resolution.
This high resolution will impact the training of machine learning models, since storing such large amounts of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently with a running simulation, without storing data on disk.
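A minimal sketch of that streaming idea under assumed details (the paper targets
particle-in-cell simulations and a proper streaming framework; the toy snapshot
generator and linear autoencoder here are stand-ins): each snapshot is consumed for
one gradient step and then discarded, so nothing is written to disk.

```python
# Schematic streaming training of a linear autoencoder on simulation snapshots.
import numpy as np

rng = np.random.default_rng(4)
D, H, lr = 64, 8, 1e-2                     # snapshot size, latent size, step size
W_enc = rng.normal(scale=0.1, size=(D, H))
W_dec = rng.normal(scale=0.1, size=(H, D))

def simulation_snapshots(n_steps):
    """Stand-in for a running simulation that produces one snapshot per step."""
    state = rng.normal(size=D)
    for _ in range(n_steps):
        state = 0.99 * state + 0.1 * rng.normal(size=D)
        yield state

for x in simulation_snapshots(2_000):      # consume each snapshot once, then discard
    z = x @ W_enc                          # encode
    x_hat = z @ W_dec                      # decode
    err = x_hat - x                        # reconstruction error
    # plain SGD on the squared reconstruction error, one snapshot at a time
    W_dec -= lr * np.outer(z, err)
    W_enc -= lr * np.outer(x, err @ W_dec.T)
print("final reconstruction MSE:", float(np.mean(err ** 2)))
```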
arXiv Detail & Related papers (2022-11-09T09:55:14Z)
- Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
arXiv Detail & Related papers (2021-02-22T19:00:05Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Training of mixed-signal optical convolutional neural network with reduced quantization level [1.3381749415517021]
Mixed-signal artificial neural networks (ANNs) that employ analog matrix-multiplication accelerators can achieve higher speed and improved power efficiency.
Here we report a training method for mixed-signal ANNs with two types of errors in their analog signals: random noise and deterministic errors (distortions).
The results showed that mixed-signal ANNs trained with our proposed method can achieve an equivalent classification accuracy with a noise level of up to 50% of the ideal quantization step size.
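A rough sketch of the error model suggested by this summary, with assumed bit width
and training details (the paper's actual method and distortion model may differ): the
analog matrix multiply is emulated during training by quantizing its output and
injecting random noise scaled to a fraction of the ideal quantization step.

```python
# Emulated analog matmul with deterministic (quantization) and random (noise) errors.
import numpy as np

rng = np.random.default_rng(2)

def noisy_analog_matmul(x, w, n_bits=6, noise_frac=0.5):
    """Quantize the matmul output, then add noise scaled to the quantization step."""
    y = x @ w
    step = (y.max() - y.min() + 1e-12) / (2 ** n_bits)         # ideal quantization step
    y_q = np.round(y / step) * step                            # deterministic error
    noise = rng.normal(scale=noise_frac * step, size=y.shape)  # random noise
    return y_q + noise

# Example: a training forward pass would use this in place of x @ w.
x = rng.normal(size=(4, 16))
w = rng.normal(size=(16, 10))
print(noisy_analog_matmul(x, w).shape)  # (4, 10)
```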
arXiv Detail & Related papers (2020-08-20T20:46:22Z)
- Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits [99.59941892183454]
We propose Einsum Networks (EiNets), a novel implementation design for PCs.
At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum-operation.
We show that the implementation of Expectation-Maximization (EM) can be simplified for PCs, by leveraging automatic differentiation.
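A minimal illustration of the fusion idea, not the EiNets library itself (the layer
sizes and the binary sum-product form are assumptions made for this toy example):
many small sum-product units of a probabilistic-circuit layer are evaluated with one
einsum call instead of a Python loop over individual products and sums.

```python
# One monolithic einsum replacing L separate K x K sum-product evaluations.
import numpy as np

rng = np.random.default_rng(3)
K, L, B = 4, 10, 32              # components per unit, units in the layer, batch size
left = rng.random((B, L, K))     # left child values per unit, per batch element
right = rng.random((B, L, K))    # right child values per unit, per batch element
w = rng.random((L, K, K))
w /= w.sum(axis=(1, 2), keepdims=True)   # normalized mixture weights of each sum unit

# out[b, l] = sum_{i, j} w[l, i, j] * left[b, l, i] * right[b, l, j]
out = np.einsum('lij,bli,blj->bl', w, left, right)
print(out.shape)  # (32, 10)
```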
arXiv Detail & Related papers (2020-04-13T23:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.