Training an Ising Machine with Equilibrium Propagation
- URL: http://arxiv.org/abs/2305.18321v1
- Date: Mon, 22 May 2023 15:40:01 GMT
- Title: Training an Ising Machine with Equilibrium Propagation
- Authors: Jérémie Laydevant, Danijela Markovic, Julie Grollier
- Abstract summary: Ising machines are hardware implementations of the Ising model of coupled spins.
In this study, we demonstrate a novel approach to train Ising machines in a supervised way.
Our findings establish Ising machines as a promising trainable hardware platform for AI.
- Score: 2.3848738964230023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ising machines, which are hardware implementations of the Ising model of
coupled spins, have been influential in the development of unsupervised
learning algorithms at the origins of Artificial Intelligence (AI). However,
their application to AI has been limited due to the complexities in matching
supervised training methods with Ising machine physics, even though these
methods are essential for achieving high accuracy. In this study, we
demonstrate a novel approach to train Ising machines in a supervised way
through the Equilibrium Propagation algorithm, achieving comparable results to
software-based implementations. We employ the quantum annealing procedure of
the D-Wave Ising machine to train a fully-connected neural network on the MNIST
dataset. Furthermore, we demonstrate that the machine's connectivity supports
convolution operations, enabling the training of a compact convolutional
network with minimal spins per neuron. Our findings establish Ising machines as
a promising trainable hardware platform for AI, with the potential to enhance
machine learning applications.
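To make the training principle concrete, here is a minimal sketch of the Equilibrium Propagation update on an Ising-style energy model. It is an illustration under stated assumptions, not the authors' D-Wave implementation: spins are settled with simple asynchronous sweeps rather than quantum annealing, the input is encoded as a fixed bias, and the sizes, BETA, LR, and the relax/ep_step helpers are hypothetical.
```python
# Minimal sketch of Equilibrium Propagation (EP) on an Ising-style energy model.
# Assumptions (not taken from the paper): spins are relaxed by simple asynchronous
# sign updates instead of D-Wave quantum annealing, the input enters as a fixed
# bias, and the sizes/hyperparameters below are placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_HID, N_OUT = 64, 10        # hypothetical hidden/output spin counts
BETA, LR = 0.5, 0.01         # nudging strength and learning rate (assumed values)

N = N_HID + N_OUT
J = rng.normal(0.0, 0.1, (N, N))
J = np.triu(J, 1)
J = J + J.T                  # symmetric couplings, zero diagonal


def relax(J, bias, s, n_sweeps=20):
    """Settle to a (local) minimum of E(s) = -1/2 s.T @ J @ s - bias @ s."""
    for _ in range(n_sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1.0 if J[i] @ s + bias[i] > 0 else -1.0
    return s


def ep_step(J, x_bias, target):
    """One EP update: free phase, nudged phase, then a contrastive coupling update."""
    bias = np.concatenate([x_bias, np.zeros(N_OUT)])      # inputs act as biases
    s0 = np.where(rng.random(N) > 0.5, 1.0, -1.0)

    # Free phase: equilibrium of the unmodified energy.
    s_free = relax(J, bias, s0.copy())

    # Nudged phase: a weak bias (strength BETA) pulls output spins toward the target.
    nudge = np.concatenate([np.zeros(N_HID), BETA * target])
    s_nudged = relax(J, bias + nudge, s_free.copy())

    # Contrastive rule: dJ_ij ~ (1/BETA) * (s_i s_j |nudged - s_i s_j |free),
    # since -dE/dJ_ij = s_i s_j for the Ising energy above.
    dJ = (np.outer(s_nudged, s_nudged) - np.outer(s_free, s_free)) / BETA
    np.fill_diagonal(dJ, 0.0)
    return J + LR * dJ


# Toy usage with random data standing in for an MNIST example (illustration only).
x = rng.normal(0.0, 1.0, N_HID)
y = -np.ones(N_OUT)
y[3] = 1.0                   # one-hot-style target in {-1, +1}
J = ep_step(J, x, y)
```
The key point is that the same physical relaxation is run twice (free and nudged), and the coupling update only needs the pairwise spin correlations measured in the two phases, which is what makes the rule compatible with Ising hardware.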
Related papers
- Uncertainty Estimation in Multi-Agent Distributed Learning for AI-Enabled Edge Devices [0.0]
Edge IoT devices have seen a paradigm shift with the introduction of FPGAs and AI accelerators.
This advancement has vastly amplified their computational capabilities, emphasizing the practicality of edge AI.
Our study explores methods that enable distributed data processing through AI-enabled edge devices, enhancing collaborative learning capabilities.
arXiv Detail & Related papers (2024-03-14T07:40:32Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- A general learning scheme for classical and quantum Ising machines [0.0]
We propose a new machine learning model that is based on the Ising structure and can be efficiently trained using gradient descent.
We present some experimental results on the training and execution of the proposed learning model.
In particular, in the quantum realm, quantum resources are used for both the execution and the training of the model, offering a promising perspective on quantum machine learning.
arXiv Detail & Related papers (2023-10-27T18:07:02Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Prospects of federated machine learning in fluid dynamics [0.0]
In recent years, machine learning has sparked a renaissance in the fluid dynamics community owing to rapid developments in data science.
In this letter, we present a federated machine learning approach that enables localized clients to collaboratively learn an aggregated, shared predictive model.
We demonstrate the feasibility and prospects of such a decentralized learning approach with an effort to forge a deep learning surrogate model for reconstructing temporal fields.
arXiv Detail & Related papers (2022-08-15T06:15:04Z)
- Noise-injected analog Ising machines enable ultrafast statistical sampling and machine learning [0.0]
We introduce a universal concept to achieve ultrafast statistical sampling with Ising machines by injecting analog noise.
With an opto-electronic Ising machine, we demonstrate that this can be used for accurate sampling of Boltzmann distributions.
We find that Ising machines can perform statistical sampling orders-of-magnitude faster than software-based methods.
arXiv Detail & Related papers (2021-12-21T21:33:45Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Learning to Walk: Spike Based Reinforcement Learning for Hexapod Robot Central Pattern Generation [2.4603302139672003]
Methods such as gradient-based and deep reinforcement learning (RL) have been explored for bipeds, quadrupeds, and hexapods.
Recent advances in spiking neural networks (SNNs) promise a significant reduction in computation owing to the sparse firing of neurons.
We propose a reinforcement-based weight update technique for training a spiking pattern generator.
arXiv Detail & Related papers (2020-03-22T23:45:32Z)