Physical Deep Learning with Biologically Plausible Training Method
- URL: http://arxiv.org/abs/2204.13991v1
- Date: Fri, 1 Apr 2022 05:46:16 GMT
- Title: Physical Deep Learning with Biologically Plausible Training Method
- Authors: Mitsumasa Nakajima, Katsuma Inoue, Kenji Tanaka, Yasuo Kuniyoshi,
Toshikazu Hashimoto, Kohei Nakajima
- Abstract summary: We present physical deep learning by extending a biologically plausible training algorithm called direct feedback alignment.
We can emulate and accelerate the computation for this training on a simple and scalable physical system.
Our results provide practical solutions for the training and acceleration of neuromorphic computation.
- Score: 2.5608506499175094
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The ever-growing demand for further advances in artificial intelligence
motivated research on unconventional computation based on analog physical
devices. While such computation devices mimic brain-inspired analog information
processing, learning procedures still rely on methods optimized for digital
processing such as backpropagation. Here, we present physical deep learning by
extending a biologically plausible training algorithm called direct feedback
alignment. As the proposed method is based on random projection with arbitrary
nonlinear activation, we can train a physical neural network without knowledge
about the physical system. In addition, we can emulate and accelerate the
computation for this training on a simple and scalable physical system. We
demonstrate a proof of concept using a hierarchically connected
optoelectronic recurrent neural network called deep reservoir computer. By
constructing an FPGA-assisted optoelectronic benchtop, we confirmed the
potential for accelerated computation with competitive performance on
benchmarks. Our results provide practical solutions for the training and
acceleration of neuromorphic computation.
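To make the training rule concrete, here is a minimal NumPy sketch of plain direct feedback alignment on a toy two-hidden-layer network: the output error is delivered to each hidden layer through fixed random matrices rather than backpropagated through the forward weights. The layer sizes, tanh nonlinearity, and random-teacher task are illustrative assumptions; the paper's extension further replaces the exact derivative with an arbitrary nonlinear activation so that no knowledge of the physical system is needed.

```python
# Minimal sketch of direct feedback alignment (DFA); all sizes,
# the tanh nonlinearity, and the toy task are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, n_out = 8, 32, 4

W1 = rng.normal(0, 0.2, (n_h, n_in))    # trained forward weights
W2 = rng.normal(0, 0.2, (n_h, n_h))
W3 = rng.normal(0, 0.2, (n_out, n_h))
B1 = rng.normal(0, 0.2, (n_h, n_out))   # fixed random feedback, never trained
B2 = rng.normal(0, 0.2, (n_h, n_out))   # fixed random feedback, never trained

T = rng.normal(0, 0.5, (n_out, n_in))   # toy linear teacher providing targets
lr = 0.01
for step in range(2000):
    x = rng.normal(size=n_in)
    y = T @ x

    # Forward pass through the (possibly physical) nonlinear layers.
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    e = W3 @ h2 - y                     # output error

    # DFA: a random projection of the output error replaces the
    # backward pass, so the updates never use W2 or W3 transposed.
    d2 = (B2 @ e) * (1 - h2**2)
    d1 = (B1 @ e) * (1 - h1**2)

    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)
```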
Related papers
- Analog Alchemy: Neural Computation with In-Memory Inference, Learning and Routing [0.08965418284317034]
I explore an alternative approach to neural computation with memristive devices, where the unique physical dynamics of the devices are used for inference, learning, and routing.
I provide hardware evidence of the adaptability of local learning to memristive substrates, along with new material stacks and circuit blocks that aid in solving the credit assignment problem and in efficient routing between analog crossbars for scalable architectures.
arXiv Detail & Related papers (2024-12-30T10:35:03Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Training neural networks with end-to-end optical backpropagation [1.1602089225841632]
We show how to implement backpropagation, an algorithm for training a neural network, using optical processes.
Our approach is adaptable to various analog platforms, materials, and network structures.
It demonstrates the possibility of constructing neural networks entirely reliant on analog optical processes for both training and inference tasks.
arXiv Detail & Related papers (2023-08-09T21:11:26Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
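As a rough illustration of the forward-only learning signal that PFF builds on (the forward-forward "goodness" rule), the sketch below updates a single layer with purely local information: positive samples push the layer's squared activity above a threshold, negative samples push it below, and no backward pass is needed. The shapes, threshold, and logistic objective are assumptions; the full PFF generative and representation circuits are not shown.

```python
# Hedged sketch of a forward-forward-style local update for one layer;
# threshold and sizes are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (64, 32))        # a single layer's weights
theta = 2.0                             # goodness threshold

def ff_local_update(W, x, positive, lr=0.03):
    """One forward pass plus a purely local update of this layer."""
    h = np.maximum(W @ x, 0.0)          # ReLU activity
    g = float(h @ h)                    # goodness = summed squared activity
    sign = 1.0 if positive else -1.0
    p = 1.0 / (1.0 + np.exp(-sign * (g - theta)))
    # Ascend log p; d(goodness)/dW = 2 * outer(h, x), and h is already
    # zero on inactive ReLU units, so the update is fully local.
    W += lr * sign * (1.0 - p) * 2.0 * np.outer(h, x)
    return g

# Usage: real data as the positive sample, corrupted data as the negative.
x_pos = rng.normal(size=32)
x_neg = rng.permutation(x_pos)          # a crude negative sample
ff_local_update(W, x_pos, positive=True)
ff_local_update(W, x_neg, positive=False)
```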
arXiv Detail & Related papers (2023-01-04T05:34:48Z) - Spike-based local synaptic plasticity: A survey of computational models
and neuromorphic circuits [1.8464222520424338]
We review historical, bottom-up, and top-down approaches to modeling synaptic plasticity.
We identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules.
arXiv Detail & Related papers (2022-09-30T15:35:04Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs), which emulate the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target; its control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
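The sketch below illustrates the control-based credit assignment idea on a tiny linear network: an integral controller drives the output onto the target, and each layer then updates toward the activity the controller imposed. The feedback pathway is simplified to the forward transpose, and all gains and sizes are invented; the paper's method admits far more general feedback connectivity and nonlinear dynamics.

```python
# Hedged sketch of feedback-control-based credit assignment; the
# integrator gain, sizes, and transpose feedback are simplifications.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=4)
y_target = np.array([1.0, -1.0])

W1 = rng.normal(0, 0.3, (8, 4))
W2 = rng.normal(0, 0.3, (2, 8))
Q1 = W2.T.copy()                     # simplified feedback pathway

# Run the controller until the output is driven onto the target.
u = np.zeros(2)                      # integral of the output error
for _ in range(500):
    v1 = W1 @ x + Q1 @ u             # controlled hidden activity
    y = W2 @ v1
    u += 0.1 * (y_target - y)        # integrate the remaining error

# Credit assignment: each layer moves toward the activity the
# controller imposed, using only locally available signals.
lr = 0.1
W1 += lr * np.outer(Q1 @ u, x)       # v1 - W1 @ x equals Q1 @ u
W2 += lr * np.outer(y_target - y, v1)
```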
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - Gradient descent in materia through homodyne gradient extraction [2.012950941269354]
We demonstrate a simple yet efficient gradient extraction method, based on the principle of homodyne detection.
By perturbing the parameters that need to be optimized, we effectively obtain the gradient information in a highly robust and scalable manner.
Homodyne gradient extraction can in principle be fully implemented in materia, facilitating the development of autonomously learning material systems.
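A hedged sketch of the idea on a black-box loss: each parameter is modulated at its own frequency, and lock-in demodulation of the measured output recovers the gradient without any access to the system's internals. The toy loss, frequencies, and amplitude are invented for illustration.

```python
# Sketch of homodyne (lock-in) gradient extraction for a black-box loss;
# the toy quadratic loss and tone frequencies are illustrative.
import numpy as np

def homodyne_gradient(loss, theta, freqs, amp=1e-3, n_samples=2048):
    """Estimate dL/dtheta by modulating each parameter at its own
    integer-cycle frequency and demodulating the measured loss."""
    t = np.arange(n_samples)
    carriers = np.sin(2 * np.pi * np.outer(freqs, t) / n_samples)
    # Record the loss while all parameters wiggle simultaneously.
    signal = np.array([loss(theta + amp * carriers[:, k])
                       for k in range(n_samples)])
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        # Lock-in: correlate the signal with parameter i's carrier.
        grad[i] = 2.0 / (amp * n_samples) * (signal @ carriers[i])
    return grad

# Toy usage: quadratic loss whose gradient is known analytically.
theta = np.array([1.0, -2.0, 0.5])
loss = lambda p: float(p @ p)              # true gradient is 2 * theta
freqs = np.array([3.0, 5.0, 7.0])          # distinct, non-aliasing tones
print(homodyne_gradient(loss, theta, freqs))   # approx. [2, -4, 1]
```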
arXiv Detail & Related papers (2021-05-15T12:18:31Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms that mimic the operational principles of neurons and synapses.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z) - Structural plasticity on an accelerated analog neuromorphic hardware
system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
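As a toy illustration of the rewiring step, the sketch below prunes synapses whose weights have decayed below a threshold and reconnects them to new, randomly chosen presynaptic partners. The fixed fan-in, threshold, and seed weight are assumptions; BrainScaleS-2 performs this bookkeeping on-chip rather than in NumPy.

```python
# Toy sketch of structural plasticity by rewiring; fan-in, threshold,
# and seed weight are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_pre, n_post, fan_in = 64, 16, 8

# Each postsynaptic neuron keeps a fixed number of synapses,
# stored as (presynaptic index, weight) pairs.
pre = rng.choice(n_pre, size=(n_post, fan_in))
w = rng.uniform(0.0, 1.0, size=(n_post, fan_in))

def rewire(pre, w, threshold=0.05):
    """Prune synapses whose weight fell below the threshold and wire
    them to fresh random presynaptic partners with a small seed weight."""
    dead = w < threshold
    pre[dead] = rng.choice(n_pre, size=int(dead.sum()))
    w[dead] = 0.1                       # seed weight for new synapses
    return int(dead.sum())

# After each plasticity interval, replace the pruned synapses.
print(f"rewired {rewire(pre, w)} synapses")
```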
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.