Online training for high-performance analogue readout layers in photonic
reservoir computers
- URL: http://arxiv.org/abs/2012.10613v1
- Date: Sat, 19 Dec 2020 07:12:26 GMT
- Title: Online training for high-performance analogue readout layers in photonic
reservoir computers
- Authors: Piotr Antonik, Marc Haelterman, Serge Massar
- Abstract summary: Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals.
The major bottleneck of these implementations is the readout layer, which relies on slow offline post-processing.
Here we propose the use of online training to solve these issues.
- Score: 2.6104700758143666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Introduction. Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals. The performance of its hardware implementations is comparable to that of state-of-the-art digital algorithms on a series of benchmark tasks. The major bottleneck of these implementations is the readout layer, based on slow offline post-processing. The few analogue solutions proposed so far all suffered from a noticeable decrease in performance due to the added complexity of the setup. Methods. Here we propose the use of online training to solve these issues. We study the applicability of this method using numerical simulations of an experimentally feasible reservoir computer with an analogue readout layer. We also consider a nonlinear output layer, which would be very difficult to train with traditional methods. Results. We show numerically that online learning makes it possible to circumvent the added complexity of the analogue layer and to obtain the same level of performance as with a digital layer. Conclusion. This work paves the way to high-performance fully-analogue reservoir computers through the use of online training of the output layers.
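To make the method concrete, here is a minimal numerical sketch of online training of a reservoir readout. It assumes a simulated echo-state reservoir and a simple error-driven (LMS-style) weight update; the reservoir construction, parameter values, and toy task are illustrative stand-ins, not the authors' opto-electronic experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative echo-state reservoir (not the authors' opto-electronic setup).
N = 100                                     # reservoir size (assumption)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, N)

def reservoir_step(x, u):
    """One update of the reservoir state driven by scalar input u."""
    return np.tanh(W @ x + W_in * u)

# Online (LMS-style) training of a linear readout y = w . x: the weights
# are updated sample by sample, so no offline post-processing of the
# full state history is needed.
eta = 0.01                                  # learning rate (assumption)
w = np.zeros(N)

def train_online(inputs, targets):
    global w
    x = np.zeros(N)
    for u, d in zip(inputs, targets):
        x = reservoir_step(x, u)
        y = w @ x                           # an analogue readout computes this sum physically
        w += eta * (d - y) * x              # error-driven update, one step per sample

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
train_online(u[:-1], u[1:])
```

Because each weight update uses only the current sample, the same rule can in principle drive an analogue readout in real time, with no offline post-processing of the recorded state history.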
Related papers
- Deep Photonic Reservoir Computer for Speech Recognition [49.1574468325115]
Speech recognition is a critical task in the field of artificial intelligence and has witnessed remarkable advancements.
Deep reservoir computing is energy efficient but exhibits limitations in performance when compared to more resource-intensive machine learning algorithms.
We propose a photonic-based deep reservoir computer and evaluate its effectiveness on different speech recognition tasks.
arXiv Detail & Related papers (2023-12-11T17:43:58Z)
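As a rough illustration of the deep reservoir idea above, the sketch below chains simulated echo-state layers so that each layer's state sequence drives the next; the photonic hardware and the speech-recognition tasks are not modelled, and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_reservoir(n_in, n_res, rho=0.9):
    """Random echo-state layer with spectral radius rho (illustrative)."""
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    return W, W_in

def run_deep_reservoir(u_seq, layers):
    """Feed the input through a chain of reservoirs: each layer's
    state sequence becomes the next layer's input sequence."""
    signal = u_seq                          # shape (T, n_in)
    for W, W_in in layers:
        x = np.zeros(W.shape[0])
        states = []
        for u in signal:
            x = np.tanh(W @ x + W_in @ u)
            states.append(x)
        signal = np.array(states)
    return signal                           # states of the last layer, (T, n_res)

layers = [make_reservoir(1, 50), make_reservoir(50, 50)]
u_seq = np.sin(np.linspace(0, 8 * np.pi, 400))[:, None]
states = run_deep_reservoir(u_seq, layers)
# A linear readout (e.g. ridge regression) would be trained on `states`.
```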
- Biologically Plausible Learning on Neuromorphic Hardware Architectures [27.138481022472]
Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories.
This work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and vice versa.
arXiv Detail & Related papers (2022-12-29T15:10:59Z)
- Simulation-Based Parallel Training [55.41644538483948]
We present our ongoing work to design a training framework that alleviates those bottlenecks.
It generates data in parallel with the training process.
We present a strategy to mitigate this bias with a memory buffer.
arXiv Detail & Related papers (2022-11-08T09:31:25Z)
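A minimal sketch of the memory-buffer strategy above, assuming a simulator thread that produces samples concurrently with training; the trainer draws minibatches from a bounded buffer that mixes several generations of data, which is what mitigates the staleness bias. All names and the toy "simulation" are illustrative.

```python
import random
import threading
import time
from collections import deque

buffer = deque(maxlen=10_000)               # bounded: old samples are evicted
lock = threading.Lock()

def simulate_sample():
    """Stand-in for an expensive simulation producing (input, target)."""
    x = random.random()
    return x, 2.0 * x

def producer(n_samples):
    """Runs concurrently with training, filling the buffer."""
    for _ in range(n_samples):
        sample = simulate_sample()
        with lock:
            buffer.append(sample)

def trainer(n_steps, batch_size=32):
    """Draws minibatches mixing several generations of simulated data."""
    done = 0
    while done < n_steps:
        with lock:
            ready = len(buffer) >= batch_size
            batch = random.sample(list(buffer), batch_size) if ready else None
        if batch is None:
            time.sleep(0.001)               # wait for the simulator to catch up
            continue
        # update_model(batch) would go here; omitted in this sketch
        done += 1

sim = threading.Thread(target=producer, args=(50_000,))
sim.start()
trainer(1_000)
sim.join()
```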
- PARTIME: Scalable and Parallel Processing Over Time with Deep Neural Networks [68.96484488899901]
We present PARTIME, a library designed to speed up neural networks whenever data is continuously streamed over time.
PARTIME starts processing each data sample as soon as it becomes available from the stream.
Experiments are performed in order to empirically compare PARTIME with classic non-parallel neural computations in online learning.
arXiv Detail & Related papers (2022-10-17T14:49:14Z)
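PARTIME's actual API is not reproduced here; the sketch below only illustrates the underlying idea of pipelining a stream through the stages of a model, each stage in its own thread, so that a new sample enters stage one as soon as it arrives while earlier samples are still in later stages.

```python
import queue
import threading

# Pipeline parallelism over a stream: each stage (layer) runs in its
# own thread, so sample t+1 can enter stage 1 while sample t is still
# being processed by stage 2. General idea only, not the PARTIME API.

def make_stage(fn, q_in, q_out):
    def run():
        while True:
            item = q_in.get()
            if item is None:                # poison pill shuts the stage down
                q_out.put(None)
                break
            q_out.put(fn(item))
    return threading.Thread(target=run)

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    make_stage(lambda x: x * 2.0, q0, q1),  # stand-in for layer 1
    make_stage(lambda x: x + 1.0, q1, q2),  # stand-in for layer 2
]
for s in stages:
    s.start()

for t in range(5):                          # samples arriving over time
    q0.put(float(t))
q0.put(None)

while (y := q2.get()) is not None:
    print(y)
```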
- Deep Q-network using reservoir computing with multi-layered readout [0.0]
Recurrent neural network (RNN) based reinforcement learning (RL) is used for learning context-dependent tasks.
An approach that introduces reservoir computing into the replay-memory framework has been proposed, which trains an agent without backpropagation through time (BPTT).
This paper shows that the performance of this method improves by using a multi-layered neural network for the readout layer.
arXiv Detail & Related papers (2022-03-03T00:32:55Z)
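A minimal sketch of the architecture described above, assuming a fixed random reservoir that summarises the observation history and a small multi-layered (MLP) readout producing Q-values. Only the readout would be trained, with ordinary backprop on stored transitions rather than BPTT; sizes and the dummy environment step are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Idea: a fixed random reservoir turns the observation history into a
# state vector, so the trainable part (the Q readout) needs no BPTT.
N_OBS, N_RES, N_ACT = 4, 100, 2
W = rng.normal(0, 1, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_OBS))

def reservoir_step(x, obs):
    return np.tanh(W @ x + W_in @ obs)

# Multi-layered readout: a small MLP mapping reservoir state -> Q-values.
# Only these weights are trained, with ordinary (non-temporal) backprop.
W1 = rng.normal(0, 0.1, (64, N_RES)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (N_ACT, 64)); b2 = np.zeros(N_ACT)

def q_values(x):
    h = np.maximum(0.0, W1 @ x + b1)        # ReLU hidden layer
    return W2 @ h + b2

x = np.zeros(N_RES)
obs = rng.normal(size=N_OBS)                # stand-in for an environment step
x = reservoir_step(x, obs)
action = int(np.argmax(q_values(x)))
# Transitions (x, action, reward, x_next) would go to replay memory,
# and the MLP would be fit to the TD target, as in a standard DQN.
```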
- LayerPipe: Accelerating Deep Neural Network Training by Intra-Layer and Inter-Layer Gradient Pipelining and Multiprocessor Scheduling [6.549125450209931]
Training model parameters by backpropagation inherently creates feedback loops.
The proposed system, referred to as LayerPipe, reduces the number of clock cycles required for training.
arXiv Detail & Related papers (2021-08-14T23:51:00Z)
- On the Utility of Gradient Compression in Distributed Training Systems [9.017890174185872]
We evaluate the efficacy of gradient compression methods and compare their scalability with optimized implementations of synchronous data-parallel SGD.
Surprisingly, we observe that due to computation overheads introduced by gradient compression, the net speedup over vanilla data-parallel training is marginal, if not negative.
arXiv Detail & Related papers (2021-02-28T15:58:45Z)
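The paper evaluates several compression schemes; as one concrete representative, here is a minimal sketch of top-k gradient sparsification with error feedback. The selection work in `topk_compress` illustrates the kind of computation overhead the paper finds can cancel the communication savings.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries of the gradient."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, values, size):
    out = np.zeros(size)
    out[idx] = values
    return out

# Error feedback: the part of the gradient that was dropped is added
# back before the next compression step, so nothing is lost on average.
residual = np.zeros(1000)

def compress_with_feedback(grad, k=50):
    global residual
    corrected = grad + residual
    idx, values = topk_compress(corrected, k)
    sent = topk_decompress(idx, values, corrected.size)
    residual = corrected - sent
    return idx, values                      # what a worker would transmit

g = np.random.default_rng(3).normal(size=1000)
idx, values = compress_with_feedback(g)
```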
- Random pattern and frequency generation using a photonic reservoir computer with output feedback [3.0395687958102937]
Reservoir computing is a bio-inspired computing paradigm for processing time-dependent signals.
We demonstrate the first opto-electronic reservoir computer with output feedback and test it on two examples of time series generation tasks.
arXiv Detail & Related papers (2020-12-19T07:26:32Z)
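A minimal sketch of the output-feedback idea, assuming a simulated reservoir whose readout value is fed back as the next input so that the system generates a waveform autonomously. The readout weights are left as an untrained placeholder; in practice they would first be fitted in open loop (teacher forcing) on the target signal.

```python
import numpy as np

rng = np.random.default_rng(4)

# Output feedback: after training, the readout's own output is fed
# back as the reservoir input, so the system generates a signal
# autonomously instead of just transforming one.
N = 100
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_fb = rng.uniform(-0.5, 0.5, N)

def generate(w_out, x0, n_steps):
    """Run the closed loop: state -> readout -> feedback input."""
    x, y = x0.copy(), 0.0
    outputs = []
    for _ in range(n_steps):
        x = np.tanh(W @ x + W_fb * y)       # the output y replaces the input
        y = w_out @ x                       # trained linear readout
        outputs.append(y)
    return np.array(outputs)

w_out = rng.normal(0, 0.1, N)               # placeholder, untrained
signal = generate(w_out, np.zeros(N), 200)
```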
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston house-price prediction and of training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
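The crosspoint array performs the regression physically in a single step; its digital counterpart is the closed-form least-squares solution, sketched below on synthetic data (the Boston-housing and MNIST experiments are not reproduced).

```python
import numpy as np

# A crosspoint resistive array can solve a linear system in one
# physical step. The digital equivalent of such one-step regression
# is the closed-form least-squares solution (illustrative only).
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 13))              # synthetic features
w_true = rng.normal(size=13)
y = X @ w_true + 0.1 * rng.normal(size=200)

# Normal equations w = (X^T X)^{-1} X^T y, computed in "one step"
# rather than by iterative gradient descent.
w = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(w, w_true, atol=0.1))
```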
- Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
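A minimal sketch of the Jacobi variant of the idea above: treat all layer activations as unknowns of a fixed-point system and update them simultaneously in each sweep, which is parallelizable across layers. Consistent with the paper's guarantee, at most L sweeps reproduce the exact sequential result for an L-layer network; the toy network here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(6)

# Feedforward evaluation s_i = f_i(s_{i-1}) viewed as a fixed-point
# problem: initialize every layer's activation, then update all layers
# simultaneously (Jacobi iteration). Each sweep is parallelizable, and
# at most L sweeps reproduce the sequential result exactly.
L_LAYERS = 5
Ws = [rng.normal(0, 0.5, (8, 8)) for _ in range(L_LAYERS)]
f = [lambda s, W=W: np.tanh(W @ s) for W in Ws]

def jacobi_forward(x, max_sweeps=L_LAYERS):
    s = [np.zeros(8) for _ in range(L_LAYERS)]   # guesses for all layers
    for _ in range(max_sweeps):
        prev = [x] + s[:-1]
        s_new = [f[i](prev[i]) for i in range(L_LAYERS)]  # parallel in principle
        if all(np.allclose(a, b) for a, b in zip(s, s_new)):
            break                           # fixed point reached early
        s = s_new
    return s[-1]

def sequential_forward(x):
    s = x
    for fi in f:
        s = fi(s)
    return s

x = rng.normal(size=8)
print(np.allclose(jacobi_forward(x), sequential_forward(x)))
```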
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.