Bayesian optimisation of large-scale photonic reservoir computers
- URL: http://arxiv.org/abs/2004.02535v1
- Date: Mon, 6 Apr 2020 10:11:03 GMT
- Title: Bayesian optimisation of large-scale photonic reservoir computers
- Authors: Piotr Antonik, Nicolas Marsal, Daniel Brunner, Damien Rontani
- Abstract summary: Reservoir computing is a growing paradigm for simplified training of recurrent neural networks.
Recent works in the field focus on large-scale photonic systems with tens of thousands of physical nodes and arbitrary interconnections.
- Score: 0.774229787612056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Introduction. Reservoir computing is a growing paradigm for simplified
training of recurrent neural networks, with a high potential for hardware
implementations. Numerous experiments in optics and electronics yield
comparable performance to digital state-of-the-art algorithms. Many of the most
recent works in the field focus on large-scale photonic systems, with tens of
thousands of physical nodes and arbitrary interconnections. While this trend
significantly expands the potential applications of photonic reservoir
computing, it also complicates the optimisation of the high number of
hyper-parameters of the system. Methods. In this work, we propose the use of
Bayesian optimisation for efficient exploration of the hyper-parameter space in
a minimum number of iterations. Results. We test this approach on a previously
reported large-scale experimental system, compare it to the commonly used grid
search, and report notable improvements in performance and the number of
experimental iterations required to optimise the hyper-parameters. Conclusion.
Bayesian optimisation thus has the potential to become the standard method for
tuning the hyper-parameters in photonic reservoir computing.
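As a rough illustration of the approach (not the authors' implementation), a minimal Gaussian-process Bayesian optimisation loop over a single toy hyper-parameter might look like the sketch below; the RBF kernel length scale, the expected-improvement acquisition, and the toy objective are all illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf_kernel(a, b, length=0.2):
    """Squared-exponential kernel between 1-D sample arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, jitter=1e-6):
    """Gaussian-process posterior mean and std at the query points."""
    K = rbf_kernel(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    k_star = rbf_kernel(x_query, x_obs)
    mu = k_star @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(k_star * np.linalg.solve(K, k_star.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, y_best):
    """EI acquisition for minimisation: favours low mean or high uncertainty."""
    z = (y_best - mu) / sigma
    cdf = np.array([0.5 * (1.0 + erf(zi / sqrt(2.0))) for zi in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (y_best - mu) * cdf + sigma * pdf

def bayes_opt(objective, n_init=3, n_iter=10, seed=0):
    """Minimise a scalar objective over [0, 1] with a GP surrogate + EI."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=n_init)
    y = np.array([objective(v) for v in x])
    grid = np.linspace(0.0, 1.0, 201)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
        x = np.append(x, x_next)
        y = np.append(y, objective(x_next))
    return x[np.argmin(y)], y.min()
```

The contrast with grid search is the budget: the loop above spends 13 evaluations, whereas a uniform 13-point grid over the same interval has a fixed resolution of roughly 0.08 and no ability to concentrate samples near the optimum.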
Related papers
- Bayesian Optimization for Hyperparameters Tuning in Neural Networks [0.0]
Bayesian Optimization is a derivative-free global optimization method suitable for black-box functions with continuous inputs and limited evaluation budgets.
This study investigates the application of BO for the hyperparameter tuning of neural networks, specifically targeting the enhancement of Convolutional Neural Networks (CNNs).
Experimental outcomes reveal that BO effectively balances exploration and exploitation, converging rapidly towards optimal settings for CNN architectures.
This approach underlines the potential of BO in automating neural network tuning, contributing to improved accuracy and computational efficiency in machine learning pipelines.
arXiv Detail & Related papers (2024-10-29T09:23:24Z)
- Benchmarking Optimizers for Qumode State Preparation with Variational Quantum Algorithms [10.941053143198092]
There has been a growing interest in qumodes due to advancements in the field and their potential applications.
This paper aims to bridge this gap by providing performance benchmarks of various parameters used in state preparation with Variational Quantum Algorithms.
arXiv Detail & Related papers (2024-05-07T17:15:58Z)
- Model-aware reinforcement learning for high-performance Bayesian experimental design in quantum metrology [0.5461938536945721]
Quantum sensors offer control flexibility during estimation, allowing the experimenter to manipulate various parameters.
We introduce a versatile procedure capable of optimizing a wide range of problems in quantum metrology, estimation, and hypothesis testing.
We combine model-aware reinforcement learning (RL) with Bayesian estimation based on particle filtering.
arXiv Detail & Related papers (2023-12-28T12:04:15Z)
- Deep Bayesian Experimental Design for Quantum Many-Body Systems [0.0]
We show how this approach holds promise for adaptive measurement strategies to characterize present-day quantum technology platforms.
In particular, we focus on arrays of coupled cavities and qubit arrays.
arXiv Detail & Related papers (2023-06-26T08:40:14Z)
- Convergence and scaling of Boolean-weight optimization for hardware reservoirs [0.0]
We analytically derive the scaling laws for highly efficient Coordinate Descent applied to optimize the readout layer of a random, recurrently connected neural network.
Our results perfectly reproduce the convergence and scaling of a large-scale photonic reservoir implemented in a proof-of-concept experiment.
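The Coordinate Descent over Boolean readout weights can be sketched as follows. This is a hedged toy version: the +/-1 weight encoding, the mean-squared error metric, and the random reservoir states are illustrative assumptions, not the paper's exact hardware setup:

```python
import numpy as np

def boolean_coordinate_descent(states, target, seed=0):
    """Greedy coordinate descent over a Boolean (+/-1) readout weight
    vector: flip one weight at a time and keep the flip only if the
    mean-squared readout error decreases; stop when a full sweep over
    all weights yields no improvement."""
    rng = np.random.default_rng(seed)
    w = rng.choice([-1.0, 1.0], size=states.shape[1])
    mse = lambda w: np.mean((states @ w - target) ** 2)
    err, improved = mse(w), True
    while improved:
        improved = False
        for k in range(len(w)):
            w[k] = -w[k]          # trial flip of one Boolean weight
            trial = mse(w)
            if trial < err:
                err, improved = trial, True
            else:
                w[k] = -w[k]      # revert: the flip did not help
    return w, err
```

Because each candidate move touches a single weight, the cost per update is one readout evaluation, which is what makes this scheme attractive for hardware reservoirs where weights are physically binary.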
arXiv Detail & Related papers (2023-05-13T12:15:25Z)
- Optimization of a Hydrodynamic Computational Reservoir through Evolution [58.720142291102135]
We interface with a model of a hydrodynamic system, under development by a startup, as a computational reservoir.
We optimized the readout times and how inputs are mapped to the wave amplitude or frequency using an evolutionary search algorithm.
Applying evolutionary methods to this reservoir system substantially improved separability on an XNOR task, in comparison to implementations with hand-selected parameters.
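An evolutionary search of the kind described above can be sketched generically. This is a hedged stand-in: the two-parameter fitness below is a placeholder for the XNOR separability score, and the mutate-and-select scheme is a minimal (1, lambda)-style strategy, not the paper's actual algorithm:

```python
import random

def evolve(fitness, init, sigma=0.1, pop=8, generations=40, seed=0):
    """Minimal evolutionary search over a real-valued parameter vector:
    each generation mutates the current best with Gaussian noise and
    keeps the fittest child found so far."""
    rng = random.Random(seed)
    best = list(init)
    best_fit = fitness(best)
    for _ in range(generations):
        children = [[g + rng.gauss(0.0, sigma) for g in best]
                    for _ in range(pop)]
        for child in children:
            f = fitness(child)
            if f > best_fit:        # maximisation: keep only improvements
                best, best_fit = child, f
    return best, best_fit
```

For a reservoir, `init` would hold the readout time and the input-to-amplitude (or frequency) mapping parameters, and `fitness` would be the measured task score.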
arXiv Detail & Related papers (2023-04-20T19:15:02Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
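The sequential, block-by-block idea can be illustrated with a generic classical sketch. This is a hedged analogue using finite-difference gradient steps on an ordinary loss function; PECT's actual variational quantum subroutines are not reproduced here:

```python
import numpy as np

def block_sequential_minimize(loss, x0, block_size=2, sweeps=3, step=0.1):
    """Optimise one small block of parameters at a time while the rest
    stay frozen, sweeping over the blocks in sequence (central
    finite-difference gradient descent per block)."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(sweeps):
        for start in range(0, n, block_size):
            idx = slice(start, min(start + block_size, n))
            for _ in range(20):           # a few descent steps on this block
                g = np.zeros(idx.stop - idx.start)
                eps = 1e-5
                for j, k in enumerate(range(idx.start, idx.stop)):
                    x[k] += eps
                    up = loss(x)
                    x[k] -= 2 * eps
                    down = loss(x)
                    x[k] += eps           # restore the parameter
                    g[j] = (up - down) / (2 * eps)
                x[idx] -= step * g
    return x, loss(x)
```

The point of the block structure is that each inner optimisation is low-dimensional, which keeps every individual search cheap even when the full parameter vector is large.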
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Optimal Bayesian experimental design for subsurface flow problems [77.34726150561087]
We propose a novel approach for the development of a polynomial chaos expansion (PCE) surrogate model for the design utility function.
This novel technique enables the derivation of a reasonable quality response surface for the targeted objective function with a computational budget comparable to several single-point evaluations.
arXiv Detail & Related papers (2020-08-10T09:42:59Z)
- An Asymptotically Optimal Multi-Armed Bandit Algorithm and Hyperparameter Optimization [48.5614138038673]
We propose an efficient and robust bandit-based algorithm called Sub-Sampling (SS) for hyperparameter search evaluation.
We also develop a novel hyperparameter optimization algorithm called BOSS.
Empirical studies validate our theoretical arguments of SS and demonstrate the superior performance of BOSS on a number of applications.
arXiv Detail & Related papers (2020-07-11T03:15:21Z)
- Large Batch Training Does Not Need Warmup [111.07680619360528]
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications.
In this paper, we propose a novel Complete Layer-wise Adaptive Rate Scaling (CLARS) algorithm for large-batch training.
Based on our analysis, we bridge the gap and illustrate the theoretical insights for three popular large-batch training techniques.
arXiv Detail & Related papers (2020-02-04T23:03:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.