Adapting reservoir computing to solve the Schr\"odinger equation
- URL: http://arxiv.org/abs/2202.06130v1
- Date: Sat, 12 Feb 2022 19:28:11 GMT
- Title: Adapting reservoir computing to solve the Schr\"odinger equation
- Authors: L. Domingo, J. Borondo and F. Borondo
- Abstract summary: Reservoir computing is a machine learning algorithm that excels at predicting the evolution of time series.
We adapt this methodology to integrate the time-dependent Schr\"odinger equation, propagating an initial wavefunction in time.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Reservoir computing is a machine learning algorithm that excels at predicting
the evolution of time series, in particular, dynamical systems. Moreover, it
has also shown superb performance at solving partial differential equations. In
this work, we adapt this methodology to integrate the time-dependent
Schr\"odinger equation, propagating an initial wavefunction in time. Since such
wavefunctions are complex-valued, high-dimensional arrays, the reservoir
computing formalism needs to be extended to cope with complex-valued data.
Furthermore, we propose a multi-step learning strategy that avoids overfitting
the training data. We illustrate the performance of our adapted reservoir
computing method by application to four standard problems in molecular
vibrational dynamics.
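As a rough illustration of the extension the abstract describes, the following is a minimal complex-valued echo state network in NumPy. All sizes, scalings, the split-tanh nonlinearity, and the ridge readout are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 32    # reservoir size and wavefunction grid size (toy values)

# Complex-valued random weights: the usual real RC matrices are simply
# promoted to complex entries so the reservoir can carry a wavefunction.
W_in = 0.1 * (rng.standard_normal((n_res, n_in))
              + 1j * rng.standard_normal((n_res, n_in)))
W = rng.standard_normal((n_res, n_res)) + 1j * rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def step(state, psi):
    """One reservoir update driven by the current wavefunction sample."""
    z = W @ state + W_in @ psi
    # "Split" tanh applied to real and imaginary parts separately, a common
    # bounded choice for complex-valued networks (complex tanh has poles).
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def fit_readout(states, targets, ridge=1e-6):
    """Complex ridge regression from reservoir states to target wavefunctions."""
    S, Y = np.asarray(states), np.asarray(targets)
    A = S.conj().T @ S + ridge * np.eye(n_res)    # Hermitian, positive definite
    return np.linalg.solve(A, S.conj().T @ Y).T   # shape (n_in, n_res)
```

In a propagation setting, the readout would be trained to map the reservoir state at time t to the wavefunction at t + dt, and the prediction fed back as the next input.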
Related papers
- Solving Fractional Differential Equations on a Quantum Computer: A Variational Approach [0.1492582382799606]
We introduce an efficient variational hybrid quantum-classical algorithm designed for solving Caputo time-fractional partial differential equations.
Our results indicate that solution fidelity is insensitive to the fractional index and that gradient evaluation cost scales economically with the number of time steps.
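For context on the operator this work targets: the Caputo time-fractional derivative can be discretized classically with the standard L1 scheme. This sketch only illustrates the operator itself; it is not the paper's variational quantum algorithm.

```python
import math

def caputo_l1(u, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1),
    evaluated at the last point of the uniformly sampled trajectory u.
    The scheme is exact for linear u, since the b_k weights telescope."""
    n = len(u) - 1
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)]
    coef = dt ** (-alpha) / math.gamma(2.0 - alpha)
    return coef * sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(n))
```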
arXiv Detail & Related papers (2024-06-13T02:27:16Z) - Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z) - Iterative Sketching for Secure Coded Regression [66.53950020718021]
We propose methods for speeding up distributed linear regression.
Specifically, we randomly rotate the basis of the system of equations and then subsample blocks, to simultaneously secure the information and reduce the dimension of the regression problem.
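The rotate-then-subsample idea can be sketched as follows; this is a toy version with single-row "blocks" and no coding, not the paper's secure scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def sketched_lstsq(A, b, keep):
    """Rotate the system by a random orthogonal matrix, then keep a random
    subset of rows before solving the smaller least-squares problem."""
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal rotation
    A_rot, b_rot = Q @ A, Q @ b                       # rotation preserves the solution
    idx = rng.choice(n, size=keep, replace=False)     # subsample -> reduced dimension
    x, *_ = np.linalg.lstsq(A_rot[idx], b_rot[idx], rcond=None)
    return x
```

The rotation spreads information across all rows (so each retained row reveals little about any single original equation), while subsampling shrinks the problem.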
arXiv Detail & Related papers (2023-08-08T11:10:42Z) - Controlling dynamical systems to complex target states using machine learning: next-generation vs. classical reservoir computing [68.8204255655161]
Controlling nonlinear dynamical systems using machine learning makes it possible to drive systems not only into simple behavior like periodicity but also into more complex, arbitrary dynamics.
We show first that classical reservoir computing excels at this task.
We then compare these results, for different amounts of training data, with an alternative setup that uses next-generation reservoir computing instead.
While the two deliver comparable performance for usual amounts of training data, next-generation RC significantly outperforms classical RC in situations where only very limited data is available.
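One reason next-generation RC needs less data is that it replaces the random recurrent network with an explicit, deterministic feature vector (a constant, delayed states, and their low-order monomials) under a purely linear readout. A minimal sketch of that feature construction, with illustrative choices of delay depth and polynomial order:

```python
import numpy as np
from itertools import combinations_with_replacement

def ngrc_features(x_hist):
    """Build a next-generation RC feature vector from the last k state
    vectors: a constant, the delayed states themselves, and all quadratic
    monomials of their entries. A linear readout is then fit on top; no
    random recurrent reservoir is involved."""
    lin = np.concatenate(x_hist)                                 # delayed linear terms
    quad = [a * b for a, b in combinations_with_replacement(lin, 2)]
    return np.concatenate(([1.0], lin, quad))
```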
arXiv Detail & Related papers (2023-07-14T07:05:17Z) - Optimization of a Hydrodynamic Computational Reservoir through Evolution [58.720142291102135]
We interface with a model of a hydrodynamic system, under development by a startup, as a computational reservoir.
We optimized the readout times and how inputs are mapped to the wave amplitude or frequency using an evolutionary search algorithm.
Applying evolutionary methods to this reservoir system substantially improved separability on an XNOR task, in comparison to implementations with hand-selected parameters.
arXiv Detail & Related papers (2023-04-20T19:15:02Z) - Locally Regularized Neural Differential Equations: Some Black Boxes Were Meant to Remain Closed! [3.222802562733787]
Implicit layer deep learning techniques, like Neural Differential Equations, have become an important modeling framework.
We develop two sampling strategies to trade off between performance and training time.
Our method reduces the number of function evaluations to 0.556-0.733x of the baseline and accelerates predictions by 1.3-2x.
arXiv Detail & Related papers (2023-03-03T23:31:15Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the supervised time points and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Accelerating Real-Time Coupled Cluster Methods with Single-Precision Arithmetic and Adaptive Numerical Integration [3.469636229370366]
We show that single-precision arithmetic reduces both the storage and multiplicative costs of the real-time simulation by approximately a factor of two.
Additional speedups of up to a factor of 14 in test simulations of water clusters are obtained via a straightforward implementation.
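The factor-of-two storage claim is easy to verify in isolation: halving the floating-point word size exactly halves the memory footprint of the complex amplitudes (array sizes here are arbitrary toy values).

```python
import numpy as np

n = 512
amps64 = np.zeros((n, n), dtype=np.complex128)  # double-precision complex amplitudes
amps32 = amps64.astype(np.complex64)            # single-precision copy

# Halving the word size halves the storage and, on most hardware, roughly
# halves the memory traffic of the multiplications as well.
storage_ratio = amps64.nbytes / amps32.nbytes
```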
arXiv Detail & Related papers (2022-05-10T21:21:49Z) - Fast and differentiable simulation of driven quantum systems [58.720142291102135]
We introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical methods.
We show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.
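For reference, the Dyson expansion on which such semi-analytic propagators are built is (with $\hbar = 1$ and time ordering made explicit by the nested integration limits):

$$U(t) = \mathbb{1} + (-i)\int_0^t \mathrm{d}t_1\, H(t_1) + (-i)^2 \int_0^t \mathrm{d}t_1 \int_0^{t_1} \mathrm{d}t_2\, H(t_1)\, H(t_2) + \cdots$$

Schemes of this kind typically truncate the series at low order and evaluate the nested integrals semi-analytically rather than stepping the equation of motion numerically.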
arXiv Detail & Related papers (2020-12-16T21:43:38Z) - Enhancement of shock-capturing methods via machine learning [0.0]
We develop an improved finite-volume method for simulating PDEs with discontinuous solutions.
We train a neural network to improve the results of a fifth-order WENO method.
We find that our method outperforms WENO in simulations where the numerical solution becomes overly diffused.
arXiv Detail & Related papers (2020-02-06T21:51:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.