Machine Learning Regression for Operator Dynamics
- URL: http://arxiv.org/abs/2102.11868v1
- Date: Tue, 23 Feb 2021 18:58:04 GMT
- Title: Machine Learning Regression for Operator Dynamics
- Authors: Justin Reyes, Sayandip Dhara, Eduardo R. Mucciolo
- Abstract summary: We present a solution for efficiently extending the computation of expectation values to long time intervals.
We utilize a multi-layer perceptron (MLP) model as a tool for regression on expectation values calculated within the regime of short time intervals.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Determining the dynamics of the expectation values for operators acting on a
quantum many-body (QMB) system is a challenging task. Matrix product states
(MPS) have traditionally been the "go-to" models for these systems because
calculating expectation values in this representation can be done with relative
simplicity and high accuracy. However, such calculations can become
computationally costly when extended to long times. Here, we present a solution
for efficiently extending the computation of expectation values to long time
intervals. We utilize a multi-layer perceptron (MLP) model as a tool for
regression on MPS expectation values calculated within the regime of short time
intervals. With this model, the computational cost of generating long-time
dynamics is significantly reduced, while maintaining a high accuracy. These
results are demonstrated with operators relevant to quantum spin models in one
spatial dimension.
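As a rough illustration of the regression scheme described above, the sketch below trains a small MLP on a short-time window of expectation values and then extrapolates autoregressively to long times. It is a minimal sketch, not the authors' exact setup: scikit-learn's MLPRegressor, the window length, the network width, and the synthetic damped-oscillation series (standing in for MPS-computed expectation values) are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the authors' exact setup):
# fit an MLP to short-time expectation values, then extrapolate.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Surrogate for MPS-computed <O(t)> on a short time interval
# (a damped oscillation stands in for real MPS data).
t_short = np.linspace(0.0, 10.0, 400)
y_short = np.exp(-0.05 * t_short) * np.cos(2.0 * t_short)

# Build (window of past values -> next value) regression pairs.
W = 20  # assumed window length
X = np.stack([y_short[i:i + W] for i in range(len(y_short) - W)])
y = y_short[W:]

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X, y)

# Autoregressive rollout: feed predictions back in to reach long times
# without any further (costly) MPS computation.
window = list(y_short[-W:])
long_time = []
for _ in range(800):
    nxt = float(mlp.predict(np.asarray(window[-W:]).reshape(1, -1))[0])
    long_time.append(nxt)
    window.append(nxt)

print("first extrapolated values:", long_time[:3])
```

The key design point is that the network only ever sees cheap, short-time data during training; the expensive MPS simulation is confined to the initial interval, and all long-time values come from the learned map.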
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the time and space complexities with respect to sequence length from cubic and quadratic, respectively, to linear.
Extensive experiments demonstrate that the resulting scalable MNN (S-MNN) matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - MixLinear: Extreme Low Resource Multivariate Time Series Forecasting with 0.1K Parameters [6.733646592789575]
Long-term Time Series Forecasting (LTSF) involves predicting long-term values by analyzing a large amount of historical time-series data to identify patterns and trends.
Transformer-based models offer high forecasting accuracy, but they are often too compute-intensive to be deployed on devices with hardware constraints.
We propose MixLinear, an ultra-lightweight time series forecasting model specifically designed for resource-constrained devices.
arXiv Detail & Related papers (2024-10-02T23:04:57Z) - Integration of Mamba and Transformer -- MAT for Long-Short Range Time Series Forecasting with Application to Weather Dynamics [7.745945701278489]
Long-short range time series forecasting is essential for predicting future trends and patterns over extended periods.
Deep learning models such as Transformers have made significant strides in advancing time series forecasting.
This article examines the advantages and disadvantages of both Mamba and Transformer models.
arXiv Detail & Related papers (2024-09-13T04:23:54Z) - Statistical and machine learning approaches for prediction of long-time excitation energy transfer dynamics [0.0]
The objective here is to demonstrate whether models such as SARIMA, CatBoost, Prophet, and convolutional and recurrent neural networks are able to bypass this requirement.
Our results suggest that the SARIMA model can serve as a computationally inexpensive yet accurate way to predict long-time dynamics (a generic sketch of this approach appears after this list).
arXiv Detail & Related papers (2022-10-25T16:50:26Z) - Deep Convolutional Architectures for Extrapolative Forecast in Time-dependent Flow Problems [0.0]
Deep learning techniques are employed to model the system dynamics of advection-dominated problems.
These models take as input a sequence of high-fidelity vector solutions for consecutive time-steps obtained from the PDEs.
Non-intrusive reduced-order modelling techniques such as deep auto-encoder networks are utilized to compress the high-fidelity snapshots.
arXiv Detail & Related papers (2022-09-18T03:45:56Z) - A comparative study of different machine learning methods for dissipative quantum dynamics [0.0]
We show that supervised machine learning algorithms can accurately and efficiently predict the long-time population dynamics of dissipative quantum systems.
We benchmarked 22 ML models on their ability to predict the long-time dynamics of a two-level quantum system linearly coupled to a harmonic bath.
arXiv Detail & Related papers (2022-07-06T03:37:24Z) - Fast and differentiable simulation of driven quantum systems [58.720142291102135]
We introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical methods.
We show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.
arXiv Detail & Related papers (2020-12-16T21:43:38Z) - Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z) - Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance across a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
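For contrast with the neural models above, the SARIMA route highlighted in the excitation-energy-transfer entry can be sketched in a few lines of generic statsmodels usage. This is a hedged illustration only: the (p, d, q) order and the synthetic stand-in series are assumptions, not the cited paper's configuration.

```python
# Generic SARIMA-style long-time extrapolation (illustrative order and
# data; not the cited paper's configuration).
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

t = np.linspace(0.0, 10.0, 200)
short_time = np.exp(-0.1 * t) * np.cos(3.0 * t)  # stand-in dynamics

# Fit on the short-time window, then forecast far beyond it.
fit = SARIMAX(short_time, order=(4, 0, 1)).fit(disp=False)
forecast = fit.forecast(steps=400)  # long-time prediction
print(forecast[:5])
```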
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.