Extracting Dynamical Maps of Non-Markovian Open Quantum Systems
- URL: http://arxiv.org/abs/2409.17051v2
- Date: Fri, 25 Oct 2024 14:29:55 GMT
- Title: Extracting Dynamical Maps of Non-Markovian Open Quantum Systems
- Authors: David J. Strachan, Archak Purkayastha, Stephen R. Clark
- Abstract summary: We show that $\hat{\Lambda}(\tau)$ arises from suddenly coupling a system to one or more thermal baths with a strength that is neither weak nor strong.
We employ the Choi-Jamiolkowski isomorphism so that $\hat{\Lambda}(\tau)$ can be fully reconstructed.
Our numerical examples of interacting spinless Fermi chains and the single impurity Anderson model demonstrate regimes where our approach can offer a significant speedup.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The most general description of quantum evolution up to a time $\tau$ is a completely positive trace-preserving map known as a dynamical map $\hat{\Lambda}(\tau)$. Here we consider $\hat{\Lambda}(\tau)$ arising from suddenly coupling a system to one or more thermal baths with a strength that is neither weak nor strong. Given no clear separation of characteristic system/bath time scales, $\hat{\Lambda}(\tau)$ is generically expected to be non-Markovian; however, we do assume the ensuing dynamics has a unique steady state, implying the baths possess a finite memory time $\tau_{\rm m}$. By combining several techniques within a tensor network framework we directly and accurately extract $\hat{\Lambda}(\tau)$ for a small number of interacting fermionic modes coupled to infinite non-interacting Fermi baths. We employ the Choi-Jamiolkowski isomorphism so that $\hat{\Lambda}(\tau)$ can be fully reconstructed from a single pure state calculation of the unitary dynamics of the system, bath and their replica auxiliary modes up to time $\tau$. From $\hat{\Lambda}(\tau)$ we also compute the time local propagator $\hat{\mathcal{L}}(\tau)$. By examining the convergence with $\tau$ of the instantaneous fixed points of these objects we establish their respective memory times $\tau^{\Lambda}_{\rm m}$ and $\tau^{\mathcal{L}}_{\rm m}$. Beyond these times, the propagator $\hat{\mathcal{L}}(\tau)$ and dynamical map $\hat{\Lambda}(\tau)$ accurately describe all the subsequent long-time relaxation dynamics up to stationarity. Our numerical examples of interacting spinless Fermi chains and the single impurity Anderson model demonstrate regimes where our approach can offer a significant speedup in determining the stationary state compared to directly simulating the long-time limit.
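As a minimal illustration of the Choi-Jamiolkowski reconstruction idea (a generic finite-dimensional sketch: an amplitude-damping channel stands in for the extracted map, and this is not the authors' tensor-network implementation), the Choi matrix of a channel determines its action on any state:

```python
import numpy as np

def kraus_apply(rho, kraus):
    """Apply a channel given by Kraus operators: Lambda(rho) = sum_k K rho K^dag."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def choi_matrix(kraus, d):
    """Choi matrix C = sum_{ij} Lambda(|i><j|) (x) |i><j| (unnormalised convention)."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            C += np.kron(kraus_apply(E, kraus), E)
    return C

def apply_from_choi(C, rho):
    """Reconstruct the channel action from the Choi matrix:
    Lambda(rho) = Tr_2[(I (x) rho^T) C]."""
    d = rho.shape[0]
    M = (np.kron(np.eye(d), rho.T) @ C).reshape(d, d, d, d)
    return np.einsum('abcb->ac', M)  # partial trace over the replica factor
```

In this convention, trace preservation of the map corresponds to the partial trace of $C$ over the first factor equalling the identity, which gives a quick sanity check on any reconstructed map.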
Related papers
- The Thermodynamic Cost of Ignorance: Thermal State Preparation with One Ancilla Qubit
We investigate a model of thermalization wherein a single ancillary qubit randomly interacts with the system to be thermalized.
This not only sheds light on the emergence of Gibbs states in nature, but also provides a routine for preparing arbitrary thermal states on a digital quantum computer.
arXiv Detail & Related papers (2025-02-05T17:50:37Z)
- Exact Solvability Of Entanglement For Arbitrary Initial State in an Infinite-Range Floquet System
We introduce an $N$-spin Floquet model with infinite-range Ising interactions.
We numerically show that $\langle S\rangle/S_{\rm Max} \rightarrow 1$ as the Ising strength deviates from $1$ for arbitrary initial states, even though the thermodynamic limit does not exist in our model.
arXiv Detail & Related papers (2024-11-25T18:55:05Z)
- Response theory for locally gapped systems
We introduce a notion of a local gap for interacting many-body quantum lattice systems.
We prove the validity of response theory and Kubo's formula for localized perturbations in such settings.
arXiv Detail & Related papers (2024-10-14T17:59:29Z)
- A shortcut to an optimal quantum linear system solver
We give a conceptually simple quantum linear system solver (QLSS) that does not use complex or difficult-to-analyze techniques.
If the solution norm $\lVert\boldsymbol{x}\rVert$ is known exactly, our QLSS requires only a single application of the kernel.
Alternatively, by reintroducing a concept from the adiabatic path-following technique, we show that $O(\kappa)$ complexity can be achieved for norm estimation.
arXiv Detail & Related papers (2024-06-17T20:54:11Z)
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z)
- On the $O(\frac{\sqrt{d}}{T^{1/4}})$ Convergence Rate of RMSProp and Its Momentum Extension Measured by $\ell_1$ Norm
This paper considers RMSProp and its momentum extension and establishes a convergence rate of $O(\frac{\sqrt{d}}{T^{1/4}})$ measured by $\frac{1}{T}\sum_{k=1}^{T}\lVert\nabla f(x_k)\rVert_1$.
Our convergence rate matches the lower bound with respect to all the coefficients except the dimension $d$.
Our convergence rate can be considered analogous to the $\frac{1}{T}\sum_{k=1}^{T}\lVert\nabla f(x_k)\rVert_2$ rate of SGD.
arXiv Detail & Related papers (2024-02-01T07:21:32Z)
- Near-continuous time Reinforcement Learning for continuous state-action spaces
We consider the Reinforcement Learning problem of controlling an unknown dynamical system to maximise the long-term average reward along a single trajectory.
Most of the literature considers system interactions that occur in discrete time and discrete state-action spaces.
We show that the celebrated optimism protocol applies when the sub-tasks (learning and planning) can be performed effectively.
arXiv Detail & Related papers (2023-09-06T08:01:17Z)
- Reward-Mixing MDPs with a Few Latent Contexts are Learnable
We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs).
Our goal is to learn a near-optimal policy that nearly maximizes the $H$ time-step cumulative rewards in such a model.
arXiv Detail & Related papers (2022-10-05T22:52:00Z)
- Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning
We analyze a training algorithm for distributed computation in which workers operate with varying communication frequencies.
In this work, we obtain a tighter convergence rate of $\mathcal{O}(\sigma^2\epsilon^{-2} + \tau_{\rm avg}\epsilon^{-1})$.
We also show that the heterogeneity term in rate is affected by the average delay within each worker.
arXiv Detail & Related papers (2022-06-16T17:10:57Z)
- Dynamics of Open Quantum Systems II, Markovian Approximation
We show that for fixed, small values of the coupling constant $\lambda$, the true reduced dynamics of the system is approximated by the Davies-Lindblad generator.
The difference between the true and the Markovian dynamics is $O(\lambda^{1/4})$ for all times.
arXiv Detail & Related papers (2021-04-30T18:09:35Z)
- Linear Time Sinkhorn Divergences using Positive Features
Solving optimal transport with an entropic regularization requires computing a $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
arXiv Detail & Related papers (2020-06-12T10:21:40Z)
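The linear-time idea in the last entry can be sketched as follows (a rough illustration with a hypothetical nonnegative feature map, not the paper's actual features): with costs $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$, the Sinkhorn kernel factors as $K = \Phi_x \Phi_y^{\top}$, so $Kv$ is computed factor-by-factor in $O((n+m)r)$ without ever materialising the $n\times m$ matrix.

```python
import numpy as np

def positive_features(X, W):
    # Hypothetical nonnegative feature map for illustration; exp keeps every
    # entry > 0, so the implied kernel K = Phi_x @ Phi_y.T is strictly positive.
    return np.exp(X @ W)

def sinkhorn_lowrank(Phi_x, Phi_y, a, b, n_iter=500):
    """Sinkhorn scalings u, v for the kernel K = Phi_x @ Phi_y.T,
    applied factor-by-factor so K is never formed."""
    u = np.ones(Phi_x.shape[0])
    v = np.ones(Phi_y.shape[0])
    for _ in range(n_iter):
        u = a / (Phi_x @ (Phi_y.T @ v))   # u = a / (K v), cost O(n r + m r)
        v = b / (Phi_y @ (Phi_x.T @ u))   # v = b / (K^T u)
    return u, v
```

The transport plan is then $P = \mathrm{diag}(u)\,K\,\mathrm{diag}(v)$; its column marginals match $b$ exactly after each $v$-update, while the row marginals converge to $a$.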
This list is automatically generated from the titles and abstracts of the papers in this site.