A Riemannian Approach to the Lindbladian Dynamics of a Locally Purified Tensor Network
- URL: http://arxiv.org/abs/2409.08127v1
- Date: Thu, 12 Sep 2024 15:16:15 GMT
- Title: A Riemannian Approach to the Lindbladian Dynamics of a Locally Purified Tensor Network
- Authors: Emiliano Godinez-Ramirez, Richard Milbradt, Christian B. Mendl
- Abstract summary: We propose a framework for implementing Lindbladian dynamics in many-body open quantum systems with nearest-neighbor couplings.
In this work, we leverage the gauge freedom inherent in the Kraus representation of quantum channels to improve the splitting error.
We validate our approach using two nearest-neighbor noise models and achieve an improvement of orders of magnitude compared to other positivity-preserving schemes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor networks offer a valuable framework for implementing Lindbladian dynamics in many-body open quantum systems with nearest-neighbor couplings. In particular, a tensor network ansatz known as the Locally Purified Density Operator employs the local purification of the density matrix to guarantee the positivity of the state at all times. Within this framework, the dissipative evolution utilizes the Trotter-Suzuki splitting, yielding a second-order approximation error. However, due to the Lindbladian dynamics' nature, employing higher-order schemes results in non-physical quantum channels. In this work, we leverage the gauge freedom inherent in the Kraus representation of quantum channels to improve the splitting error. To this end, we formulate an optimization problem on the Riemannian manifold of isometries and find a solution via the second-order trust-region algorithm. We validate our approach using two nearest-neighbor noise models and achieve an improvement of orders of magnitude compared to other positivity-preserving schemes. In addition, we demonstrate the usefulness of our method as a compression scheme, helping to control the exponential growth of computational resources, which thus far has limited the use of the locally purified ansatz.
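The paper solves its splitting-error optimization with a second-order trust-region algorithm on the Riemannian manifold of isometries (the Stiefel manifold). The following is a minimal first-order sketch of that style of optimization, not the paper's method: it projects a Euclidean gradient onto the tangent space at an isometry V and retracts back to the manifold via a polar decomposition. The toy cost function is hypothetical, chosen only so the gradient is simple.

```python
import numpy as np

def polar_retract(X):
    """Retract a matrix onto the Stiefel manifold of isometries (V^T V = I)."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

def riemannian_gd(egrad, V0, lr=0.2, steps=300):
    """First-order Riemannian descent: project the Euclidean gradient onto
    the tangent space at V, take a step, then retract back to the manifold."""
    V = V0
    for _ in range(steps):
        G = egrad(V)
        sym = (V.T @ G + G.T @ V) / 2   # symmetric part of V^T G
        rgrad = G - V @ sym             # tangent-space projection at V
        V = polar_retract(V - lr * rgrad)
    return V

# Toy cost (illustrative only): f(V) = ||V - T||_F^2 / 2 for a fixed
# target isometry T, whose Euclidean gradient is simply V - T.
rng = np.random.default_rng(0)
T = polar_retract(rng.standard_normal((6, 3)))
V0 = polar_retract(rng.standard_normal((6, 3)))
V = riemannian_gd(lambda V: V - T, V0)
```

In practice a trust-region solver such as the one used in the paper would replace the plain gradient step, but the manifold ingredients (tangent projection and retraction) are the same.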
Related papers
- Cons-training tensor networks [2.8834278113855896]
We introduce a novel family of tensor networks, termed constrained matrix product states (MPS).
These networks incorporate exactly arbitrary discrete linear constraints, including inequalities, into sparse block structures.
These networks are particularly tailored for modeling distributions with support strictly over the feasible space.
arXiv Detail & Related papers (2024-05-15T00:13:18Z) - Tensor Network Representation and Entanglement Spreading in Many-Body Localized Systems: A Novel Approach [0.0]
A novel method has been devised to compute the Local Integrals of Motion for a one-dimensional many-body localized system.
A class of optimal unitary transformations is deduced in a tensor-network formalism to diagonalize the Hamiltonian of the specified system.
The method was assessed and found to be fast and nearly exact.
arXiv Detail & Related papers (2023-12-13T14:28:45Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
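For context, the classical (non-unfolded) ISTA iteration that such networks unroll alternates a gradient step with soft-thresholding. A minimal sketch, with a toy sparse-recovery instance whose dimensions and sparsity are chosen purely for illustration:

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, steps=2000):
    # Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by proximal gradient.
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

# Toy sparse-recovery instance (illustrative only).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = 1.0
y = A @ x_true
x = ista(A, y, lam=0.01)
```

Unfolding replaces the fixed step size 1/L and threshold lam/L with learned, layer-dependent parameters; the smooth soft-thresholding studied in the paper replaces the kink of the proximal operator above.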
arXiv Detail & Related papers (2023-09-12T13:03:47Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion: the need to feed whole datasets to the unrolled optimizers, and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Binarizing Sparse Convolutional Networks for Efficient Point Cloud Analysis [93.55896765176414]
We propose binary sparse convolutional networks called BSC-Net for efficient point cloud analysis.
We employ differentiable search strategies to discover the optimal positions for active site matching in the shifted sparse convolution.
Our BSC-Net achieves significant improvement upon our strong baseline and outperforms state-of-the-art network binarization methods.
arXiv Detail & Related papers (2023-03-27T13:47:06Z) - Lower Bounding Ground-State Energies of Local Hamiltonians Through the Renormalization Group [0.0]
We show how to formulate a tractable convex relaxation of the set of feasible local density matrices of a quantum system.
The coarse-graining maps of the underlying renormalization procedure serve to eliminate a vast number of those constraints.
This can be used to obtain rigorous lower bounds on the ground state energy of arbitrary local Hamiltonians.
arXiv Detail & Related papers (2022-12-06T14:39:47Z) - Learning Representation for Bayesian Optimization with Collision-free Regularization [13.476552258272402]
Large-scale, high-dimensional, and non-stationary datasets are common in real-world scenarios.
Recent works attempt to handle such input by applying neural networks ahead of the classical Gaussian process to learn a latent representation.
We show that even with proper network design, such learned representation often leads to collision in the latent space.
We propose LOCo, an efficient deep Bayesian optimization framework which employs a novel regularizer to reduce the collision in the learned latent space.
arXiv Detail & Related papers (2022-03-16T14:44:16Z) - Convex Analysis of the Mean Field Langevin Dynamics [49.66486092259375]
A convergence rate analysis of the mean field Langevin dynamics is presented.
The distribution $p_q$ associated with the dynamics allows us to develop a convergence theory parallel to classical results in convex optimization.
arXiv Detail & Related papers (2022-01-25T17:13:56Z) - A Multisite Decomposition of the Tensor Network Path Integrals [0.0]
We extend the tensor network path integral (TNPI) framework to efficiently simulate quantum systems with local dissipative environments.
The MS-TNPI method is useful for studying a variety of extended quantum systems coupled with solvents.
arXiv Detail & Related papers (2021-09-20T17:55:53Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
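The paper's specific stepsize rules are not reproduced here, but the building block they modify, a norm-clipped stochastic gradient step, can be sketched as follows; the toy objective and noise model are hypothetical, chosen to mimic a heavy-tailed setting:

```python
import numpy as np

def clipped_sgd_step(x, grad, lr, clip):
    # Scale the stochastic gradient down to norm `clip` before the update,
    # which bounds the effect of heavy-tailed noise on any single step.
    g_norm = np.linalg.norm(grad)
    if g_norm > clip:
        grad = grad * (clip / g_norm)
    return x - lr * grad

# Toy run on f(x) = ||x||^2 / 2 with heavy-tailed (Cauchy) gradient noise;
# unclipped SGD would take arbitrarily large steps under this noise.
rng = np.random.default_rng(2)
x = np.full(5, 10.0)
for _ in range(5000):
    noisy_grad = x + rng.standard_cauchy(5)
    x = clipped_sgd_step(x, noisy_grad, lr=0.01, clip=5.0)
```

Clipping is what makes high-probability guarantees possible here: the Cauchy noise has no finite mean, yet each clipped step moves at most `lr * clip`.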
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic to nonconservative and in particular dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
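A minimal example of the kind of dissipative (conformal) symplectic discretization discussed in the last entry, applied to gradient-based optimization of a quadratic. The particular splitting below is one common illustrative choice, not necessarily the paper's exact scheme:

```python
import numpy as np

def conformal_leapfrog_step(q, p, grad_f, h, gamma):
    # One step for the damped Hamiltonian flow q' = p, p' = -grad f(q) - gamma p:
    # exact damping half-step, symplectic kick, drift, damping half-step.
    p = np.exp(-gamma * h / 2) * p
    p = p - h * grad_f(q)   # kick: momentum update from the gradient
    q = q + h * p           # drift: position update from the momentum
    p = np.exp(-gamma * h / 2) * p
    return q, p

# Toy objective f(q) = q^2 / 2; the iterates behave like heavy-ball momentum
# and decay toward the minimizer at a geometric rate set by gamma and h.
q, p = 1.0, 0.0
for _ in range(300):
    q, p = conformal_leapfrog_step(q, p, lambda q: q, h=0.1, gamma=1.0)
```

The point of such schemes is that the damping is handled exactly while the conservative part stays symplectic, which is what preserves convergence rates up to a controlled error.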
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.