Tensor Network Representation and Entanglement Spreading in Many-Body
Localized Systems: A Novel Approach
- URL: http://arxiv.org/abs/2312.08170v1
- Date: Wed, 13 Dec 2023 14:28:45 GMT
- Title: Tensor Network Representation and Entanglement Spreading in Many-Body
Localized Systems: A Novel Approach
- Authors: Z. Gholami, Z. Noorinejad, M. Amini, E. Ghanbari-Adivi
- Abstract summary: A novel method has been devised to compute the Local Integrals of Motion for a one-dimensional many-body localized system.
A class of optimal unitary transformations is deduced in a tensor-network formalism to diagonalize the Hamiltonian of the specified system.
The method was assessed and found to be both fast and accurate up to the approximations involved.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A novel method has been devised to compute the Local Integrals of Motion
(LIOMs) for a one-dimensional many-body localized system. In this approach, a
class of optimal unitary transformations is deduced in a tensor-network
formalism to diagonalize the Hamiltonian of the specified system. To construct
the tensor network, we utilize the eigenstates of the subsystem Hamiltonians to
attain the desired unitary transformations. Subsequently, we optimize the
eigenstates and obtain suitably localized unitary operators that represent the
LIOMs as a tensor network. The method was assessed and found to be both fast
and accurate up to the approximations involved. Within the framework of the
introduced tensor-network representation, we examine how entanglement spreads
through the considered many-body localized system and evaluate the consequences
of the approximations employed in this approach. A key result is that, in the
proposed tensor-network approximation, if the block length exceeds the
localization length, then the entanglement entropy grows linearly in the
logarithm of time. We also demonstrate that, once the Hamiltonian has been
diagonalized by the unitary transformation built from the provided
tensor-network representation, the entanglement can be computed by considering
only two adjacent blocks.
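To make the construction above concrete, here is a minimal numerical sketch of the block-unitary idea, not the authors' exact algorithm: a disordered Heisenberg chain (a standard many-body-localization test bed) is split into blocks, each block Hamiltonian is diagonalized to give a local unitary, and the tensor product of these unitaries yields LIOM-like dressed operators and an approximate diagonalization. The model, block length, and all helper names are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout; not the authors' exact algorithm):
# block-diagonalize a disordered Heisenberg chain and inspect the results.
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-site chain."""
    return reduce(np.kron, [op if i == site else I2 for i in range(n)])

def heisenberg(fields):
    """Random-field Heisenberg chain with open boundaries."""
    n = len(fields)
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H += sum(op_at(s, i, n) @ op_at(s, i + 1, n) for s in (sx, sy, sz))
    for i in range(n):
        H += fields[i] * op_at(sz, i, n)
    return H

rng = np.random.default_rng(1)
L, b = 8, 4                        # chain length, block length (L % b == 0)
fields = rng.uniform(-6, 6, L)     # strong disorder -> localized regime
H = heisenberg(fields)

# One unitary per block from the eigenstates of that block's Hamiltonian;
# the inter-block bonds are simply dropped at this level of approximation.
Us = [np.linalg.eigh(heisenberg(fields[k*b:(k+1)*b]))[1] for k in range(L//b)]
U = reduce(np.kron, Us)

# A LIOM-like dressed operator: it commutes exactly with the block-diagonal
# part of H, and with the full H only up to the dropped bond terms.
tau0 = U @ op_at(sz, 0, L) @ U.conj().T
print("||[H, tau0]|| =", np.linalg.norm(H @ tau0 - tau0 @ H))

# Entanglement entropy of the left half of a time-evolved product state,
# sampled on a log-spaced grid where S(t) ~ log(t) growth shows up as
# roughly equal increments per decade.
E, V = np.linalg.eigh(H)
psi0 = np.zeros(2**L); psi0[0b10101010] = 1.0   # Neel-like product state
for t in np.logspace(-1, 3, 5):
    psi = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
    s = np.linalg.svd(psi.reshape(2**(L//2), -1), compute_uv=False)
    p = s[s > 1e-12]**2
    print(f"t = {t:8.1f}   S = {-np.sum(p * np.log(p)):.3f}")
```

In this toy setting the commutator norm quantifies how much the dropped inter-block bonds spoil conservation of the dressed operator, while the log-spaced time grid makes the slow, logarithmic entanglement growth characteristic of many-body localization directly visible.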
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs)
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Matrix product state fixed points of non-Hermitian transfer matrices [11.686585954351436]
We investigate the impact of gauge degrees of freedom in the virtual indices of the tensor network on the contraction process.
We show that the gauge transformation can affect the entanglement structures of the eigenstates of the transfer matrix.
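The gauge freedom referred to here is easy to exhibit directly: inserting any invertible matrix G and its inverse on each virtual bond of a matrix product state changes the local tensors while leaving the physical state untouched. A minimal sketch, with all shapes and names as illustrative assumptions:

```python
# Gauge freedom on MPS virtual indices: transformed tensors, same state.
import numpy as np

rng = np.random.default_rng(0)
d, D, n = 2, 4, 6                       # physical dim, bond dim, sites
A = [rng.normal(size=(D, d, D)) for _ in range(n)]   # (left, phys, right)

def contract(tensors):
    """Contract an open-boundary MPS with fixed boundary vectors."""
    l = np.zeros(D); l[0] = 1.0
    r = np.zeros(D); r[0] = 1.0
    psi = l
    for A_i in tensors:                 # (..., D) -> (..., d, D)
        psi = np.tensordot(psi, A_i, axes=([psi.ndim - 1], [0]))
    return np.tensordot(psi, r, axes=([psi.ndim - 1], [0])).ravel()

G = rng.normal(size=(D, D))             # any invertible gauge matrix
Ginv = np.linalg.inv(G)
B = [np.einsum('ab,bsc,cd->asd',
               Ginv if i > 0 else np.eye(D), A[i],
               G if i < n - 1 else np.eye(D)) for i in range(n)]

print(np.allclose(contract(A), contract(B)))   # True: same physical state
```

Canonical forms in MPS algorithms are precisely ways of fixing this gauge, which is how the gauge choice can reshape the apparent entanglement structure of transfer-matrix eigenstates without changing any physical expectation value.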
arXiv Detail & Related papers (2023-11-30T17:28:30Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
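As a point of reference for the entry above, here is a hedged sketch of stochastic gradient Langevin dynamics with without-replacement minibatching: each epoch shuffles the data once and then consumes disjoint minibatches. The toy least-squares problem and every hyperparameter are illustrative assumptions, not the paper's setup.

```python
# SGLD with without-replacement minibatching on a toy regression problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=256)

def grad(w, idx):
    """Minibatch gradient of the mean-squared error."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

w = np.zeros(5)
eta, T, batch = 1e-2, 1e-4, 32          # step size, temperature, batch size
for epoch in range(200):
    order = rng.permutation(len(X))     # without replacement: shuffle once,
    for s in range(0, len(X), batch):   # then walk disjoint minibatches
        idx = order[s:s + batch]
        noise = np.sqrt(2 * eta * T) * rng.normal(size=w.shape)
        w = w - eta * grad(w, idx) + noise
print("error:", np.linalg.norm(w - w_true))
```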
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - Regularized scheme of time evolution tensor network algorithms [0.0]
Regularized factorization is proposed to simulate time evolution for quantum lattice systems.
The resulting compact structure of the propagator indicates a high-order Baker-Campbell-Hausdorff series.
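The entry concerns factorizing the time-evolution propagator. For orientation, here is the standard second-order Suzuki-Trotter splitting (explicitly not the paper's regularized scheme), whose error is governed by the commutator terms of the Baker-Campbell-Hausdorff series; the matrices and step counts are illustrative assumptions.

```python
# Symmetric (second-order) Trotter splitting of exp(-i t (A + B)).
import numpy as np

def u(H, t):
    """exp(-i t H) for a Hermitian H, via its eigendecomposition."""
    E, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * E)) @ V.conj().T

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)); A = A + A.T   # two non-commuting Hermitian
B = rng.normal(size=(6, 6)); B = B + B.T   # pieces of H = A + B

t, n = 1.0, 100
dt = t / n
step = u(A, dt / 2) @ u(B, dt) @ u(A, dt / 2)   # symmetric splitting
approx = np.linalg.matrix_power(step, n)
# BCH gives an O(dt^3) error per step, hence O(t * dt^2) overall:
print("splitting error:", np.linalg.norm(approx - u(A + B, t)))
```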
arXiv Detail & Related papers (2022-08-06T03:38:37Z) - A tensor network representation of path integrals: Implementation and
analysis [0.0]
We introduce a novel tensor network-based decomposition of path integral simulations involving the Feynman-Vernon influence functional.
The finite temporally non-local interactions introduced by the influence functional can be captured very efficiently using a matrix product state representation.
The flexibility of the AP-TNPI framework makes it a promising new addition to the family of path integral methods for non-equilibrium quantum dynamics.
arXiv Detail & Related papers (2021-06-23T16:41:54Z) - Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z) - Connecting Weighted Automata, Tensor Networks and Recurrent Neural
Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
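A compact way to see the spectral-learning connection this entry mentions is the classical Hankel-matrix algorithm for weighted finite automata: factor the Hankel matrix with a truncated SVD and read off the transition operators. The toy target below (f(x) = number of 'a's in x, which admits a rank-2 WFA) and all sizes are illustrative assumptions.

```python
# Spectral learning of a WFA from Hankel matrices via truncated SVD.
import numpy as np
from itertools import product

sigma = ['a', 'b']
f = lambda x: x.count('a')               # target function, rank 2

# Hankel blocks over all prefixes/suffixes up to length 2.
basis = [''] + [''.join(w) for n in (1, 2) for w in product(sigma, repeat=n)]
H = np.array([[f(p + s) for s in basis] for p in basis], dtype=float)
Hs = {c: np.array([[f(p + c + s) for s in basis] for p in basis],
                  dtype=float) for c in sigma}

k = 2                                    # model rank
U, S, Vt = np.linalg.svd(H)
U, S, Vt = U[:, :k], S[:k], Vt[:k, :]

# Factor H = (U diag(S)) @ Vt; the pseudo-inverses of these factors then
# recover the transition operators up to a gauge (basis) freedom.
A = {c: np.diag(1 / S) @ U.T @ Hs[c] @ Vt.T for c in sigma}
alpha = (U @ np.diag(S))[0]              # row of the empty prefix
omega = Vt[:, 0]                         # column of the empty suffix

def f_hat(x):
    v = alpha
    for c in x:
        v = v @ A[c]
    return v @ omega

for w in ['aab', 'bbb', 'ababa']:
    print(w, f(w), round(f_hat(w), 6))
```

This same Hankel-factorization viewpoint is what ties WFAs to the tensor-network and linear 2-RNN models discussed in the paper.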
arXiv Detail & Related papers (2020-10-19T15:28:00Z) - Optimization at the boundary of the tensor network variety [2.1839191255085995]
Tensor network states form a variational ansatz class widely used in the study of quantum many-body systems.
Recent work has shown that states on the boundary of this variety can yield more efficient representations for states of physical interest.
We show how to optimize over this class in order to find ground states of local Hamiltonians.
arXiv Detail & Related papers (2020-06-30T16:58:55Z) - Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z) - Controllable Orthogonalization in Training DNNs [96.1365404059924]
Orthogonality is widely used for training deep neural networks (DNNs) due to its ability to maintain all singular values of the Jacobian close to 1.
This paper proposes a computationally efficient and numerically stable orthogonalization method using Newton's iteration (ONI)
We show that our method improves the performance of image classification networks by effectively controlling the orthogonality to provide an optimal tradeoff between optimization benefits and representational capacity reduction.
We also show that ONI stabilizes the training of generative adversarial networks (GANs) by maintaining the Lipschitz continuity of a network, similar to spectral normalization.
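For context on the Newton-iteration idea, here is a minimal Newton-Schulz sketch that drives a matrix toward its nearest orthogonal factor; this is a standard iteration in the same spirit, not necessarily the paper's exact ONI variant, and the sizes and iteration count are illustrative assumptions.

```python
# Newton-Schulz iteration toward the orthogonal polar factor of W.
import numpy as np

def newton_schulz_orth(W, iters=25):
    """Iterate Y <- 1.5 Y - 0.5 Y Y^T Y toward an orthogonal matrix."""
    Y = W / np.linalg.norm(W)           # Frobenius norm >= spectral norm,
    for _ in range(iters):              # so singular values start in (0, 1]
        Y = 1.5 * Y - 0.5 * Y @ Y.T @ Y
    return Y

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))
Q = newton_schulz_orth(W)
print("||Q^T Q - I|| =", np.linalg.norm(Q.T @ Q - np.eye(5)))
```

The iteration converges for any full-rank starting matrix whose singular values lie in (0, sqrt(3)), which the Frobenius-norm scaling guarantees; unlike an explicit SVD, it uses only matrix multiplications, which is what makes it cheap and stable inside a training loop.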
arXiv Detail & Related papers (2020-04-02T10:14:27Z) - Efficient variational contraction of two-dimensional tensor networks
with a non-trivial unit cell [0.0]
Tensor network states provide an efficient class of states that faithfully capture strongly correlated quantum models and systems.
We generalize a recently proposed variational uniform matrix product state algorithm for one-dimensional quantum lattices to this two-dimensional setting.
A key property of the algorithm is a computational effort that scales linearly rather than exponentially in the size of the unit cell.
arXiv Detail & Related papers (2020-03-02T19:01:06Z)