Variational adiabatic transport of tensor networks
- URL: http://arxiv.org/abs/2311.00748v2
- Date: Tue, 28 Nov 2023 19:00:04 GMT
- Title: Variational adiabatic transport of tensor networks
- Authors: Hyeongjin Kim, Matthew T. Fishman, Dries Sels
- Abstract summary: We discuss a tensor network method for constructing the adiabatic gauge potential as a matrix product operator.
We show that we can reliably transport states through the critical point of the models we study.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We discuss a tensor network method for constructing the adiabatic gauge
potential -- the generator of adiabatic transformations -- as a matrix product
operator, which allows us to adiabatically transport matrix product states.
Adiabatic evolution of tensor networks offers a wide range of applications, of
which two are explored in this paper: improving tensor network optimization and
scanning phase diagrams. By efficiently transporting eigenstates to quantum
criticality and performing intermediary density matrix renormalization group
(DMRG) optimizations along the way, we demonstrate that we can compute ground
and low-lying excited states faster and more reliably than a standard DMRG
method at or near quantum criticality. We demonstrate a simple automated step
size adjustment and detection of the critical point based on the norm of the
adiabatic gauge potential. Remarkably, we are able to reliably transport states
through the critical point of the models we study.
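As background for the abstract above, here is a minimal sketch of the standard variational formulation of the adiabatic gauge potential (AGP), with \hbar = 1; the paper's specific contribution is to approximate this object as a matrix product operator, which is not reproduced here. The symbols H(\lambda), \mathcal{A}_\lambda, and G_\lambda follow the usual conventions of the variational-AGP literature:
\[
  G_\lambda(\mathcal{A}_\lambda) = \partial_\lambda H + i\,[\mathcal{A}_\lambda, H],
  \qquad
  \mathcal{A}_\lambda = \arg\min_{\mathcal{A}} \operatorname{Tr}\!\left[ G_\lambda(\mathcal{A})^2 \right],
  \qquad
  \left[ H,\, G_\lambda(\mathcal{A}_\lambda) \right] = 0,
\]
\[
  |\psi(\lambda + \delta\lambda)\rangle \;\approx\; e^{-i\,\mathcal{A}_\lambda\,\delta\lambda}\,|\psi(\lambda)\rangle .
\]
The Python sketch below illustrates the transport loop and a norm-based step-size heuristic in the simplest possible setting: exact dense matrices for a small transverse-field Ising chain, not the MPS/MPO machinery of the paper, and with the intermediary DMRG refinements omitted. The function names and the specific step-size rule (step size proportional to 1/||A_lambda||) are illustrative assumptions, not the authors' implementation.

# Dense-matrix toy sketch of AGP-based adiabatic transport (not the paper's MPO/MPS method).
import numpy as np
from scipy.linalg import expm

def ising(L, g):
    # H(g) = -sum_i Z_i Z_{i+1} - g * sum_i X_i on an open chain; returns H and dH/dg.
    X = np.array([[0., 1.], [1., 0.]], dtype=complex)
    Z = np.array([[1., 0.], [0., -1.]], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    def site_op(op, i):
        mats = [op if j == i else I2 for j in range(L)]
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    H_zz = sum(-site_op(Z, i) @ site_op(Z, i + 1) for i in range(L - 1))
    dH = sum(-site_op(X, i) for i in range(L))
    return H_zz + g * dH, dH

def exact_agp(H, dH, cutoff=1e-12):
    # Exact AGP in the eigenbasis: <m|A|n> = i <m|dH|n> / (E_n - E_m) for m != n.
    E, V = np.linalg.eigh(H)
    dH_eig = V.conj().T @ dH @ V
    gaps = E[None, :] - E[:, None]                        # gaps[m, n] = E_n - E_m
    gaps = np.where(np.abs(gaps) > cutoff, gaps, np.inf)  # drop (near-)degenerate terms
    return V @ (1j * dH_eig / gaps) @ V.conj().T

# Transport the ground state from g = 1.5 to g = 0.5, across the critical point g_c = 1.
L, g, g_end, target_angle = 6, 1.5, 0.5, 0.05
psi = np.linalg.eigh(ising(L, g)[0])[1][:, 0]
agp_norms = []
while g > g_end + 1e-10:
    H, dH = ising(L, g)
    A = exact_agp(H, dH)
    norm_A = np.linalg.norm(A)                   # grows sharply in the critical region
    agp_norms.append((g, norm_A))
    dg = -min(target_angle / norm_A, g - g_end)  # smaller steps where ||A|| is large
    psi = expm(-1j * A * dg) @ psi               # adiabatic transport step
    g += dg
g_c_estimate = max(agp_norms, key=lambda t: t[1])[0]  # rough critical-point estimate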
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Neutron-nucleus dynamics simulations for quantum computers [49.369935809497214]
We develop a novel quantum algorithm for neutron-nucleus simulations with general potentials.
It provides acceptable bound-state energies even in the presence of noise, thanks to a noise-resilient training method.
We introduce a new commutativity scheme called distance-grouped commutativity (DGC) and compare its performance with the well-known qubit-commutativity scheme.
arXiv Detail & Related papers (2024-02-22T16:33:48Z)
- Automatic structural optimization of tree tensor networks [0.0]
We propose a TTN algorithm that enables us to automatically optimize the network structure by local reconnections of isometries.
We demonstrate that the entanglement structure embedded in the ground-state of the system can be efficiently visualized as a perfect binary tree in the optimized TTN.
arXiv Detail & Related papers (2022-09-07T14:51:39Z)
- Transfer-matrix summation of path integrals for transport through nanostructures [62.997667081978825]
We develop a transfer-matrix method to describe the nonequilibrium properties of interacting quantum-dot systems.
The method is referred to as "transfer-matrix summation of path integrals" (TraSPI).
arXiv Detail & Related papers (2022-08-16T09:13:19Z)
- Regularized scheme of time evolution tensor network algorithms [0.0]
Regularized factorization is proposed to simulate time evolution for quantum lattice systems.
The resulting compact structure of the propagator indicates a high-order Baker-Campbell-Hausdorff series.
arXiv Detail & Related papers (2022-08-06T03:38:37Z)
- Positive-definite parametrization of mixed quantum states with deep neural networks [0.0]
We show how to embed an autoregressive structure in the Gram-Hadamard density operator (GHDO) to allow direct sampling of the probability distribution.
We benchmark this architecture by the steady state of the dissipative transverse-field Ising model.
arXiv Detail & Related papers (2022-06-27T17:51:38Z)
- Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction [51.80191416661064]
We propose a novel vision transformer with latent variables following an informative energy-based prior for salient object detection.
Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation.
With the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model confidence in predicting saliency from the image.
arXiv Detail & Related papers (2021-12-27T06:04:33Z)
- Simulating thermal density operators with cluster expansions and tensor networks [0.0]
We benchmark this cluster tensor network operator (cluster TNO) for one-dimensional systems.
We use this formalism for representing the thermal density operator of a two-dimensional quantum spin system at a certain temperature as a single cluster TNO.
We find through a scaling analysis that the cluster-TNO approximation gives rise to a continuous phase transition in the correct universality class.
arXiv Detail & Related papers (2021-12-02T18:56:44Z)
- Boundary theories of critical matchgate tensor networks [59.433172590351234]
Key aspects of the AdS/CFT correspondence can be captured in terms of tensor network models on hyperbolic lattices.
For tensors fulfilling the matchgate constraint, these have previously been shown to produce disordered boundary states.
We show that the corresponding boundary Hamiltonians exhibit multi-scale quasiperiodic symmetries captured by an analytical toy model.
arXiv Detail & Related papers (2021-10-06T18:00:03Z)
- Quantum-inspired event reconstruction with Tensor Networks: Matrix Product States [0.0]
We show that Tensor Networks are ideal vehicles to connect quantum mechanical concepts to machine learning techniques.
We show that entanglement entropy can be used to interpret what a network learns.
arXiv Detail & Related papers (2021-06-15T18:00:02Z)
- Pruning Redundant Mappings in Transformer Models via Spectral-Normalized Identity Prior [54.629850694790036]
Spectral-normalized identity prior (SNIP) is a structured pruning approach that penalizes an entire residual module in a Transformer model toward an identity mapping.
We conduct experiments with BERT on 5 GLUE benchmark tasks to demonstrate that SNIP achieves effective pruning results while maintaining comparable performance.
arXiv Detail & Related papers (2020-10-05T05:40:56Z)