A Pairwise Connected Tensor Network Representation of Path Integrals
- URL: http://arxiv.org/abs/2106.14934v4
- Date: Tue, 6 Jul 2021 17:09:02 GMT
- Title: A Pairwise Connected Tensor Network Representation of Path Integrals
- Authors: Amartya Bose
- Abstract summary: It has been recently shown how the tensorial nature of real-time path integrals involving the Feynman-Vernon influence functional can be utilized using matrix product states.
Here, a generalized tensor network is derived and implemented specifically incorporating the pairwise interaction structure of the influence functional.
This pairwise connected tensor network path integral (PCTNPI) is illustrated through applications to typical spin-boson problems and explorations of the differences caused by the exact form of the spectral density.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has been recently shown how the tensorial nature of real-time path
integrals involving the Feynman-Vernon influence functional can be utilized
using matrix product states, taking advantage of the finite length of the
non-Markovian memory. Tensor networks promise to provide a new, unified
language to express the structure of path integrals. Here, a generalized tensor
network is derived and implemented specifically incorporating the pairwise
interaction structure of the influence functional, allowing for a compact
representation and efficient evaluation. This pairwise connected tensor network
path integral (PCTNPI) is illustrated through applications to typical
spin-boson problems and explorations of the differences caused by the exact
form of the spectral density. The storage requirements and performance are
compared with iterative quasi-adiabatic propagator path integral and iterative
blip-summed path integral. Finally, the viability of using PCTNPI for
simulating multistate problems is demonstrated taking advantage of the
compressed representation.
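To make the pairwise structure concrete, here is a minimal Python sketch of a brute-force path sum for a two-level system in which every pair of time points is coupled through coefficients eta[k, k'], mimicking the pairwise interactions of the Feynman-Vernon influence functional. The propagator values and coupling magnitudes are invented for illustration, and no tensor-network contraction is performed; PCTNPI's contribution is to evaluate exactly this kind of pairwise-connected sum as a compact tensor network rather than by enumerating all 2^M paths.

```python
import numpy as np
from itertools import product

# Toy two-level system: path variables s_k in {-1, +1} over M time slices.
M = 6
states = (-1.0, 1.0)

# Hypothetical pairwise influence-functional coefficients eta[k, k'];
# in a real calculation these come from the bath spectral density.
rng = np.random.default_rng(0)
eta = 0.1 * rng.random((M, M))

def K(s_prev, s_next):
    """Toy bare short-time propagator between successive path variables."""
    return 0.9 if s_prev == s_next else 0.1

total = 0.0
for path in product(states, repeat=M):
    amp = 1.0
    for k in range(1, M):
        amp *= K(path[k - 1], path[k])
    # Influence-functional weight: one factor for every PAIR of time points.
    phase = sum(eta[k, kp] * path[k] * path[kp]
                for k in range(M) for kp in range(k))
    total += amp * np.exp(-phase)

print("brute-force path sum over", 2 ** M, "paths:", total)
```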
Related papers
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to constructing learnable parametric control variate functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand.
We apply our method to solve partial differential equations using the Walk-on-sphere algorithm.
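The key identity can be illustrated in a few lines of Python. Below, a polynomial fit stands in for the learned anti-derivative network G: its derivative g approximates the integrand f, its increment G(b) - G(a) is known exactly, and Monte Carlo is only needed for the small residual f - g. This is a generic sketch of the control-variate construction, not the paper's architecture or training procedure.

```python
import numpy as np

# Integrand and interval for I = int_a^b f(x) dx (illustrative choices).
f = lambda x: np.exp(-x) * np.sin(3.0 * x)
a, b = 0.0, 2.0

# Stand-in for the learned anti-derivative: fit a polynomial g ~ f, then
# integrate it in closed form; in the paper a neural network plays this role.
xs = np.linspace(a, b, 200)
g = np.poly1d(np.polyfit(xs, f(xs), deg=8))  # g approximates f
G = g.integ()                                # exact anti-derivative of g

rng = np.random.default_rng(1)
x = rng.uniform(a, b, size=10_000)

plain = (b - a) * np.mean(f(x))
# Control variate: exact part G(b) - G(a) plus Monte Carlo on the residual.
cv = (G(b) - G(a)) + (b - a) * np.mean(f(x) - g(x))

print(f"plain MC: {plain:.6f}   with control variate: {cv:.6f}")
```

Because g tracks f closely, the residual f - g has far smaller variance than f itself, which is where the estimator's gain comes from.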
arXiv Detail & Related papers (2024-09-23T06:04:28Z)
- Path-metrics, pruning, and generalization [13.894485461969772]
This paper proves a new bound on the network's function in terms of the so-called path-metrics of its parameters.
It is the first bound of its kind that is broadly applicable to modern networks such as ResNets, VGGs, U-nets, and many more.
arXiv Detail & Related papers (2024-05-23T19:23:09Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
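A toy Python illustration of the nonexpansive-operator viewpoint: iterating a plain rotation (a nonexpansive map whose only fixed point is the origin) cycles forever, while the interpolated (Krasnosel'skii-Mann) iteration x <- (1 - lam) x + lam T(x) contracts to the fixed point. The operator and step size are invented for illustration and are not the paper's training setup.

```python
import numpy as np

# Nonexpansive operator: rotation by 90 degrees; its only fixed point is 0.
theta = np.pi / 2
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x_plain = np.array([1.0, 0.0])
x_avg = x_plain.copy()
lam = 0.5  # interpolation weight in (0, 1), illustrative

for _ in range(50):
    x_plain = T @ x_plain                          # cycles on the unit circle
    x_avg = (1 - lam) * x_avg + lam * (T @ x_avg)  # interpolated step

print("plain iterate norm:       ", np.linalg.norm(x_plain))  # stays 1.0
print("interpolated iterate norm:", np.linalg.norm(x_avg))    # decays to ~0
```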
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Quantum correlation functions through tensor network path integral [0.0]
Tensor networks are utilized for calculating equilibrium correlation functions for open quantum systems.
The influence of the solvent on the quantum system is incorporated through an influence functional.
The design and implementation of this method is discussed along with illustrations from rate theory, symmetrized spin correlation functions, dynamical susceptibility calculations and quantum thermodynamics.
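For orientation, the quantity being targeted can be written down for a toy closed system. The Python sketch below evaluates a thermal correlation function C(t) = Tr[rho_beta A B(t)] for a bare two-level system by exact diagonalization; the paper's contribution is obtaining such correlation functions for open systems, where the solvent enters through the influence functional, which this sketch deliberately omits.

```python
import numpy as np

# Two-level system Hamiltonian (illustrative parameters).
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
H = 1.0 * sx + 0.5 * sz
beta = 1.0

evals, evecs = np.linalg.eigh(H)
rho = evecs @ np.diag(np.exp(-beta * evals)) @ evecs.conj().T
rho /= np.trace(rho).real  # thermal equilibrium state

def U(t):
    """Exact time-evolution operator exp(-iHt) via the eigenbasis."""
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# C(t) = Tr[rho sz sz(t)] with sz(t) in the Heisenberg picture.
for t in (0.0, 0.5, 1.0):
    sz_t = U(t).conj().T @ sz @ U(t)
    C = np.trace(rho @ sz @ sz_t)
    print(f"t = {t:.1f}   C(t) = {complex(C):.4f}")
```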
arXiv Detail & Related papers (2023-08-21T07:46:51Z)
- Approximation and interpolation of deep neural networks [0.0]
In the overparametrized regime, deep neural networks provide universal approximation and can interpolate any data set.
In the last section, we provide a practical probabilistic method of finding such a point under general conditions on the activation function.
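The interpolation claim is easy to reproduce with a generic random-features construction: with more random ReLU features than data points, solving only for the output weights already fits an arbitrary data set exactly. This is a standard overparametrization demonstration, not the paper's specific probabilistic method.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, width = 20, 3, 200        # width >> n: overparametrized regime

X = rng.normal(size=(n, d))     # arbitrary inputs
y = rng.normal(size=n)          # arbitrary labels

# Random hidden layer; only the output weights are fitted.
W = rng.normal(size=(d, width))
Phi = np.maximum(X @ W, 0.0)    # n x width ReLU feature matrix

# With width >= n, Phi generically has full row rank, so least squares
# drives the training error to (numerically) zero: exact interpolation.
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

print("max interpolation error:", np.max(np.abs(Phi @ beta - y)))
```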
arXiv Detail & Related papers (2023-04-20T08:45:16Z)
- Arithmetic circuit tensor networks, multivariable function representation, and high-dimensional integration [0.0]
We introduce a direct mapping from the arithmetic circuit of a function to circuit tensor networks.
We demonstrate the power of the circuit construction in examples of multivariable integration on the unit hypercube in up to 50 dimensions.
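The structural advantage can be seen in a toy Python example: for a function with low-rank product structure, a 50-dimensional integral over the unit hypercube collapses into products of one-dimensional integrals, so no 50-dimensional grid or Monte Carlo sample is ever formed. The rank-4 test function is invented; the paper's circuit-to-tensor-network mapping covers far more general arithmetic circuits.

```python
import numpy as np

rng = np.random.default_rng(3)
D, R = 50, 4                      # 50 dimensions, rank-4 product structure
c = rng.uniform(-0.5, 0.5, size=(R, D))

# f(x) = sum_r prod_i (1 + c[r, i] * x_i). Its integral over [0, 1]^D
# factorizes mode by mode: int_0^1 (1 + c x) dx = 1 + c / 2.
analytic = np.sum(np.prod(1.0 + c / 2.0, axis=1))

# Same answer assembled from 1-D midpoint-rule quadratures per coordinate.
xs = (np.arange(1000) + 0.5) / 1000.0
one_dim = np.mean(1.0 + c[:, :, None] * xs, axis=2)   # shape (R, D)
numeric = np.sum(np.prod(one_dim, axis=1))

print(f"analytic:              {analytic:.10f}")
print(f"factorized quadrature: {numeric:.10f}")
```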
arXiv Detail & Related papers (2022-08-16T23:02:14Z)
- Entangled Residual Mappings [59.02488598557491]
We introduce entangled residual mappings to generalize the structure of the residual connections.
An entangled residual mapping replaces the identity skip connections with specialized entangled mappings.
We show that while entangled mappings can preserve the iterative refinement of features across various deep models, they influence the representation learning process in convolutional networks.
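Structurally, the change is small: the identity skip in y = x + F(x) becomes a specialized map M in y = M x + F(x). The sketch below uses a random orthogonal M as one illustrative, norm-preserving choice; it is not the paper's specific family of entangled mappings.

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 8

def F(x, W1, W2):
    """Toy residual branch: a two-layer ReLU MLP."""
    return W2 @ np.maximum(W1 @ x, 0.0)

W1 = 0.1 * rng.normal(size=(dim, dim))
W2 = 0.1 * rng.normal(size=(dim, dim))
x = rng.normal(size=dim)

# Standard residual block: identity skip connection.
y_identity = x + F(x, W1, W2)

# "Entangled" variant: the skip path applies a structured map M instead
# of the identity; a random orthogonal matrix is used here for illustration.
M, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
y_entangled = M @ x + F(x, W1, W2)

print("identity skip: ", np.round(y_identity, 3))
print("entangled skip:", np.round(y_entangled, 3))
```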
arXiv Detail & Related papers (2022-06-02T19:36:03Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- A tensor network representation of path integrals: Implementation and analysis [0.0]
We introduce a novel tensor network-based decomposition of path integral simulations involving the Feynman-Vernon influence functional.
The finite temporally non-local interactions introduced by the influence functional can be captured very efficiently using a matrix product state representation.
The flexibility of the AP-TNPI framework makes it a promising new addition to the family of path integral methods for non-equilibrium quantum dynamics.
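The compressibility that matrix product states exploit can be illustrated directly: when pairwise coefficients decay with time separation (finite non-Markovian memory), the coefficient matrix is numerically low rank, and a truncated SVD, the operation underlying MPS compression, reproduces it with far fewer components than its dimension. The Gaussian decay profile below is invented for illustration.

```python
import numpy as np

M = 40
k = np.arange(M)
# Toy non-Markovian memory: pairwise coefficients fall off smoothly with
# the time separation |k - k'| (hypothetical Gaussian profile).
eta = np.exp(-(((k[:, None] - k[None, :]) / 5.0) ** 2))

U, s, Vt = np.linalg.svd(eta)
rank = int(np.sum(s / s[0] > 1e-6))           # numerical rank at 1e-6
eta_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # truncated reconstruction

print(f"numerical rank: {rank} of {M}")
print("relative truncation error:",
      np.linalg.norm(eta - eta_r) / np.linalg.norm(eta))
```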
arXiv Detail & Related papers (2021-06-23T16:41:54Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate the approximation of the infinite-dimensional mapping by composing nonlinear activation functions with a class of integral operators.
Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to the state of the art solvers.
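A single kernel-integral layer of this kind fits in a few lines; the update used below is the generic neural-operator form v(x) <- relu(W v(x) + mean_y kappa(x, y) v(y)), with a fixed Gaussian kernel and random weights standing in for the learned kernel network of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes, width = 30, 16

coords = rng.uniform(size=(n_nodes, 2))   # graph nodes = spatial points
v = rng.normal(size=(n_nodes, width))     # input node features

# Placeholder kernel kappa(x, y): scalar Gaussian in the distance, with a
# shared channel-mixing matrix (the paper learns this kernel with a network).
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
kappa = np.exp(-d2 / 0.1)
W_mix = rng.normal(size=(width, width)) / np.sqrt(width)
W_loc = rng.normal(size=(width, width)) / np.sqrt(width)

# One layer: local linear term plus a kernel-averaged neighborhood term.
aggregate = (kappa @ v) @ W_mix / n_nodes
v_next = np.maximum(v @ W_loc + aggregate, 0.0)

print("updated node features:", v_next.shape)
```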
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
Explicitly enumerating all such interactions scales exponentially in the number of features; to alleviate this, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
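The payoff of the tensor view is that the full weight tensor over D feature maps is never materialized: kept in rank-R canonical polyadic (CP) form, the model's score is a sum over R terms of products of per-mode inner products, at cost O(R D m) instead of O(m^D). The feature maps and sizes below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)
D, m, R = 4, 8, 3     # modes, feature-map size, CP rank

# CP factors: one m-vector per (rank, mode). The dense weight tensor they
# represent would have m**D entries and is never formed explicitly.
A = rng.normal(size=(R, D, m)) / np.sqrt(m)

# Illustrative per-mode feature maps phi_d(x): fixed random projections.
P = rng.normal(size=(D, m, 5))

def feature_maps(x):
    return np.stack([np.cos(P[d] @ x) for d in range(D)], axis=0)  # (D, m)

x = rng.normal(size=5)
phi = feature_maps(x)

# Score: sum_r prod_d <A[r, d], phi[d]>, i.e. O(R * D * m) work.
score = np.sum(np.prod(np.einsum('rdm,dm->rd', A, phi), axis=1))
print("CP-factorized model score:", float(score))
```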
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.