Correlations in Disordered Solvable Tensor Network States
- URL: http://arxiv.org/abs/2309.04776v1
- Date: Sat, 9 Sep 2023 12:31:22 GMT
- Title: Correlations in Disordered Solvable Tensor Network States
- Authors: Daniel Haag, Richard M. Milbradt, Christian B. Mendl
- Abstract summary: Solvable matrix product and projected entangled pair states evolved by dual and ternary-unitary quantum circuits have analytically accessible correlation functions.
We compute the average behavior of a physically motivated two-point equal-time correlation function with respect to random disordered tensor network states.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solvable matrix product and projected entangled pair states evolved by dual
and ternary-unitary quantum circuits have analytically accessible correlation
functions. Here, we investigate the influence of disorder. Specifically, we
compute the average behavior of a physically motivated two-point equal-time
correlation function with respect to random disordered solvable tensor network
states arising from the Haar measure on the unitary group. By employing the
Weingarten calculus, we provide an exact analytical expression for the average
of the $k$th moment of the correlation function. The complexity of the
expression scales with $k!$ and is independent of the complexity of the
underlying tensor network state. Our result implies that the correlation
function vanishes on average, while its covariance is nonzero.
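A worked illustration of the abstract's central claim, that the Haar-averaged correlation function vanishes while its covariance does not, follows from the first-moment Weingarten formula $\int \mathrm{d}U\, \mathrm{Tr}(A U B U^\dagger) = \mathrm{Tr}(A)\,\mathrm{Tr}(B)/d$, which is zero whenever $A$ and $B$ are traceless. The Python sketch below is a minimal numerical toy of that statement, not the paper's solvable tensor-network construction: the dimension, the sample count, and the Pauli-like observables are illustrative assumptions, and the Haar-random unitaries are drawn with SciPy.

```python
# Minimal toy check (illustrative, not the paper's tensor-network setup):
# estimate the Haar average and second moment of C(U) = Tr(A U B U^dagger)/d
# for traceless A, B. The average should vanish; the second moment should not.
import numpy as np
from scipy.stats import unitary_group

d = 8              # Hilbert-space dimension (illustrative choice)
n_samples = 20000  # number of Haar samples (illustrative choice)

# Traceless Pauli-Z-like observables embedded in dimension d.
A = np.diag([1.0] * (d // 2) + [-1.0] * (d // 2))
B = A.copy()

def correlator(U):
    """Two-point-style correlator Tr(A U B U^dagger) / d for one unitary."""
    return np.trace(A @ U @ B @ U.conj().T).real / d

samples = np.array([correlator(unitary_group.rvs(d)) for _ in range(n_samples)])

print("Haar average of C(U):  ", samples.mean())          # ~ 0: first moment vanishes
print("second moment of C(U): ", (samples ** 2).mean())   # nonzero covariance
```

In the paper itself no sampling is needed: the Weingarten calculus yields an exact expression for the $k$th moment, at a cost that grows as $k!$ but is independent of the complexity of the underlying tensor network state.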
Related papers
- Dynamical response and time correlation functions in random quantum systems [0.0]
Time-dependent response and correlation functions are studied in random quantum systems composed of infinitely many parts.
The correlation functions in individual members of the ensemble are characterised in terms of their probability distribution.
arXiv Detail & Related papers (2024-08-18T09:28:51Z)
- Two-time second-order correlation function [0.0]
A derivation of the two-time second-order correlation function is presented, following approaches such as the differential equation, the coherent-state propagator, and the quasi-statistical distribution function.
arXiv Detail & Related papers (2024-06-15T07:59:39Z)
- TIC-TAC: A Framework for Improved Covariance Estimation in Deep Heteroscedastic Regression [109.69084997173196]
Deep heteroscedastic regression involves jointly optimizing the mean and covariance of the predicted distribution using the negative log-likelihood.
Recent works show that this may result in sub-optimal convergence due to the challenges associated with covariance estimation.
Among other questions, we study whether the predicted covariance truly captures the randomness of the predicted mean.
Our results show that not only does TIC accurately learn the covariance, it additionally facilitates an improved convergence of the negative log-likelihood.
arXiv Detail & Related papers (2023-10-29T09:54:03Z)
- On the renormalization group fixed point of the two-dimensional Ising model at criticality [77.34726150561087]
We show that a simple, explicit analytic description of a fixed point using operator-algebraic renormalization (OAR) is possible.
Specifically, the fixed point is characterized in terms of spin-spin correlation functions.
arXiv Detail & Related papers (2023-04-06T16:57:28Z)
- Typical Correlation Length of Sequentially Generated Tensor Network States [0.0]
We focus on spins with local interactions, whose correlations are extremely well captured by tensor network states.
We define ensembles of random tensor network states in one and two spatial dimensions that admit a sequential generation.
We observe the consistent emergence of a correlation length that depends only on the underlying spatial dimension and not the considered measure (a minimal one-dimensional sketch appears after this list).
arXiv Detail & Related papers (2023-01-11T18:24:45Z)
- Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
arXiv Detail & Related papers (2022-08-29T16:16:22Z)
- Generalization Bounds via Convex Analysis [12.411844611718958]
We show that it is possible to replace the mutual information by any strongly convex function of the joint input-output distribution.
Examples include bounds stated in terms of $p$-norm divergences and the Wasserstein-2 distance.
arXiv Detail & Related papers (2022-02-10T12:30:45Z)
- An Indirect Rate-Distortion Characterization for Semantic Sources: General Model and the Case of Gaussian Observation [83.93224401261068]
The source model is motivated by the recent surge of interest in the semantic aspect of information.
The intrinsic state corresponds to the semantic feature of the source, which in general is not observable.
The rate-distortion function is the semantic rate-distortion function of the source.
arXiv Detail & Related papers (2022-01-29T02:14:24Z)
- Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\tfrac{\beta/\mu}{1-\rho}}\log\tfrac{1}{\varepsilon}\big)$ communication steps.
For the first time, this rate matches (up to poly-log factors) the lower communication complexity bounds for distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z)
- Correlations of quantum curvature and variance of Chern numbers [0.0]
We show that the correlation function diverges as the inverse of the distance at small separations.
We also define and analyse a correlation function of mixed states, showing that it is finite but singular at small separations.
arXiv Detail & Related papers (2020-12-07T18:00:40Z)
- Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory [110.99247009159726]
Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks.
In particular, temporal-difference learning converges when the function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise.
arXiv Detail & Related papers (2020-06-08T17:25:22Z)
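The entry on "Typical Correlation Length of Sequentially Generated Tensor Network States" above concerns random tensor network states that admit a sequential generation. The sketch below is a minimal one-dimensional illustration under assumed parameters (the physical dimension d, bond dimension D, and the Haar-isometry construction are illustrative choices, not the paper's exact ensemble): a left-canonical matrix product state is drawn by truncating a Haar-random unitary to an isometry, and the correlation length is read off from the second-largest eigenvalue magnitude of the transfer matrix.

```python
# Illustrative sketch: correlation length of a randomly (sequentially) generated
# matrix product state, extracted from its transfer-matrix spectrum.
import numpy as np
from scipy.stats import unitary_group

d, D = 2, 16  # physical and bond dimension (illustrative choices)

# Truncate a Haar-random (d*D x d*D) unitary to an isometry V (V^dagger V = 1)
# and reshape it into left-canonical MPS matrices A[s] of size D x D.
U = unitary_group.rvs(d * D)
V = U[:, :D]
A = V.reshape(d, D, D)

# Transfer matrix T = sum_s A[s] (x) conj(A[s]) as a (D^2 x D^2) matrix.
T = sum(np.kron(A[s], A[s].conj()) for s in range(d))

# For a canonical MPS the leading eigenvalue magnitude is 1; the correlation
# length follows from the second-largest magnitude: xi = -1 / ln |lambda_2|.
mags = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
xi = -1.0 / np.log(mags[1])
print("leading eigenvalue magnitude:", mags[0])  # ~ 1.0
print("correlation length xi:       ", xi)
```

Repeating this over many draws gives the kind of ensemble statistics of the correlation length that the cited paper studies.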
This list is automatically generated from the titles and abstracts of the papers on this site.