Learning finitely correlated states: stability of the spectral reconstruction
- URL: http://arxiv.org/abs/2312.07516v2
- Date: Thu, 2 May 2024 17:20:02 GMT
- Title: Learning finitely correlated states: stability of the spectral reconstruction
- Authors: Marco Fanizza, Niklas Galke, Josep Lumbreras, Cambyse Rouzé, Andreas Winter
- Abstract summary: We show that marginals of blocks of $t$ systems of any finitely correlated translation invariant state on a chain can be learned, in trace distance, with $O(t^2)$ copies.
The algorithm requires only the estimation of a marginal of a controlled size, in the worst case bounded by the minimum bond dimension.
- Score: 1.9573380763700716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that marginals of blocks of $t$ systems of any finitely correlated translation invariant state on a chain can be learned, in trace distance, with $O(t^2)$ copies -- with an explicit dependence on local dimension, memory dimension and spectral properties of a certain map constructed from the state -- and computational complexity polynomial in $t$. The algorithm requires only the estimation of a marginal of a controlled size, in the worst case bounded by the minimum bond dimension, from which it reconstructs a translation invariant matrix product operator. In the analysis, a central role is played by the theory of operator systems. A refined error bound can be proven for $C^*$-finitely correlated states, which have an operational interpretation in terms of sequential quantum channels applied to the memory system. We can also obtain an analogous error bound for a class of matrix product density operators reconstructible by local marginals. In this case, a linear number of marginals must be estimated, obtaining a sample complexity of $\tilde{O}(t^3)$. The learning algorithm also works for states that are only close to a finitely correlated state, with the potential of providing competitive algorithms for other interesting families of states.
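The spectral idea behind the reconstruction can be illustrated in the simplest, commuting special case of a finitely correlated state: a stationary hidden Markov process. There, block probabilities arrange into a Hankel-type matrix whose rank is bounded by the memory (bond) dimension, so an SVD of an estimated marginal reveals the memory needed for reconstruction. The following is a minimal numerical sketch of that ingredient, not the paper's algorithm; all parameter values are made up for the example.

```python
import numpy as np

# A 2-state stationary hidden Markov process (classical analogue of a
# finitely correlated state). The transition and emission matrices below
# are arbitrary illustrative choices.
T = np.array([[0.9, 0.1],   # hidden-state transition matrix (rows sum to 1)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # emission probabilities O[s, x]
              [0.3, 0.7]])

# Stationary distribution pi, satisfying pi @ T = pi.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Hankel-type matrix of 2-block marginals: H[x1, x2] = P(x1, x2)
# = sum_{s1,s2} pi[s1] O[s1,x1] T[s1,s2] O[s2,x2] = O^T diag(pi) T O.
H = O.T @ np.diag(pi) @ T @ O

# Its numerical rank is bounded by (and here equals) the memory dimension.
singular_values = np.linalg.svd(H, compute_uv=False)
memory_dim = int(np.sum(singular_values > 1e-12))
print(memory_dim)
```

In this generic example the estimated rank equals the number of hidden states, which is the quantity playing the role of the minimum bond dimension in the abstract above.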
Related papers
- Enforced Gaplessness from States with Exponentially Decaying Correlations [0.0]
We show that even certain exponentially decaying correlations can imply gaplessness.
Our findings have implications for identifying the subset of Hilbert space to which gapped ground states belong.
arXiv Detail & Related papers (2025-03-03T19:00:37Z)
- Non-convex matrix sensing: Breaking the quadratic rank barrier in the sample complexity [11.412228884390784]
We study the problem of reconstructing a low-rank matrix from a few quadratic measurements.
We show that factorized gradient descent with spectral initialization converges to the ground truth from a number of samples that breaks the quadratic rank barrier.
arXiv Detail & Related papers (2024-08-20T14:09:28Z)
- Efficient conversion from fermionic Gaussian states to matrix product states [48.225436651971805]
We propose a highly efficient algorithm that converts fermionic Gaussian states to matrix product states.
It can be formulated for finite-size systems without translation invariance, but becomes particularly appealing when applied to infinite systems.
The potential of our method is demonstrated by numerical calculations in two chiral spin liquids.
arXiv Detail & Related papers (2024-08-02T10:15:26Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions learned by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Transfer Operators from Batches of Unpaired Points via Entropic Transport Kernels [3.099885205621181]
We derive a maximum-likelihood inference functional, propose a computationally tractable approximation and analyze their properties.
We prove a $\Gamma$-convergence result showing that we can recover the true density from empirical approximations as the number $N$ of blocks goes to infinity.
arXiv Detail & Related papers (2024-02-13T12:52:41Z)
- Classifying fermionic states via many-body correlation measures [0.0]
We make progress in establishing the link between fermionic correlations and efficient computational physics methods.
We find a rigorous classification of states relative to $k$-fermion correlations, which admits a computational physics interpretation.
arXiv Detail & Related papers (2023-09-14T18:00:02Z)
- Nonlocality under Computational Assumptions [51.020610614131186]
A set of correlations is said to be nonlocal if it cannot be reproduced by spacelike-separated parties sharing randomness and performing local operations.
We show that there exist correlations generated by (efficient) local measurements that cannot be reproduced with shared randomness and quantum polynomial-time computation.
arXiv Detail & Related papers (2023-03-03T16:53:30Z)
- Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
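The linearized optimal transport (LOT) idea underlying Wassmap-style embeddings can be sketched in the one-dimensional case, where optimal transport maps are explicit quantile functions. The following is a toy illustration of that principle, not the LOT Wassmap algorithm; the family of measures and all parameters are made up.

```python
import numpy as np

# In 1D, the optimal transport map to a reference measure is the quantile
# function, so each measure embeds as its quantile function sampled on a
# fixed grid; Euclidean distances between embeddings then stand in for
# Wasserstein-2 distances, avoiding pairwise OT computations.
k = 200
p = (np.arange(k) + 0.5) / k              # quantile grid of the reference

# Illustrative family: uniform distributions on [mu, mu+1], whose quantile
# function is simply mu + p.
shifts = np.array([0.0, 0.5, 1.2, 3.0])
embeddings = np.stack([mu + p for mu in shifts])   # LOT coordinates

def lot_dist(u, v):
    # RMS distance between embeddings; for pure shifts this equals the
    # Wasserstein-2 distance exactly.
    return np.sqrt(np.mean((u - v) ** 2))

d01 = lot_dist(embeddings[0], embeddings[1])

# A PCA (via SVD) of the centered embeddings exposes the underlying
# one-dimensional structure of this translation family.
centered = embeddings - embeddings.mean(axis=0)
sv = np.linalg.svd(centered, compute_uv=False)
print(d01)
```

For this translation family the pairwise embedding distances reproduce the shift differences exactly, and the second singular value vanishes, so the low-dimensional structure is recovered from a single embedding step.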
arXiv Detail & Related papers (2023-02-14T22:12:16Z)
- Learning Transition Operators From Sparse Space-Time Samples [11.859913430860335]
We consider the nonlinear problem of learning a transition operator $\mathbf{A}$ from partial observations at different times.
We show that no more than $\mathcal{O}(rn \log(nT))$ space-time samples are sufficient to ensure accurate recovery of a rank-$r$ operator.
arXiv Detail & Related papers (2022-12-01T18:33:59Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers [50.85524803885483]
This work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z)
- Free versus Bound Entanglement: Machine learning tackling a NP-hard problem [0.06091702876917279]
Entanglement detection in high-dimensional systems is an NP-hard problem for which no efficient general method is known.
We study a family of magically symmetric states of bipartite qutrits, of which we find 82% to be free entangled, 2% to be certainly separable, and as much as 10% to be bound entangled.
arXiv Detail & Related papers (2021-06-07T21:38:39Z)
- An $L^2$ Analysis of Reinforcement Learning in High Dimensions with Kernel and Neural Network Approximation [9.088303226909277]
This paper considers the situation where the function approximation is made using the kernel method or the two-layer neural network model.
We establish an $\tilde{O}(H^3 |\mathcal{A}|^{\frac{1}{4}} n^{-\frac{1}{4}})$ bound for the optimal policy with $Hn$ samples.
Even though this result still requires a finite-sized action space, the error bound is independent of the dimensionality of the state space.
arXiv Detail & Related papers (2021-04-15T21:59:03Z)
- Improved rates for prediction and identification of partially observed linear dynamical systems [4.68299658663016]
Identification of a linear time-invariant dynamical system from partial observations is a fundamental problem in control theory.
We propose an algorithm that learns such systems with non-asymptotic statistical rates depending on the inherent dimensionality (order) $d$ of the system.
Our algorithm is based on multi-scale low-rank approximation: SVD applied to Hankel matrices of increasing sizes.
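The Hankel-SVD ingredient can be sketched with the classical Ho-Kalman observation: the Hankel matrix of a linear system's Markov parameters has rank equal to the system order. The following minimal illustration uses noiseless impulse-response data and a made-up system; it is in the spirit of that classical procedure, not the paper's multi-scale algorithm.

```python
import numpy as np

# A made-up 2-dimensional state-space system (A, B, C).
A = np.array([[0.9, 0.5],
              [0.0, 0.5]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Markov parameters h_k = C A^k B (noiseless impulse response).
m = 5
h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(2 * m - 1)]

# Hankel matrix H[i, j] = h_{i+j}; for a controllable and observable
# system its rank equals the state dimension.
H = np.array([[h[i + j] for j in range(m)] for i in range(m)])

sv = np.linalg.svd(H, compute_uv=False)
order = int(np.sum(sv > 1e-8 * sv[0]))
print(order)
```

Here the numerical rank of the Hankel matrix recovers the true order 2; with noisy partial observations, thresholding singular values of Hankel matrices of increasing sizes plays the same role at multiple scales.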
arXiv Detail & Related papers (2020-11-19T18:04:18Z)
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$, matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{k}{\epsilon})$ samples, a number linear in the high dimension and nearly linear in the typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
- Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarially robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture to information-theoretically near-optimal error $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof for clusters of a Gaussian mixture, which can be captured by the constant-degree Sum-of-Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z)
- Quantum Algorithms for Simulating the Lattice Schwinger Model [63.18141027763459]
We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings.
In lattice units, we find a Schwinger model on $N/2$ physical sites with coupling constant $x^{-1/2}$ and electric field cutoff $x^{-1/2}\Lambda$.
We estimate the resources required to measure observables in both the NISQ and fault-tolerant settings, assuming a simple target observable: the mean pair density.
arXiv Detail & Related papers (2020-02-25T19:18:36Z)
- Multiple Metric Learning for Structured Data [0.0]
We address the problem of merging graph and feature-space information while learning a metric from structured data.
We propose a new graph-based technique for optimizing under metric constraints.
arXiv Detail & Related papers (2020-02-13T19:11:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.