Large deviation principle for moment map estimation
- URL: http://arxiv.org/abs/2004.14504v1
- Date: Wed, 29 Apr 2020 22:27:22 GMT
- Title: Large deviation principle for moment map estimation
- Authors: Alonso Botero, Matthias Christandl, Péter Vrana
- Abstract summary: We consider a family of positive operator valued measures associated with representations of compact connected Lie groups.
For invertible states we prove that the measures satisfy the large deviation principle with an explicitly given rate function.
- Score: 2.3857747529378917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a family of positive operator valued measures associated with
representations of compact connected Lie groups. For many independent copies of
a single state and a tensor power representation we show that the observed
probability distributions converge to the value of the moment map. For
invertible states we prove that the measures satisfy the large deviation
principle with an explicitly given rate function.
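The classical analogue of this concentration-plus-large-deviations picture is Sanov's theorem, where the empirical distribution of i.i.d. samples concentrates on the true one and deviations are governed by the relative entropy. A minimal numerical sketch of that classical analogue (an illustration only, not the paper's noncommutative setting):
```python
import numpy as np

# Classical analogue (illustration only): for n i.i.d. two-outcome samples,
# the empirical frequency concentrates on the true probability, and
# P(deviation) ~ exp(-n * D(q || p)) with D the relative entropy (Sanov).
rng = np.random.default_rng(0)
p0 = 0.7                                    # true probability of outcome 0

def kl(q, p):                               # binary relative entropy D(q || p)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

for n in (50, 100, 200, 400):
    counts = rng.binomial(n, p0, size=500_000)
    prob = np.mean(counts / n <= 0.6)       # empirical frequency dips to 0.6 or below
    print(n, -np.log(prob) / n, "->", kl(0.6, p0))   # empirical rate vs. D(0.6 || 0.7)
```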
Related papers
- Distributional Matrix Completion via Nearest Neighbors in the Wasserstein Space [8.971989179518216]
Given a sparsely observed matrix of empirical distributions, we seek to impute the true distributions associated with both observed and unobserved matrix entries.
We utilize tools from optimal transport to generalize the nearest neighbors method to the distributional setting.
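In one dimension, the optimal-transport tools reduce to quantile manipulations: the W2 distance between equal-size empirical measures is the L2 distance of sorted samples, and the barycenter is the average of quantile functions. A minimal sketch under those assumptions (hypothetical helpers, not the paper's code):
```python
import numpy as np

def w2_1d(x, y):
    # W2 between equal-size 1-D empirical measures = L2 distance of sorted samples
    return np.sqrt(np.mean((np.sort(x) - np.sort(y)) ** 2))

def barycenter_1d(samples):
    # 1-D Wasserstein barycenter = average of the (empirical) quantile functions
    return np.mean([np.sort(s) for s in samples], axis=0)

# Impute a distribution from its nearest observed neighbors under W2.
rng = np.random.default_rng(1)
observed = [rng.normal(0, 1, 100), rng.normal(0.1, 1, 100), rng.normal(5, 1, 100)]
target = rng.normal(0, 1, 100)
nearest = np.argsort([w2_1d(target, s) for s in observed])[:2]
imputed = barycenter_1d([observed[i] for i in nearest])
```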
arXiv Detail & Related papers (2024-10-17T00:50:17Z)
- Consistent Estimation of a Class of Distances Between Covariance Matrices [7.291687946822539]
We are interested in the family of distances that can be expressed as sums of traces of functions that are separately applied to each covariance matrix.
A statistical analysis of the behavior of this class of distance estimators has also been conducted.
We present a central limit theorem that establishes the Gaussianity of these estimators and provides closed form expressions for the corresponding means and variances.
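Such distances are built from plug-in estimates of trace functionals tr f(C); a rough sketch of one such plug-in term, with the choice f = log an assumption for illustration:
```python
import numpy as np

def trace_functional(samples, f):
    # Plug-in estimate of tr f(C) from the sample covariance (illustration only)
    C_hat = np.cov(samples, rowvar=False)
    return np.sum(f(np.linalg.eigvalsh(C_hat)))

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5))
C_true = A @ A.T + 5 * np.eye(5)            # a well-conditioned covariance
samples = rng.multivariate_normal(np.zeros(5), C_true, size=20_000)
print(trace_functional(samples, np.log),
      "vs tr log C =", np.sum(np.log(np.linalg.eigvalsh(C_true))))
```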
arXiv Detail & Related papers (2024-09-18T07:36:25Z)
- Conditional Independence of 1D Gibbs States with Applications to Efficient Learning [0.23301643766310368]
We show that spin chains in thermal equilibrium have a correlation structure in which individual regions are strongly correlated at most with their near vicinity.
We prove that the corresponding correlation measures decay superexponentially at every positive temperature.
arXiv Detail & Related papers (2024-02-28T17:28:01Z)
- Remarks on the quasi-position representation in models of generalized uncertainty principle [0.0]
This note aims to elucidate certain aspects of the quasi-position representation frequently used in the investigation of one-dimensional models.
We focus on two key points: (i) contrary to recent claims, the quasi-position operator can possess physical significance even though it is non-Hermitian, and (ii) in the quasi-position representation, the operator associated with position acts as a derivative operator with respect to the quasi-position coordinate.
arXiv Detail & Related papers (2023-06-20T11:46:56Z)
- On constructing informationally complete covariant positive operator-valued measures [0.0]
We study positive operator-valued measures generated by orbits of projective unitary representations of locally compact Abelian groups.
It is shown that integration over such a measure defines a family of contractions that are multiples of unitary operators from the representation.
arXiv Detail & Related papers (2023-01-29T16:57:56Z)
- Statistical Efficiency of Score Matching: The View from Isoperimetry [96.65637602827942]
We show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated.
We formalize these results both in the infinite-sample regime and in the finite-sample regime.
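For context, the (Hyvärinen) score matching objective E[½ s(x)² + s′(x)] can be minimised without knowing the normalising constant; a toy 1-D Gaussian sketch of that objective (an illustration, not the paper's analysis):
```python
import numpy as np

# Score matching for a 1-D Gaussian model: s(x) = -(x - mu) / v gives the
# empirical objective J(v) = E[(x - mu)^2] / (2 v^2) - 1 / v, minimised at
# v = E[(x - mu)^2], i.e. the sample variance.
rng = np.random.default_rng(2)
x = rng.normal(1.5, 2.0, 10_000)            # true variance is 4.0
mu = x.mean()
v = np.linspace(0.5, 10, 400)               # candidate variances
J = ((x - mu) ** 2).mean() / (2 * v ** 2) - 1 / v
print("score-matching estimate:", v[np.argmin(J)], "sample variance:", x.var())
```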
arXiv Detail & Related papers (2022-10-03T06:09:01Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Localisation in quasiperiodic chains: a theory based on convergence of local propagators [68.8204255655161]
We present a theory of localisation in quasiperiodic chains with nearest-neighbour hoppings, based on the convergence of local propagators.
The local propagators can be expressed as continued fractions; analysing the convergence of these continued fractions determines localisation or its absence, yielding in turn the critical points and mobility edges.
Results are exemplified by analysing the theory for three quasiperiodic models covering a range of behaviour.
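A rough sketch of the continued-fraction idea for a semi-infinite Aubry-André chain, where the local propagator satisfies G_i(E) = 1/(E − ε_i − t² G_{i+1}(E)); the model, the truncation, and the Im G diagnostic are assumptions for illustration, not the paper's exact construction:
```python
import numpy as np

def local_propagator(E, lam, t=1.0, phase=0.3, depth=5000, eta=1e-3):
    # Truncated continued fraction G_i = 1 / (E - eps_i - t^2 * G_{i+1}),
    # evaluated from the tail inward for a quasiperiodic on-site potential.
    golden = (np.sqrt(5) - 1) / 2
    G = 0.0 + 0.0j
    for n in range(depth, 0, -1):
        eps = 2 * lam * np.cos(2 * np.pi * golden * n + phase)
        G = 1.0 / (E + 1j * eta - eps - t ** 2 * G)
    return G

# In the Aubry-Andre model, states are extended for lam < 1 and localised for
# lam > 1; Im G at a typical energy gives a crude diagnostic of the difference.
for lam in (0.5, 2.0):
    print(lam, local_propagator(0.0, lam).imag)
```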
arXiv Detail & Related papers (2021-02-18T16:19:52Z)
- $\gamma$-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator [95.71091446753414]
We propose to use a nearest-neighbor-based $\gamma$-divergence estimator as a data discrepancy measure.
Our method achieves significantly higher robustness than existing discrepancy measures.
arXiv Detail & Related papers (2020-06-13T06:09:27Z)
- Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Discrete Distributions [63.60499266361255]
We show that for samples of discrete distributions, profile entropy is a fundamental measure unifying the concepts of estimation, inference, and compression.
Specifically, profile entropy a) determines the speed of estimating the distribution relative to the best natural estimator; b) characterizes the rate of inferring all symmetric properties compared with the best estimator over any label-invariant distribution collection; c) serves as the limit of profile compression.
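The profile of a sample records how many distinct symbols occur each number of times, discarding the symbol labels; a small sketch of its computation:
```python
from collections import Counter

def profile(sample):
    # Map each multiplicity k to the number of symbols appearing exactly k times.
    multiplicities = Counter(sample).values()
    return dict(sorted(Counter(multiplicities).items()))

print(profile("abracadabra"))   # {1: 2, 2: 2, 5: 1}: c,d once; b,r twice; a five times
```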
arXiv Detail & Related papers (2020-02-26T17:49:04Z)
- Minimax Optimal Estimation of KL Divergence for Continuous Distributions [56.29748742084386]
Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
One simple and effective estimator is based on the k nearest neighbor distances between these samples.
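A standard construction of this kind is the Wang-Kulkarni-Verdú k-NN estimator; a minimal sketch (a generic version, not necessarily the exact variant analysed in the paper):
```python
import numpy as np
from scipy.spatial import cKDTree

def kl_knn(x, y, k=1):
    # k-NN estimate of D(P || Q) from samples x ~ P (n x d) and y ~ Q (m x d):
    # (d/n) * sum_i log(nu_k(i) / rho_k(i)) + log(m / (n - 1)).
    n, d = x.shape
    m = y.shape[0]
    rho = cKDTree(x).query(x, k + 1)[0][:, -1]   # k-th neighbor within x (skip self)
    nu = cKDTree(y).query(x, k)[0]
    nu = nu[:, -1] if k > 1 else nu              # k-th neighbor of x-points in y
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, (5000, 1))
y = rng.normal(1.0, 1.0, (5000, 1))
print(kl_knn(x, y), "vs true KL(N(0,1) || N(1,1)) =", 0.5)
```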
arXiv Detail & Related papers (2020-02-26T16:37:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.