On the existence of unbiased resilient estimators in discrete quantum
systems
- URL: http://arxiv.org/abs/2402.15242v2
- Date: Fri, 1 Mar 2024 08:33:14 GMT
- Title: On the existence of unbiased resilient estimators in discrete quantum
systems
- Authors: Javier Navarro, Ricard Ravell Rodríguez, and Mikel Sanz
- Abstract summary: We compare the performance of Cramér-Rao and Bhattacharyya bounds when faced with less-than-ideal prior knowledge of the parameter.
For a given system dimension, one can construct estimators in quantum systems that exhibit increased robustness to prior ignorance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Cramér-Rao bound constitutes a crucial lower bound for the mean squared
error of an estimator in frequentist parameter estimation, albeit paradoxically
demanding highly accurate prior knowledge of the parameter to be estimated.
Indeed, this information is needed to construct the optimal unbiased estimator,
which is highly dependent on the parameter. Conversely, Bhattacharyya bounds
yield estimation that is more resilient to inaccurate prior knowledge by imposing
additional constraints on the estimator. First, we conduct a quantitative
comparison of the performance of the Cramér-Rao and Bhattacharyya bounds
when faced with less-than-ideal prior knowledge of the parameter. Furthermore,
we demonstrate that the $n^{th}$-order classical and quantum Bhattacharyya
bounds cannot be computed -- given the absence of estimators satisfying the
constraints -- under specific conditions tied to the dimension $m$ of the
discrete system. Intriguingly, for a system with the same dimension $m$, the
maximum non-trivial order $n$ is $m-1$ in the classical case, while in the
quantum realm, it extends to $m(m+1)/2-1$. Consequently, for a given system
dimension, one can construct estimators in quantum systems that exhibit
increased robustness to prior ignorance.
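As a toy numerical check of these claims (a hypothetical polynomial family chosen for illustration, not an example from the paper), the classical $n^{th}$-order Bhattacharyya bound for an $m$-outcome distribution can be formed from the matrix $J_{ij} = \sum_x \partial^i p(x)\,\partial^j p(x)/p(x)$ and reading off $[J^{-1}]_{11}$; for $m = 3$ the matrix is invertible up to order $n = m-1 = 2$ and singular beyond:

```python
import numpy as np

# Toy discrete system with m = 3 outcomes and a polynomial parametrization,
# so derivatives in theta are exact and no finite differences are needed.
def probs(theta):
    return np.array([theta, theta**2, 1.0 - theta - theta**2])

def dprobs(theta, order):
    if order == 1:
        return np.array([1.0, 2.0 * theta, -1.0 - 2.0 * theta])
    if order == 2:
        return np.array([0.0, 2.0, -2.0])
    return np.zeros(3)  # every derivative of order >= 3 vanishes

def bhatt_matrix(theta, n):
    """Classical Bhattacharyya matrix J_{ij}, i, j = 1..n."""
    p = probs(theta)
    return np.array([[np.sum(dprobs(theta, i) * dprobs(theta, j) / p)
                      for j in range(1, n + 1)] for i in range(1, n + 1)])

theta = 0.3
crb = np.linalg.inv(bhatt_matrix(theta, 1))[0, 0]  # order 1: Cramer-Rao bound
b2 = np.linalg.inv(bhatt_matrix(theta, 2))[0, 0]   # order m-1 = 2: tighter bound
rank3 = np.linalg.matrix_rank(bhatt_matrix(theta, 3))

print(b2 >= crb)   # True: a higher order never loosens the bound
print(rank3 < 3)   # True: at order 3 > m-1 the matrix is singular
```

Order 1 recovers the Cramér-Rao bound $1/F(\theta)$. The singularity at order 3 is in fact general, not specific to this family: since $\sum_x p(x) = 1$, every derivative vector satisfies $\sum_x \partial^i p(x) = 0$, so the rows of the derivative matrix span at most an $(m-1)$-dimensional subspace and $\operatorname{rank}(J) \le m-1$, mirroring the paper's claim that the maximum non-trivial classical order is $m-1$.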
Related papers
- In-Context Parametric Inference: Point or Distribution Estimators? [66.22308335324239]
We show that amortized point estimators generally outperform posterior inference, though the latter remain competitive in some low-dimensional problems.
arXiv Detail & Related papers (2025-02-17T10:00:24Z) - Comparison of estimation limits for quantum two-parameter estimation [1.8507567676996612]
We compare the attainability of the Nagaoka Cramér-Rao bound and the Lu-Wang uncertainty relation.
We show that these two limits can provide different information about the physically attainable precision.
arXiv Detail & Related papers (2024-07-17T10:37:08Z) - Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
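For context, a minimal sketch of the textbook one-nearest-neighbor matching estimator that such modifications build on (synthetic data with a known constant treatment effect; the paper's actual bias-corrected construction is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Plain 1-NN matching estimator of the average treatment effect on the
# treated: each treated unit is matched to its nearest control in
# covariate space, and the outcome differences are averaged.
def nn_match_att(X_t, Y_t, X_c, Y_c):
    d = np.linalg.norm(X_t[:, None, :] - X_c[None, :, :], axis=2)
    idx = d.argmin(axis=1)  # nearest control for each treated unit
    return float(np.mean(Y_t - Y_c[idx]))

# Synthetic data: outcome is linear in X, true treatment effect = 1.0
X_c = rng.normal(size=(500, 2))
Y_c = X_c.sum(axis=1) + rng.normal(scale=0.1, size=500)
X_t = rng.normal(size=(200, 2))
Y_t = X_t.sum(axis=1) + 1.0 + rng.normal(scale=0.1, size=200)

att = nn_match_att(X_t, Y_t, X_c, Y_c)
print(att)  # close to the true effect 1.0
```

The residual matching discrepancy (the covariate gap between each treated unit and its match) is exactly the bias term that smoothing-parameter-free modifications like the ones above target.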
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - Fundamental bounds for parameter estimation with few measurements [0.0]
We discuss different linear (Barankin-like) conditions that can be imposed on estimators and analyze when these conditions admit an optimal estimator with finite variance.
We show that, if the number of imposed conditions is larger than the number of measurement outcomes, there generally does not exist a corresponding estimator with finite variance.
We derive an extended Cramér-Rao bound that is compatible with a finite variance in situations where the Barankin bound is undefined.
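The counting argument can be illustrated with a small linear-algebra sketch (random stand-in matrices, not the paper's actual moment conditions): an estimator on $K$ outcomes is a vector $t \in \mathbb{R}^K$, each Barankin-like linear condition is one equation in $t$, and more independent conditions than outcomes generically leave no solution:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3  # number of measurement outcomes

# Each linear condition on the estimator values t(x) reads
#   sum_x A[j, x] * t(x) = c[j].
# A and c are random placeholders for the paper's moment conditions.
def has_exact_solution(A, c, tol=1e-9):
    t, *_ = np.linalg.lstsq(A, c, rcond=None)
    return bool(np.linalg.norm(A @ t - c) < tol)

A3 = rng.normal(size=(3, K)); c3 = rng.normal(size=3)
A4 = rng.normal(size=(4, K)); c4 = rng.normal(size=4)

print(has_exact_solution(A3, c3))  # K conditions: generically solvable
print(has_exact_solution(A4, c4))  # K+1 conditions: generically infeasible
```

When no estimator satisfies all the conditions, the corresponding variance bound cannot even be posed, which is the situation the extended bound above is designed to handle.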
arXiv Detail & Related papers (2024-02-22T12:40:08Z) - Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation [49.67011673289242]
This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a smooth manifold.
It induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure.
arXiv Detail & Related papers (2023-11-08T15:17:13Z) - Evaluating the quantum optimal biased bound in a unitary evolution
process [12.995137315679923]
We introduce two effective error bounds for biased estimators based on a unitary evolution process.
We show their estimation performance by two specific examples of the unitary evolution process.
arXiv Detail & Related papers (2023-09-09T02:15:37Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum-variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for bias-constrained estimation (BCE) lies in applications where multiple estimates of the same unknown are averaged for improved performance.
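A minimal numerical sketch of the Gauss-Markov statement this paper starts from (a synthetic heteroscedastic design, not the paper's learned estimator): both OLS and weighted least squares are linear and unbiased, and the difference of their covariance matrices is positive semidefinite, so WLS has minimum variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 2
X = rng.normal(size=(n, p))
sigma2 = rng.uniform(0.5, 3.0, size=n)   # heteroscedastic noise variances
Sigma_inv = np.diag(1.0 / sigma2)

# Exact covariance matrices of the two linear unbiased estimators
# of beta in the model y = X beta + eps, Cov(eps) = diag(sigma2).
cov_wls = np.linalg.inv(X.T @ Sigma_inv @ X)
XtX_inv = np.linalg.inv(X.T @ X)
cov_ols = XtX_inv @ X.T @ np.diag(sigma2) @ X @ XtX_inv

# Gauss-Markov (Aitken form): cov_ols - cov_wls is positive semidefinite.
eigs = np.linalg.eigvalsh(cov_ols - cov_wls)
print(eigs.min() >= -1e-12)  # True
```

The bias-constrained deep estimators in the paper aim to retain this kind of unbiasedness guarantee while leaving the linear-model setting.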
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Divergence Frontiers for Generative Models: Sample Complexity,
Quantization Level, and Frontier Integral [58.434753643798224]
Divergence frontiers have been proposed as an evaluation framework for generative models.
We establish non-asymptotic bounds on the sample complexity of the plug-in estimator of divergence frontiers.
We also augment the divergence frontier framework by investigating the statistical performance of smoothed distribution estimators.
arXiv Detail & Related papers (2021-06-15T06:26:25Z) - Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z) - Distributional robustness of K-class estimators and the PULSE [4.56877715768796]
We prove that the classical K-class estimator satisfies such an optimality property by establishing a connection between K-class estimators and anchor regression.
We show that it can be computed efficiently as a data-driven simulation K-class estimator.
There are several settings including weak instrument settings, where it outperforms other estimators.
arXiv Detail & Related papers (2020-05-07T09:39:07Z) - On Low-rank Trace Regression under General Sampling Distribution [9.699586426043885]
We show that cross-validated estimators satisfy near-optimal error bounds on general assumptions.
We also show that the cross-validated estimator outperforms the theory-inspired approach of selecting the parameter.
arXiv Detail & Related papers (2019-04-18T02:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.