On the existence of unbiased resilient estimators in discrete quantum systems
- URL: http://arxiv.org/abs/2402.15242v3
- Date: Mon, 20 Jan 2025 19:18:22 GMT
- Title: On the existence of unbiased resilient estimators in discrete quantum systems
- Authors: Javier Navarro, Ricard Ravell Rodríguez, Mikel Sanz
- Abstract summary: Bhattacharyya bounds offer a more robust estimation framework with respect to prior accuracy.
We show that when the number of constraints exceeds the number of measurement outcomes, an estimator with finite variance typically does not exist.
- Abstract: The Cramér-Rao bound serves as a crucial lower limit for the mean squared error of an estimator in frequentist parameter estimation. Paradoxically, it requires highly accurate prior knowledge of the estimated parameter for constructing the optimal unbiased estimator. In contrast, Bhattacharyya bounds offer a more robust estimation framework with respect to prior accuracy by introducing additional constraints on the estimator. In this work, we examine divergences that arise in the computation of these bounds and establish the conditions under which they remain valid. Notably, we show that when the number of constraints exceeds the number of measurement outcomes, an estimator with finite variance typically does not exist. Furthermore, we systematically investigate the properties of these bounds using paradigmatic examples, comparing them to the Cramér-Rao and Bayesian approaches.
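The non-existence result has an elementary linear-algebra reading: an estimator over m measurement outcomes is a vector of m values, each Bhattacharyya-type condition is one linear constraint on that vector, and more constraints than outcomes generically yields an inconsistent system. A minimal numerical sketch of this counting argument, with generic random coefficients standing in for the probability-derived ones (all values illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

m = 3  # number of measurement outcomes
k = 4  # number of linear (Bhattacharyya-type) constraints, k > m

# Each row of A holds the coefficients of one linear constraint on the
# estimator values v = (v(1), ..., v(m)); b holds the required targets
# (e.g. theta for unbiasedness, 0 or 1 for derivative-matching conditions).
A = rng.normal(size=(k, m))
b = rng.normal(size=k)

# A v = b has a solution iff rank(A) == rank of the augmented matrix [A | b].
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

print("constraints:", k, "outcomes:", m)
print("solvable:", rank_A == rank_Ab)  # generically False when k > m
```

For generic coefficients the augmented matrix gains a rank, so no estimator (of any variance) satisfies all constraints, consistent with the abstract's claim.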
Related papers
- In-Context Parametric Inference: Point or Distribution Estimators? [66.22308335324239]
Our experiments indicate that amortized point estimators generally outperform posterior inference, though the latter remain competitive in some low-dimensional problems.
arXiv Detail & Related papers (2025-02-17T10:00:24Z) - Comparison of estimation limits for quantum two-parameter estimation [1.8507567676996612]
We compare the attainability of the Nagaoka Cramér-Rao bound and the Lu-Wang uncertainty relation.
We show that these two limits can provide different information about the physically attainable precision.
arXiv Detail & Related papers (2024-07-17T10:37:08Z) - Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - Fundamental bounds for parameter estimation with few measurements [0.0]
We discuss different linear (Barankin-like) conditions that can be imposed on estimators and analyze when these conditions admit an optimal estimator with finite variance.
We show that, if the number of imposed conditions is larger than the number of measurement outcomes, there generally does not exist a corresponding estimator with finite variance.
We derive an extended Cramér-Rao bound that is compatible with a finite variance in situations where the Barankin bound is undefined.
arXiv Detail & Related papers (2024-02-22T12:40:08Z) - Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation [49.67011673289242]
This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a smooth manifold.
It induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure.
arXiv Detail & Related papers (2023-11-08T15:17:13Z) - Evaluating the quantum optimal biased bound in a unitary evolution process [12.995137315679923]
We introduce two effective error bounds for biased estimators based on a unitary evolution process.
We show their estimation performance by two specific examples of the unitary evolution process.
arXiv Detail & Related papers (2023-09-09T02:15:37Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to non linear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
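The Gauss-Markov statement above has a compact numerical illustration: under heteroscedastic noise with known variances, inverse-variance weighted least squares recovers the coefficients as the best linear unbiased estimator. A minimal sketch on synthetic data (all names and values illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear model y = X beta + eps, where eps_i has known variance var_i.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, -2.0])
var = rng.uniform(0.1, 2.0, size=n)           # known noise variances
y = X @ beta + rng.normal(scale=np.sqrt(var)) # heteroscedastic noise

# Weighted least squares with inverse-variance weights: the Gauss-Markov
# theorem says this is the minimum-variance linear unbiased estimator.
W = np.diag(1.0 / var)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print(beta_wls)  # close to the true coefficients [1.0, -2.0]
```

Down-weighting noisy observations is exactly what makes WLS optimal among linear unbiased estimators; ordinary least squares remains unbiased here but has strictly larger variance.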
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral [58.434753643798224]
Divergence frontiers have been proposed as an evaluation framework for generative models.
We establish non-asymptotic bounds on the sample complexity of the plug-in estimator of divergence frontiers.
We also augment the divergence frontier framework by investigating the statistical performance of smoothed distribution estimators.
arXiv Detail & Related papers (2021-06-15T06:26:25Z) - Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z) - Distributional robustness of K-class estimators and the PULSE [4.56877715768796]
We prove that the classical K-class estimator satisfies this distributional robustness optimality by establishing a connection between K-class estimators and anchor regression.
We show that it can be computed efficiently as a data-driven simulation K-class estimator.
There are several settings including weak instrument settings, where it outperforms other estimators.
arXiv Detail & Related papers (2020-05-07T09:39:07Z) - On Low-rank Trace Regression under General Sampling Distribution [9.699586426043885]
We show that cross-validated estimators satisfy near-optimal error bounds under general assumptions.
We also show that the cross-validated estimator outperforms the theory-inspired approach to selecting the parameter.
arXiv Detail & Related papers (2019-04-18T02:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.