Uncertainty Principles in Risk-Aware Statistical Estimation
- URL: http://arxiv.org/abs/2104.14283v1
- Date: Thu, 29 Apr 2021 12:06:53 GMT
- Title: Uncertainty Principles in Risk-Aware Statistical Estimation
- Authors: Nikolas P. Koumpis and Dionysios S. Kalogerias
- Abstract summary: We present a new uncertainty principle for risk-aware statistical estimation.
It effectively quantifies the inherent trade-off between mean squared error ($\mse$) and risk.
- Score: 4.721069729610892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new uncertainty principle for risk-aware statistical estimation,
effectively quantifying the inherent trade-off between mean squared error
($\mse$) and risk, the latter measured by the associated average predictive
squared error variance ($\sev$), for every admissible estimator of choice. Our
uncertainty principle has a familiar form and resembles fundamental and
classical results arising in several other areas, such as the Heisenberg
principle in statistical and quantum mechanics, and the Gabor limit (time-scale
trade-offs) in harmonic analysis. In particular, we prove that, provided a
joint generative model of states and observables, the product between $\mse$
and $\sev$ is bounded from below by a computable model-dependent constant,
which is explicitly related to the Pareto frontier of a recently studied
$\sev$-constrained minimum $\mse$ (MMSE) estimation problem. Further, we show
that the aforementioned constant is inherently connected to an intuitive new
and rigorously topologically grounded statistical measure of distribution
skewness in multiple dimensions, consistent with Pearson's moment coefficient
of skewness for variables on the line. Our results are also illustrated via
numerical simulations.
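In schematic form, with notation assumed here rather than quoted from the paper, the principle reads like its classical counterparts: for every admissible estimator $\hat{X}(\cdot)$,

$$ \mse(\hat{X}) \cdot \sev(\hat{X}) \;\ge\; C, $$

where $C > 0$ is the computable model-dependent constant, in direct analogy with the Heisenberg inequality $\sigma_x \sigma_p \ge \hbar/2$ and the Gabor limit $\sigma_t \sigma_\omega \ge 1/2$.

The Monte Carlo sketch below (our own construction, not the paper's) evaluates both factors for a toy conjugate Gaussian model, sweeping linear estimators $\hat{X} = \alpha Y$ and reading $\sev$ as the average conditional variance of the squared error, $\mathbb{E}[\operatorname{Var}((X - \hat{X})^2 \mid Y)]$, one natural reading of the abstract's definition:

```python
import numpy as np

# Toy conjugate model (an illustrative assumption, not the paper's setup):
# X ~ N(0, 1), Y = X + N(0, sigma2). Then X | Y = y is N(w * y, v), so the
# conditional moments of the squared error are closed-form Gaussian identities:
# E[(X - a)^2 | y] = d^2 + v and Var[(X - a)^2 | y] = 2 v^2 + 4 v d^2,
# where d = w * y - a.
sigma2 = 0.25
w = 1.0 / (1.0 + sigma2)      # posterior mean weight: E[X | Y] = w * Y
v = sigma2 / (1.0 + sigma2)   # posterior variance: Var(X | Y)

rng = np.random.default_rng(0)
y = rng.normal(scale=np.sqrt(1.0 + sigma2), size=200_000)  # marginal of Y

for alpha in (0.5, 0.7, w, 0.9, 1.1):
    d2 = (w - alpha) ** 2 * y ** 2          # squared gap between E[X|Y] and alpha*Y
    mse = np.mean(d2 + v)                   # E[(X - alpha*Y)^2]
    sev = np.mean(2 * v ** 2 + 4 * v * d2)  # E[Var((X - alpha*Y)^2 | Y)]
    print(f"alpha={alpha:.2f}  mse={mse:.4f}  sev={sev:.4f}  mse*sev={mse * sev:.5f}")
```

In this zero-skew Gaussian toy model the MMSE choice $\alpha = w$ happens to minimize both factors at once, so the product's floor is attained there; the paper's link between the constant and multivariate skewness suggests that genuine tension between $\mse$ and $\sev$ emerges for skewed generative models.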
Related papers
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure by testing a hypothesis about the value of the conditional variance at a given point.
Unlike existing methods, the proposed procedure accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor, as sketched below.
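A minimal sketch of such a rule, under assumptions of our own (a kNN variance estimate with a Gaussian-style standard error; the paper's actual test is more refined):

```python
import numpy as np

def local_variance(x0, X, residuals, k=50):
    # Hypothetical helper: crude kNN estimate of the conditional variance at a
    # query point x0, from 1-D training inputs X and held-out residuals, with
    # a plug-in standard error from the ~chi-square fluctuation of a variance
    # built on k points: SE(var_hat) ~= var_hat * sqrt(2 / k).
    idx = np.argsort(np.abs(X - x0))[:k]
    var_hat = np.mean(residuals[idx] ** 2)
    var_se = var_hat * np.sqrt(2.0 / k)
    return var_hat, var_se

def abstain(x0, X, residuals, threshold, z=1.645):
    # Abstain when a one-sided upper confidence bound on the conditional
    # variance exceeds the threshold; using the bound instead of var_hat alone
    # is what accounts for the variance predictor's own uncertainty.
    var_hat, var_se = local_variance(x0, X, residuals)
    return var_hat + z * var_se > threshold
```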
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- On the Variance, Admissibility, and Stability of Empirical Risk Minimization [80.26309576810844]
Empirical Risk Minimization (ERM) with squared loss may attain minimax suboptimal error rates.
We show that under mild assumptions, the suboptimality of ERM must be due to large bias rather than variance.
We also show that our estimates imply stability of ERM, complementing the main result of Caponnetto and Rakhlin (2006) for non-Donsker classes.
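The claim sits on the standard decomposition of the risk of an estimator $\hat f$ around the target $f^*$ (in any $L_2$-type norm):

$$ \mathbb{E}\,\|\hat f - f^*\|^2 \;=\; \underbrace{\|\mathbb{E}\hat f - f^*\|^2}_{\text{squared bias}} \;+\; \underbrace{\mathbb{E}\,\|\hat f - \mathbb{E}\hat f\|^2}_{\text{variance}}, $$

so once the variance term is shown to be small, any minimax suboptimality of ERM must sit in the bias term.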
arXiv Detail & Related papers (2023-05-29T15:25:48Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is a consistent estimate of the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Excess risk analysis for epistemic uncertainty with application to variational inference [110.4676591819618]
We present a novel EU analysis in the frequentist setting, where data is generated from an unknown distribution.
We show a relation between the generalization ability and the widely used EU measurements, such as the variance and entropy of the predictive distribution.
We propose a new variational inference scheme that directly controls prediction and EU evaluation performance based on PAC-Bayesian theory.
arXiv Detail & Related papers (2022-06-02T12:12:24Z)
- Probabilistic learning inference of boundary value problem with uncertainties based on Kullback-Leibler divergence under implicit constraints [0.0]
We present a general methodology for probabilistic learning inference that estimates a posterior probability model for a boundary value problem from a prior probability model.
A statistical surrogate model of the implicit mapping, which represents the constraints, is introduced.
In the second part, an application is presented to illustrate the proposed theory; it is also, in itself, a contribution to the three-dimensional homogenization of heterogeneous linear elastic media.
arXiv Detail & Related papers (2022-02-10T16:00:10Z)
- On Well-posedness and Minimax Optimal Rates of Nonparametric Q-function Estimation in Off-policy Evaluation [1.575865518040625]
We study the off-policy evaluation problem in an infinite-horizon Markov decision process with continuous states and actions.
We recast $Q$-function estimation as a special form of the nonparametric instrumental variables (NPIV) estimation problem.
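Concretely, and in notation assumed here, the Bellman equation for $Q^{\pi}$ becomes a conditional moment restriction in which the observed state-action pair plays the role of the instrument:

$$ \mathbb{E}\!\left[ R + \gamma \int_{\mathcal{A}} Q^{\pi}(S', a')\, \pi(da' \mid S') - Q^{\pi}(S, A) \;\middle|\; S, A \right] = 0, $$

which has the NPIV structure of an unknown function identified only through a conditional moment restriction on the instrument $Z = (S, A)$.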
arXiv Detail & Related papers (2022-01-17T01:09:38Z)
- Non asymptotic estimation lower bounds for LTI state space models with Cramér-Rao and van Trees [1.14219428942199]
We study the estimation problem for linear time-invariant (LTI) state-space models with Gaussian excitation of an unknown covariance.
We provide non-asymptotic lower bounds for the expected estimation error and the mean square estimation risk of the least squares estimator.
Our results extend and improve existing lower bounds, yielding lower bounds in expectation on the mean square estimation risk.
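For reference, the classical scalar van Trees (Bayesian Cramér-Rao) inequality underlying this line of work states that, for a prior $\pi$ on the parameter $\Theta$ and Fisher information $I(\theta)$,

$$ \mathbb{E}\big[(\widehat{\theta}(X) - \Theta)^2\big] \;\ge\; \frac{1}{\mathbb{E}_{\pi}[I(\Theta)] + \mathcal{I}(\pi)}, \qquad \mathcal{I}(\pi) = \int \frac{\pi'(\theta)^2}{\pi(\theta)}\, d\theta, $$

with the expectation on the left taken over both $\Theta \sim \pi$ and the data $X$.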
arXiv Detail & Related papers (2021-09-17T15:00:25Z)
- Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral [58.434753643798224]
Divergence frontiers have been proposed as an evaluation framework for generative models.
We establish non-asymptotic bounds on the sample complexity of the plug-in estimator of divergence frontiers.
We also augment the divergence frontier framework by investigating the statistical performance of smoothed distribution estimators.
arXiv Detail & Related papers (2021-06-15T06:26:25Z)
- Distributionally Robust Parametric Maximum Likelihood Estimation [13.09499764232737]
We propose a distributionally robust maximum likelihood estimator that minimizes the worst-case expected log-loss uniformly over a parametric nominal distribution.
Our novel robust estimator also enjoys statistical consistency and delivers promising empirical results in both regression and classification tasks.
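In generic form (ambiguity-set notation assumed here, not taken from the paper), such an estimator solves a minimax program over a discrepancy ball around a nominal parametric fit $P_{\widehat{\theta}_0}$:

$$ \widehat{\theta} \;\in\; \arg\min_{\theta \in \Theta} \; \sup_{Q \in \mathcal{B}_{\varepsilon}(P_{\widehat{\theta}_0})} \; \mathbb{E}_{Q}\big[-\log p_{\theta}(X)\big], $$

so the fitted model hedges against every distribution within radius $\varepsilon$ of the nominal one.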
arXiv Detail & Related papers (2020-10-11T19:05:49Z)
- On lower bounds for the bias-variance trade-off [0.0]
It is a common phenomenon that for high-dimensional statistical models, rate-optimal estimators balance squared bias and variance.
We propose a general strategy to obtain lower bounds on the variance of any estimator with bias smaller than a prespecified bound.
This shows to what extent the bias-variance trade-off is unavoidable and allows one to quantify the loss of performance for methods that do not obey it.
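A classical special case of such a bound: for a scalar parameter with Fisher information $I(\theta)$ and an estimator with bias $b(\theta) = \mathbb{E}_{\theta}[\widehat{\theta}\,] - \theta$, the biased Cramér-Rao inequality gives

$$ \operatorname{Var}_{\theta}\big(\widehat{\theta}\,\big) \;\ge\; \frac{\big(1 + b'(\theta)\big)^2}{I(\theta)}, $$

so constraining the bias (and hence $b'$) to be small keeps the variance pinned near the unbiased floor $1/I(\theta)$.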
arXiv Detail & Related papers (2020-05-30T14:07:43Z)
- GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
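The correction idea, in notation assumed here: with off-policy data drawn from a distribution $p$ and $\mu^{\pi}$ the stationary distribution induced by the target policy, estimating the ratio $\tau$ lets stationary expectations be computed directly from the data:

$$ \tau(s, a) = \frac{\mu^{\pi}(s, a)}{p(s, a)}, \qquad \widehat{\rho}^{\pi} = \mathbb{E}_{(s,a) \sim p}\big[\tau(s, a)\, r(s, a)\big]. $$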
arXiv Detail & Related papers (2020-02-21T00:27:52Z)