Distributionally Robust Parametric Maximum Likelihood Estimation
- URL: http://arxiv.org/abs/2010.05321v1
- Date: Sun, 11 Oct 2020 19:05:49 GMT
- Title: Distributionally Robust Parametric Maximum Likelihood Estimation
- Authors: Viet Anh Nguyen and Xuhui Zhang and Jose Blanchet and Angelos Georghiou
- Abstract summary: We propose a distributionally robust maximum likelihood estimator that minimizes the worst-case expected log-loss uniformly over a parametric Kullback-Leibler ball around a parametric nominal distribution.
Our novel robust estimator also enjoys statistical consistency and delivers promising empirical results in both regression and classification tasks.
- Score: 13.09499764232737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the parameter estimation problem of a probabilistic generative
model prescribed using a natural exponential family of distributions. For this
problem, the typical maximum likelihood estimator usually overfits under
limited training sample size, is sensitive to noise and may perform poorly on
downstream predictive tasks. To mitigate these issues, we propose a
distributionally robust maximum likelihood estimator that minimizes the
worst-case expected log-loss uniformly over a parametric Kullback-Leibler ball
around a parametric nominal distribution. Leveraging the analytical expression
of the Kullback-Leibler divergence between two distributions in the same
natural exponential family, we show that the min-max estimation problem is
tractable in a broad setting, including the robust training of generalized
linear models. Our novel robust estimator also enjoys statistical consistency
and delivers promising empirical results in both regression and classification
tasks.
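The tractability result rests on the closed-form KL divergence between two members of the same natural exponential family: with log-partition function $A$, one has KL$(p_{\theta_1} \| p_{\theta_2}) = A(\theta_2) - A(\theta_1) - (\theta_2 - \theta_1)^\top \nabla A(\theta_1)$. A minimal numerical sketch of that identity (illustrative only, not the paper's code), checked against the direct Poisson KL formula:

```python
import numpy as np

def kl_nef(theta1, theta2, log_partition, grad_log_partition):
    """KL(p_theta1 || p_theta2) for two members of the same natural
    exponential family, via the closed form
    KL = A(theta2) - A(theta1) - (theta2 - theta1) * A'(theta1)."""
    return (log_partition(theta2) - log_partition(theta1)
            - (theta2 - theta1) * grad_log_partition(theta1))

# Poisson family: natural parameter theta = log(lam), A(theta) = exp(theta).
lam1, lam2 = 3.0, 5.0
kl = kl_nef(np.log(lam1), np.log(lam2), np.exp, np.exp)

# Direct Poisson KL formula for comparison.
kl_direct = lam1 * np.log(lam1 / lam2) - lam1 + lam2
assert abs(kl - kl_direct) < 1e-12
```

The same closed form covers Gaussian, Bernoulli, and the other families underlying generalized linear models, which is what makes the KL ball analytically convenient.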
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Doubly Robust Inference in Causal Latent Factor Models [12.116813197164047]
This article introduces a new estimator of average treatment effects under unobserved confounding in modern data-rich environments featuring large numbers of units and outcomes.
We derive finite-sample weighting guarantees, and show that the error of the new estimator converges to a mean-zero Gaussian distribution at a parametric rate.
arXiv Detail & Related papers (2024-02-18T17:13:46Z)
- Nonparametric logistic regression with deep learning [1.2509746979383698]
In nonparametric logistic regression, the Kullback-Leibler divergence could diverge easily.
Instead of analyzing the excess risk itself, it suffices to show the consistency of the maximum likelihood estimator.
As an important application, we derive the convergence rates of the NPMLE with deep neural networks.
arXiv Detail & Related papers (2024-01-23T04:31:49Z)
- Robust Gaussian Process Regression with Huber Likelihood [2.7184224088243365]
We propose a robust model in the Gaussian process framework with the likelihood of the observed data expressed as a Huber probability distribution.
The proposed model employs weights based on projection statistics to scale residuals and bound the influence of vertical outliers and bad leverage points on the latent function estimates.
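The mechanism that bounds outlier influence here is the Huber penalty: quadratic near zero, linear in the tails, so the derivative of any residual's contribution is capped. A minimal sketch of that penalty (not the paper's Gaussian process model; `delta` is the conventional tuning constant):

```python
import numpy as np

def huber(r, delta=1.345):
    """Huber penalty: quadratic for |r| <= delta, linear beyond, so the
    influence (derivative) of any residual is capped at +/- delta."""
    r = np.asarray(r, dtype=float)
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

# A residual of 4.0 contributes about 4.48 instead of the quadratic 8.0,
# so a single outlier cannot dominate the fit.
penalties = huber(np.array([0.5, 4.0]))
```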
arXiv Detail & Related papers (2023-01-19T02:59:33Z)
- Wasserstein Distributionally Robust Estimation in High Dimensions: Performance Analysis and Optimal Hyperparameter Tuning [0.0]
We propose a Wasserstein distributionally robust estimation framework to estimate an unknown parameter from noisy linear measurements.
We analyze the squared error performance of such estimators.
We show that the squared error can be recovered as the solution of a convex-concave optimization problem.
arXiv Detail & Related papers (2022-06-27T13:02:59Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$ samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
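The clipped-gradient idea can be illustrated on the simplest streaming problem, mean estimation: clip each stochastic gradient so that a single heavy-tailed sample cannot move the iterate arbitrarily far. (A generic sketch under assumed step sizes and clip level, not the paper's algorithm or analysis.)

```python
import numpy as np

def clipped_sgd_mean(stream, lr=0.1, clip=2.0):
    """Streaming mean estimate via SGD on the squared loss, with each
    stochastic gradient clipped to [-clip, clip] so heavy-tailed samples
    have bounded influence on any single update."""
    theta = 0.0
    for t, x in enumerate(stream, start=1):
        g = np.clip(theta - x, -clip, clip)  # clipped gradient of 0.5*(theta-x)**2
        theta -= (lr / np.sqrt(t)) * g       # decaying step size
    return theta

rng = np.random.default_rng(0)
# Heavy-tailed stream: Student-t (2.5 degrees of freedom) centered at 1.0.
# For a symmetric distribution, the clipped gradient still vanishes in
# expectation at the true mean, so the iterate settles near 1.0.
samples = 1.0 + rng.standard_t(df=2.5, size=20000)
est = clipped_sgd_mean(samples)
```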
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators of the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
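A doubly-robust (AIPW) estimator with two-fold cross-fitting fits in a few lines. This toy simulation is my own construction, not the paper's study: linear outcome models stand in for machine learning, and the constant propensity model is deliberately misspecified to show the double-robustness property (a correct outcome model alone keeps the estimate consistent).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 2))
e_true = 1.0 / (1.0 + np.exp(-X[:, 0]))          # true propensity
T = (rng.random(n) < e_true).astype(float)
Y = X @ np.array([1.0, -1.0]) + 2.0 * T + rng.normal(size=n)  # true ACE = 2

def fit_outcomes(Xf, Tf, Yf):
    """Linear outcome models mu_t(x), fit separately per treatment arm."""
    Z = np.column_stack([np.ones(len(Xf)), Xf])
    b1, *_ = np.linalg.lstsq(Z[Tf == 1], Yf[Tf == 1], rcond=None)
    b0, *_ = np.linalg.lstsq(Z[Tf == 0], Yf[Tf == 0], rcond=None)
    return b0, b1

# Two-fold cross-fitting: nuisances are fit on one fold, AIPW scores are
# evaluated on the other, then the roles are swapped.
fold = rng.permutation(n) % 2 == 0
scores = np.empty(n)
for mask in (fold, ~fold):
    b0, b1 = fit_outcomes(X[~mask], T[~mask], Y[~mask])
    e_hat = T[~mask].mean()   # crude constant propensity model (misspecified)
    Z = np.column_stack([np.ones(mask.sum()), X[mask]])
    mu0, mu1 = Z @ b0, Z @ b1
    scores[mask] = (mu1 - mu0
                    + T[mask] * (Y[mask] - mu1) / e_hat
                    - (1.0 - T[mask]) * (Y[mask] - mu0) / (1.0 - e_hat))
ace_hat = scores.mean()  # consistent despite the bad propensity model
```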
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
- SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
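The randomized-truncation idea behind such estimators can be illustrated on a plain infinite series: draw a random truncation level K and reweight each kept term by 1/P(K >= k), which makes the finite sum unbiased for the infinite one. A sketch assuming a geometric truncation distribution (illustrative only; SUMO applies this to an importance-sampling series for the log marginal likelihood):

```python
import numpy as np

def randomized_truncation(delta, p_stop=0.3, rng=None):
    """Single-sample unbiased estimate of S = sum_{k>=1} delta(k):
    truncate at K ~ Geometric(p_stop) and reweight term k by
    1 / P(K >= k) = 1 / (1 - p_stop)**(k - 1)."""
    rng = rng or np.random.default_rng()
    K = rng.geometric(p_stop)            # support {1, 2, ...}
    k = np.arange(1, K + 1)
    return np.sum(delta(k) / (1.0 - p_stop) ** (k - 1))

# Target series: sum_{k>=1} 0.5**k = 1.  The terms decay faster than the
# geometric tail probability, so the reweighted estimator has finite variance.
delta = lambda k: 0.5 ** k
rng = np.random.default_rng(1)
est = np.mean([randomized_truncation(delta, 0.3, rng) for _ in range(100000)])
```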
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
- Minimax Optimal Estimation of KL Divergence for Continuous Distributions [56.29748742084386]
Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
One simple and effective estimator is based on the k-nearest-neighbor distances between these samples.
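A one-dimensional version of the k-nearest-neighbor KL estimator can be sketched as follows (an illustrative O(nm) implementation of the standard Wang-Kulkarni-Verdú form, not the paper's code):

```python
import numpy as np

def knn_kl(x, y, k=1):
    """k-NN estimate of KL(P || Q) from 1-D samples x ~ P, y ~ Q:
    compare each x_i's k-th NN distance within x to its k-th NN
    distance into the y sample (dimension d = 1 here)."""
    n, m = len(x), len(y)
    dx = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(dx, np.inf)           # a point is not its own neighbor
    rho = np.sort(dx, axis=1)[:, k - 1]    # k-th NN distance within x
    nu = np.sort(np.abs(x[:, None] - y[None, :]), axis=1)[:, k - 1]
    return np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)   # P = N(0, 1)
y = rng.normal(1.0, 1.0, 1000)   # Q = N(1, 1), so the true KL is 0.5
kl_est = knn_kl(x, y, k=5)
```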
arXiv Detail & Related papers (2020-02-26T16:37:37Z)
- Asymptotic Analysis of Sampling Estimators for Randomized Numerical Linear Algebra Algorithms [43.134933182911766]
We develop an analysis to derive the distribution of RandNLA sampling estimators for the least-squares problem.
We identify optimal sampling probabilities based on the Asymptotic Mean Squared Error (AMSE) and the Expected Asymptotic Mean Squared Error (EAMSE).
Our theoretical results clarify the role of leverage in the sampling process, and our empirical results demonstrate improvements over existing methods.
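Leverage-score sampling for least squares can be sketched in a few NumPy lines (a generic illustration of the technique, not the paper's estimators or its AMSE-optimal probabilities):

```python
import numpy as np

def leverage_scores(A):
    """Row leverage scores of A: squared row norms of U from the thin SVD."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

rng = np.random.default_rng(0)
n, d, s = 5000, 10, 400
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

# Sample s rows with probability proportional to leverage and rescale each
# sampled row by 1/sqrt(s * p_i), so the sketched normal equations are
# unbiased for the full ones.
p = leverage_scores(A) / d          # leverage scores sum to rank(A) = d
idx = rng.choice(n, size=s, replace=True, p=p)
w = 1.0 / np.sqrt(s * p[idx])
x_hat, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
```

High-leverage rows are exactly the ones the summary flags as influential: dropping them distorts the solution most, so they are sampled with higher probability.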
arXiv Detail & Related papers (2020-02-24T20:34:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.