Robust W-GAN-Based Estimation Under Wasserstein Contamination
- URL: http://arxiv.org/abs/2101.07969v1
- Date: Wed, 20 Jan 2021 05:15:16 GMT
- Title: Robust W-GAN-Based Estimation Under Wasserstein Contamination
- Authors: Zheng Liu, Po-Ling Loh
- Abstract summary: We study several estimation problems under a Wasserstein contamination model and present computationally tractable estimators motivated by generative adversarial networks (GANs).
Specifically, we analyze properties of Wasserstein GAN-based estimators for location estimation, covariance matrix estimation, and linear regression.
Our proposed estimators are minimax optimal in many scenarios.
- Score: 8.87135311567798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust estimation is an important problem in statistics which aims at
providing a reasonable estimator when the data-generating distribution lies
within an appropriately defined ball around an uncontaminated distribution.
Although minimax rates of estimation have been established in recent years,
many existing robust estimators with provably optimal convergence rates are
also computationally intractable. In this paper, we study several estimation
problems under a Wasserstein contamination model and present computationally
tractable estimators motivated by generative adversarial networks (GANs).
Specifically, we analyze properties of Wasserstein GAN-based estimators for
location estimation, covariance matrix estimation, and linear regression and
show that our proposed estimators are minimax optimal in many scenarios.
Finally, we present numerical results which demonstrate the effectiveness of
our estimators.
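The location setting admits a compact illustration. Below is a minimal, purely illustrative PyTorch sketch of a WGAN-style location estimator in the spirit of the abstract: the generator is simply G(z) = theta + z with z drawn from N(0, I), and a small weight-clipped critic stands in for the Wasserstein-1 discrepancy. The critic architecture, clipping constant, learning rates, and iteration count are assumptions for illustration, not the authors' choices.

```python
import torch
import torch.nn as nn

def wgan_location_estimate(x, n_iters=2000, clip=0.05, lr=1e-3):
    """Estimate the location theta of contaminated data x (shape (n, d)) by
    approximately minimizing a critic-based surrogate of the Wasserstein-1
    distance between the empirical distribution of x and N(theta, I)."""
    n, d = x.shape
    theta = x.mean(dim=0).clone().requires_grad_(True)   # initialize at the naive mean
    critic = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
    opt_theta = torch.optim.Adam([theta], lr=lr)
    opt_critic = torch.optim.Adam(critic.parameters(), lr=lr)

    for _ in range(n_iters):
        fake = theta + torch.randn(n, d)                  # samples from N(theta, I)

        # Critic step: ascend E[f(x)] - E[f(theta + z)]; weight clipping keeps f roughly Lipschitz.
        loss_c = -(critic(x).mean() - critic(fake.detach()).mean())
        opt_critic.zero_grad()
        loss_c.backward()
        opt_critic.step()
        with torch.no_grad():
            for p in critic.parameters():
                p.clamp_(-clip, clip)

        # Location step: descend the same objective in theta.
        loss_g = critic(x).mean() - critic(theta + torch.randn(n, d)).mean()
        opt_theta.zero_grad()
        loss_g.backward()
        opt_theta.step()

    return theta.detach()

# Example: 10% of the points are pushed far away, a crude stand-in for
# Wasserstein contamination of N(0, I).
torch.manual_seed(0)
data = torch.randn(500, 2)
data[:50] += 8.0
print(data.mean(dim=0))                 # naive mean, pulled toward the outliers
print(wgan_location_estimate(data))     # intended to sit closer to the origin
```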
Related papers
- Doubly Robust Inference in Causal Latent Factor Models [12.116813197164047]
This article introduces a new estimator of average treatment effects under unobserved confounding in modern data-rich environments featuring large numbers of units and outcomes.
We derive finite-sample weighting guarantees and show that the error of the new estimator converges to a mean-zero Gaussian distribution at a parametric rate.
arXiv Detail & Related papers (2024-02-18T17:13:46Z)
- Statistical Barriers to Affine-equivariant Estimation [10.077727846124633]
We investigate the quantitative performance of affine-equivariant estimators for robust mean estimation.
We find that classical estimators are either quantitatively sub-optimal or lack any quantitative guarantees.
We construct a new affine-equivariant estimator which nearly matches our lower bound.
arXiv Detail & Related papers (2023-10-16T18:42:00Z)
- Wasserstein Distributionally Robust Estimation in High Dimensions: Performance Analysis and Optimal Hyperparameter Tuning [0.0]
We propose a Wasserstein distributionally robust estimation framework to estimate an unknown parameter from noisy linear measurements.
We focus on the task of analyzing the squared error performance of such estimators.
We show that the squared error can be recovered as the solution of a convex-concave optimization problem.
arXiv Detail & Related papers (2022-06-27T13:02:59Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent algorithm and provide an improved analysis under a more nuanced condition on the noise of the gradients (a minimal sketch of gradient clipping appears after this list).
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Distributionally Robust Parametric Maximum Likelihood Estimation [13.09499764232737]
We propose a distributionally robust maximum likelihood estimator that minimizes the worst-case expected log-loss uniformly over a parametric nominal distribution.
Our novel robust estimator also enjoys statistical consistency and delivers promising empirical results in both regression and classification tasks.
arXiv Detail & Related papers (2020-10-11T19:05:49Z)
- Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
arXiv Detail & Related papers (2020-06-19T22:49:42Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
- Nonparametric Estimation of the Fisher Information and Its Applications [82.00720226775964]
This paper considers the problem of estimation of the Fisher information for location from a random sample of size $n$.
An estimator proposed by Bhattacharya is revisited and improved convergence rates are derived.
A new estimator, termed a clipped estimator, is proposed.
arXiv Detail & Related papers (2020-05-07T17:21:56Z)
- Distributional robustness of K-class estimators and the PULSE [4.56877715768796]
We prove that the classical K-class estimator satisfies a distributional robustness optimality property by establishing a connection between K-class estimators and anchor regression.
We show that the PULSE can be computed efficiently as a data-driven simulation K-class estimator.
There are several settings, including weak-instrument settings, where it outperforms other estimators.
arXiv Detail & Related papers (2020-05-07T09:39:07Z)
- Minimax Optimal Estimation of KL Divergence for Continuous Distributions [56.29748742084386]
Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
One simple and effective estimator is based on the $k$ nearest neighbor distances between these samples (a sketch of such a kNN estimator appears after this list).
arXiv Detail & Related papers (2020-02-26T16:37:37Z)
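Regarding the heavy-tailed streaming entry above, the following is a minimal sketch of clipped stochastic gradient descent for streaming mean estimation; it illustrates the gradient-clipping idea under assumed step sizes and a guessed threshold rather than reproducing the cited paper's exact algorithm or analysis.

```python
import numpy as np

def clipped_sgd_mean(stream, dim, tau=5.0, lr0=1.0):
    """Streaming mean estimation via clipped SGD on the squared loss.
    Clipping each per-sample gradient at norm tau limits the influence of
    heavy-tailed observations."""
    theta = np.zeros(dim)
    for t, x in enumerate(stream, start=1):
        grad = theta - x                       # gradient of 0.5 * ||theta - x||^2
        norm = np.linalg.norm(grad)
        if norm > tau:
            grad *= tau / norm                 # clip the per-sample gradient
        theta -= (lr0 / t) * grad              # decaying step size
    return theta

# Heavy-tailed stream: Student-t with 2.5 degrees of freedom (true mean 0).
rng = np.random.default_rng(1)
stream = (rng.standard_t(df=2.5, size=3) for _ in range(20000))
print(clipped_sgd_mean(stream, dim=3))         # should land near the zero vector
```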
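Regarding the KL-divergence entry above, a generic k-nearest-neighbor plug-in estimator (in the style of Wang, Kulkarni and Verdu, 2009) can be sketched as follows; the cited paper studies minimax-optimal variants of this idea, and the choices of k and sample sizes here are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(x, y, k=1):
    """kNN plug-in estimate of D(P || Q) from samples x ~ P (n, d) and y ~ Q (m, d)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n, d = x.shape
    m = y.shape[0]

    # rho_k(i): distance from x_i to its k-th nearest neighbour within x
    # (query k+1 neighbours because the closest point is x_i itself).
    rho = cKDTree(x).query(x, k=k + 1)[0][:, k]
    # nu_k(i): distance from x_i to its k-th nearest neighbour in y.
    nu = cKDTree(y).query(x, k=k)[0]
    if k > 1:
        nu = nu[:, k - 1]

    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1.0))

# Quick check: P = N(0, 1), Q = N(1, 1) in one dimension; the true KL is 0.5.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(5000, 1))
y = rng.normal(1.0, 1.0, size=(5000, 1))
print(knn_kl_divergence(x, y, k=1))            # roughly 0.5
```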