Asymptotic normality of robust risk minimizers
- URL: http://arxiv.org/abs/2004.02328v4
- Date: Tue, 30 May 2023 19:32:14 GMT
- Title: Asymptotic normality of robust risk minimizers
- Authors: Stanislav Minsker
- Abstract summary: This paper investigates properties of algorithms that can be viewed as robust analogues of classical empirical risk minimization.
We show that for a wide class of parametric problems, minimizers of the appropriately defined robust proxy of risk converge to the minimizers of the true risk at the same rate.
- Score: 2.0432586732993374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates asymptotic properties of algorithms that can be
viewed as robust analogues of the classical empirical risk minimization. These
strategies are based on replacing the usual empirical average by a robust proxy
of the mean, such as (a version of) the median-of-means estimator. It is well
known by now that the excess risk of the resulting estimators often converges to
zero at optimal rates under much weaker assumptions than those required by
their "classical" counterparts. However, less is known about the asymptotic
properties of the estimators themselves, for instance, whether robust analogues
of the maximum likelihood estimators are asymptotically efficient. We make a
step towards answering these questions and show that for a wide class of
parametric problems, minimizers of the appropriately defined robust proxy of
the risk converge to the minimizers of the true risk at the same rate, and
often have the same asymptotic variance, as the estimators obtained by
minimizing the usual empirical risk.
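The median-of-means proxy mentioned in the abstract can be sketched as follows. This is a minimal illustration of the generic estimator, not the paper's exact construction; the number of blocks `k`, the random seed, and the test data are all illustrative choices.

```python
import numpy as np

def median_of_means(x, k):
    """Median-of-means estimate of the mean of sample x using k blocks.

    The sample is split into k disjoint blocks; the estimate is the
    median of the per-block empirical means, which is resistant to
    heavy tails and a small number of gross outliers.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if not 1 <= k <= n:
        raise ValueError("k must be between 1 and len(x)")
    rng = np.random.default_rng(0)  # shuffle so blocks are exchangeable
    blocks = np.array_split(rng.permutation(x), k)
    return float(np.median([b.mean() for b in blocks]))

# Heavy-tailed sample (Student t with 2 degrees of freedom, true mean 0)
# plus one gross outlier: the plain average is dragged away, while the
# median-of-means stays near 0 because the outlier corrupts only one block.
sample = np.concatenate(
    [np.random.default_rng(1).standard_t(df=2, size=500), [1e4]]
)
print(sample.mean())                  # badly biased by the outlier
print(median_of_means(sample, k=20))  # close to the true mean 0
```

In the robust ERM strategies discussed above, an estimator of this type replaces the empirical average of the losses before minimizing over the parameter.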
Related papers
- Data-driven decision-making under uncertainty with entropic risk measure [5.407319151576265]
The entropic risk measure is widely used in high-stakes decision making to account for tail risks associated with an uncertain loss.
To debias the empirical entropic risk estimator, we propose a strongly consistent bootstrapping procedure.
We show that cross validation methods can result in significantly higher out-of-sample risk for the insurer if the bias in validation performance is not corrected for.
arXiv Detail & Related papers (2024-09-30T04:02:52Z)
- Empirical Risk Minimization for Losses without Variance [26.30435936379624]
This paper considers an empirical risk minimization problem in a heavy-tailed setting, where the data do not have finite variance but only a finite $p$-th moment with $p \in (1,2)$.
Instead of using an estimation procedure based on truncated observations, we choose the minimizer of robustly estimated risk values.
These risk values can be robustly estimated using Catoni's method (Catoni, 2012).
arXiv Detail & Related papers (2023-09-07T16:14:00Z)
- Domain Generalization without Excess Empirical Risk [83.26052467843725]
A common approach is designing a data-driven surrogate penalty to capture generalization and minimize the empirical risk jointly with the penalty.
We argue that a significant failure mode of this recipe is an excess risk due to an erroneous penalty or hardness in joint optimization.
We present an approach that eliminates this problem. Instead of jointly minimizing empirical risk with the penalty, we minimize the penalty under the constraint of optimality of the empirical risk.
arXiv Detail & Related papers (2023-08-30T08:46:46Z)
- On the Importance of Gradient Norm in PAC-Bayesian Bounds [92.82627080794491]
We propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities.
We empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
arXiv Detail & Related papers (2022-10-12T12:49:20Z)
- Mitigating multiple descents: A model-agnostic framework for risk monotonization [84.6382406922369]
We develop a general framework for risk monotonization based on cross-validation.
We propose two data-driven methodologies, namely zero- and one-step, that are akin to bagging and boosting.
arXiv Detail & Related papers (2022-05-25T17:41:40Z)
- Empirical Risk Minimization with Relative Entropy Regularization: Optimality and Sensitivity Analysis [7.953455469099826]
The sensitivity of the expected empirical risk to deviations from the solution of the ERM-RER problem is studied.
The expectation of the sensitivity is upper bounded, up to a constant factor, by the square root of the lautum information between the models and the datasets.
arXiv Detail & Related papers (2022-02-09T10:55:14Z)
- Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Minimax Off-Policy Evaluation for Multi-Armed Bandits [58.7013651350436]
We study the problem of off-policy evaluation in the multi-armed bandit model with bounded rewards.
We develop minimax rate-optimal procedures under three settings.
arXiv Detail & Related papers (2021-01-19T18:55:29Z)
- Risk of the Least Squares Minimum Norm Estimator under the Spike Covariance Model [0.0]
We study the risk of the minimum norm linear least squares estimator when the number of parameters $d$ depends on $n$ and $\frac{d}{n} \rightarrow \infty$.
We show that in this setting the risk of the minimum norm least squares estimator vanishes in comparison to the risk of the null estimator.
arXiv Detail & Related papers (2019-12-31T16:58:42Z)
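Catoni's method, cited in the "Empirical Risk Minimization for Losses without Variance" entry above, replaces the empirical mean by the root of a soft-truncated score equation. The sketch below is illustrative only: the influence function is one standard choice, and the default scale `alpha` and the bisection solver are assumptions, not the papers' exact constructions.

```python
import numpy as np

def catoni_mean(x, alpha=None, tol=1e-8):
    """Catoni-style robust mean: the root theta of
    sum_i psi(alpha * (x_i - theta)) = 0,
    where psi grows only logarithmically, damping extreme observations."""
    x = np.asarray(x, dtype=float)

    def psi(t):
        # Soft-truncated influence function: psi(t) = sign(t) * log(1 + |t| + t^2/2)
        return np.sign(t) * np.log1p(np.abs(t) + 0.5 * t * t)

    if alpha is None:
        # Illustrative scale choice; the theory picks alpha from a
        # variance (or p-th moment) bound and the confidence level.
        alpha = 1.0 / (np.std(x) * np.sqrt(len(x)) + 1e-12)

    def score(theta):
        return psi(alpha * (x - theta)).sum()

    # score(theta) is strictly decreasing in theta, so bisect on [min, max].
    lo, hi = x.min(), x.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because `psi` is nearly linear for small arguments and logarithmic for large ones, the estimator behaves like the sample mean on clean data while a single large observation moves it only slightly.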
This list is automatically generated from the titles and abstracts of the papers in this site.