Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models
- URL: http://arxiv.org/abs/2309.00771v1
- Date: Sat, 2 Sep 2023 00:51:19 GMT
- Title: Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models
- Authors: Changyu Liu, Yuling Jiao, Junhui Wang, and Jian Huang
- Abstract summary: We show that adversarial risk is equivalent to the risk induced by a distributional adversarial attack under certain smoothness conditions.
To evaluate the generalization performance of the adversarial estimator, we study the adversarial excess risk.
- Score: 9.65010022854885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a general approach to evaluating the performance of robust
estimators based on adversarial losses under misspecified models. We first show
that adversarial risk is equivalent to the risk induced by a distributional
adversarial attack under certain smoothness conditions. This ensures that the
adversarial training procedure is well-defined. To evaluate the generalization
performance of the adversarial estimator, we study the adversarial excess risk.
Our proposed analysis method includes investigations on both generalization
error and approximation error. We then establish non-asymptotic upper bounds
for the adversarial excess risk associated with Lipschitz loss functions. In
addition, we apply our general results to adversarial training for
classification and regression problems. For the quadratic loss in nonparametric
regression, we show that the adversarial excess risk bound can be improved over
those for a general loss.
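To make the central objects concrete, here is a minimal, self-contained Python sketch (not from the paper) of the empirical adversarial risk under a quadratic loss. The predictor, the sample data, and the grid-search approximation of the inner maximization over the epsilon-ball are illustrative assumptions; the paper works with the population risk and general Lipschitz losses.

```python
def quadratic_loss(pred, y):
    """Quadratic loss, the case where the paper improves the excess risk bound."""
    return (pred - y) ** 2

def adversarial_loss(f, x, y, eps, grid=101):
    """Worst-case loss over perturbations delta in [-eps, eps].

    The inner maximization is approximated here by a 1-D grid search;
    this is an illustrative stand-in, not the paper's method.
    """
    deltas = [-eps + 2 * eps * i / (grid - 1) for i in range(grid)]
    return max(quadratic_loss(f(x + d), y) for d in deltas)

def empirical_adversarial_risk(f, data, eps):
    """Average worst-case loss over the sample: the adversarial training objective."""
    return sum(adversarial_loss(f, x, y, eps) for x, y in data) / len(data)

# Hypothetical example: a linear predictor f(x) = 2x on three noisy points.
f = lambda x: 2.0 * x
data = [(0.0, 0.1), (1.0, 2.0), (2.0, 3.9)]
standard = sum(quadratic_loss(f(x), y) for x, y in data) / len(data)
adversarial = empirical_adversarial_risk(f, data, eps=0.1)
assert adversarial >= standard  # adversarial risk dominates the standard risk
```

The adversarial excess risk studied in the paper is then the gap between the adversarial risk of the learned estimator and the smallest achievable adversarial risk over the (possibly misspecified) model class.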
Related papers
- Data-driven decision-making under uncertainty with entropic risk measure [5.407319151576265]
The entropic risk measure is widely used in high-stakes decision making to account for tail risks associated with an uncertain loss.
To debias the empirical entropic risk estimator, we propose a strongly consistent bootstrapping procedure.
We show that cross validation methods can result in significantly higher out-of-sample risk for the insurer if the bias in validation performance is not corrected for.
arXiv Detail & Related papers (2024-09-30T04:02:52Z)
- Error Bounds of Supervised Classification from Information-Theoretic Perspective [0.0]
We explore bounds on the expected risk when using deep neural networks for supervised classification from an information theoretic perspective.
We introduce model risk and fitting error, which are derived from further decomposing the empirical risk.
arXiv Detail & Related papers (2024-06-07T01:07:35Z)
- Predictive Uncertainty Quantification via Risk Decompositions for Strictly Proper Scoring Rules [7.0549244915538765]
Uncertainty quantification in predictive modeling often relies on ad hoc methods.
This paper introduces a theoretical approach to understanding uncertainty through statistical risks.
We show how to split pointwise risk into Bayes risk and excess risk.
arXiv Detail & Related papers (2024-02-16T14:40:22Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Domain Generalization without Excess Empirical Risk [83.26052467843725]
A common approach is designing a data-driven surrogate penalty to capture generalization and minimize the empirical risk jointly with the penalty.
We argue that a significant failure mode of this recipe is an excess risk due to an erroneous penalty or hardness in joint optimization.
We present an approach that eliminates this problem. Instead of jointly minimizing empirical risk with the penalty, we minimize the penalty under the constraint of optimality of the empirical risk.
arXiv Detail & Related papers (2023-08-30T08:46:46Z)
- On the Importance of Gradient Norm in PAC-Bayesian Bounds [92.82627080794491]
We propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities.
We empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
arXiv Detail & Related papers (2022-10-12T12:49:20Z)
- Mitigating multiple descents: A model-agnostic framework for risk monotonization [84.6382406922369]
We develop a general framework for risk monotonization based on cross-validation.
We propose two data-driven methodologies, namely zero- and one-step, that are akin to bagging and boosting.
arXiv Detail & Related papers (2022-05-25T17:41:40Z)
- A Manifold View of Adversarial Risk [23.011667845523267]
We investigate two new types of adversarial risk, the normal adversarial risk due to perturbation along normal direction, and the in-manifold adversarial risk due to perturbation within the manifold.
We show with a surprisingly pessimistic case that the standard adversarial risk can be nonzero even when both normal and in-manifold risks are zero.
arXiv Detail & Related papers (2022-03-24T18:11:21Z)
- A Full Characterization of Excess Risk via Empirical Risk Landscape [8.797852602680445]
In this paper, we provide a unified analysis of the risk of a model trained by a proper algorithm with both smooth convex and non-convex loss functions.
arXiv Detail & Related papers (2020-12-04T08:24:50Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
- Orthogonal Statistical Learning [49.55515683387805]
We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk depends on an unknown nuisance parameter.
We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order.
arXiv Detail & Related papers (2019-01-25T02:21:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.