A Manifold View of Adversarial Risk
- URL: http://arxiv.org/abs/2203.13277v1
- Date: Thu, 24 Mar 2022 18:11:21 GMT
- Title: A Manifold View of Adversarial Risk
- Authors: Wenjia Zhang, Yikai Zhang, Xiaolin Hu, Mayank Goswami, Chao Chen,
Dimitris Metaxas
- Abstract summary: We investigate two new types of adversarial risk: the normal adversarial risk, due to perturbation along the normal direction, and the in-manifold adversarial risk, due to perturbation within the manifold.
We show with a surprisingly pessimistic case that the standard adversarial risk can be nonzero even when both normal and in-manifold risks are zero.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The adversarial risk of a machine learning model has been widely studied.
Most previous works assume that the data lies in the whole ambient space. We
propose a new angle that takes the manifold assumption into consideration.
Assuming the data lies on a manifold, we investigate two new types of
adversarial risk: the normal adversarial risk, due to perturbation along the
normal direction, and the in-manifold adversarial risk, due to perturbation
within the manifold. We prove that the classic adversarial risk can be bounded
from both sides using the normal and in-manifold adversarial risks. We also
show with a surprisingly pessimistic case that the standard adversarial risk
can be nonzero even when both normal and in-manifold risks are zero. We
conclude the paper with empirical studies supporting our theoretical results.
Our results suggest the possibility of improving the robustness of a classifier
by only focusing on the normal adversarial risk.
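To make the normal/in-manifold split concrete, here is a minimal toy sketch (not the paper's construction): for data on the unit circle, a one-dimensional manifold in R^2, any perturbation at a point decomposes uniquely into a component along the normal direction and a component in the tangent space.

```python
import math

# Toy illustration (not the paper's construction): on the unit circle the
# unit normal at a point x is the radial direction, and the unit tangent is
# its 90-degree rotation. A perturbation delta splits uniquely into a normal
# part (leaving the manifold) and a tangential, in-manifold part.

def decompose_perturbation(x, delta):
    """Split delta into its normal and in-manifold (tangential) parts at x."""
    r = math.hypot(x[0], x[1])
    nx, ny = x[0] / r, x[1] / r          # unit normal (radial direction)
    tx, ty = -ny, nx                     # unit tangent (rotate 90 degrees)
    a = delta[0] * nx + delta[1] * ny    # length of the normal component
    b = delta[0] * tx + delta[1] * ty    # length of the tangential component
    return (a * nx, a * ny), (b * tx, b * ty)

d_normal, d_tangent = decompose_perturbation((1.0, 0.0), (0.3, 0.4))
# d_normal is approximately (0.3, 0.0): perturbation along the normal;
# d_tangent is approximately (0.0, 0.4): perturbation within the manifold
# (to first order); the two parts sum back to the original delta.
```

The normal and in-manifold adversarial risks in the abstract correspond to attacking with only one of these two components at a time.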
Related papers
- Data-driven decision-making under uncertainty with entropic risk measure [5.407319151576265]
The entropic risk measure is widely used in high-stakes decision making to account for tail risks associated with an uncertain loss.
To debias the empirical entropic risk estimator, we propose a strongly consistent bootstrapping procedure.
We show that cross validation methods can result in significantly higher out-of-sample risk for the insurer if the bias in validation performance is not corrected for.
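As a reference point for the quantity being estimated, the sketch below computes the plain empirical (plug-in) entropic risk, (1/theta) times the log of the sample mean of exp(theta * loss). This is the biased estimator; the paper's bootstrapping debiasing procedure is not reproduced here.

```python
import math

# Hedged sketch: the plain empirical entropic risk estimator. For theta > 0
# it upweights large (tail) losses; as theta -> 0+ it converges to the
# ordinary mean loss. The debiasing step proposed in the paper is omitted.

def entropic_risk(losses, theta):
    """(1/theta) * log(mean(exp(theta * loss))) over a sample of losses."""
    return math.log(sum(math.exp(theta * l) for l in losses) / len(losses)) / theta

losses = [1.0, 1.0, 1.0, 10.0]        # one tail event among routine losses
print(entropic_risk(losses, 0.001))   # close to the mean (3.25) for tiny theta
print(entropic_risk(losses, 1.0))     # dominated by the tail loss
```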
arXiv Detail & Related papers (2024-09-30T04:02:52Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
- Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models [9.65010022854885]
We show that adversarial risk is equivalent to the risk induced by a distributional adversarial attack under certain smoothness conditions.
To evaluate the generalization performance of the adversarial estimator, we study the adversarial excess risk.
arXiv Detail & Related papers (2023-09-02T00:51:19Z)
- Domain Generalization without Excess Empirical Risk [83.26052467843725]
A common approach is designing a data-driven surrogate penalty to capture generalization and minimize the empirical risk jointly with the penalty.
We argue that a significant failure mode of this recipe is an excess risk due to an erroneous penalty or hardness in joint optimization.
We present an approach that eliminates this problem. Instead of jointly minimizing empirical risk with the penalty, we minimize the penalty under the constraint of optimality of the empirical risk.
arXiv Detail & Related papers (2023-08-30T08:46:46Z)
- Existence and Minimax Theorems for Adversarial Surrogate Risks in Binary Classification [16.626667055542086]
Adversarial training is one of the most popular methods for training models robust to adversarial attacks.
We prove existence, regularity, and minimax theorems for adversarial surrogate risks.
arXiv Detail & Related papers (2022-06-18T03:29:49Z)
- Mitigating multiple descents: A model-agnostic framework for risk monotonization [84.6382406922369]
We develop a general framework for risk monotonization based on cross-validation.
We propose two data-driven methodologies, namely zero- and one-step, that are akin to bagging and boosting.
arXiv Detail & Related papers (2022-05-25T17:41:40Z)
- Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z)
- Two steps to risk sensitivity [4.974890682815778]
Conditional value-at-risk (CVaR) is a risk measure for modeling human and animal planning.
We adopt a conventional distributional approach to CVaR in a sequential setting and reanalyze the choices of human decision-makers.
We then consider a further critical property of risk sensitivity, namely time consistency, showing alternatives to this form of CVaR.
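For reference, a minimal empirical CVaR estimator averages the worst alpha-fraction of observed losses. This is one common discrete variant (treatments differ in how they interpolate at the quantile boundary), and the paper's sequential, distributional use of CVaR is not reproduced here.

```python
# Hedged sketch of an empirical CVaR estimator: CVaR at level alpha is the
# mean of the worst alpha-fraction of losses. Discrete variants differ at
# the quantile boundary; this one simply rounds alpha * n to pick the count.

def cvar(losses, alpha):
    """Mean of the worst round(alpha * n) losses, for 0 < alpha <= 1."""
    k = max(1, round(alpha * len(losses)))
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / len(worst)

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(cvar(losses, 0.2))  # (10 + 9) / 2 = 9.5: mean of the worst 20%
print(cvar(losses, 1.0))  # 5.5: alpha = 1 recovers the plain mean loss
```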
arXiv Detail & Related papers (2021-11-12T16:27:47Z)
- PAC$^m$-Bayes: Narrowing the Empirical Risk Gap in the Misspecified Bayesian Regime [75.19403612525811]
This work develops a multi-sample loss which can close the gap by spanning a trade-off between the two risks.
Empirical study demonstrates improvement to the predictive distribution.
arXiv Detail & Related papers (2020-10-19T16:08:34Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
- Deep Survival Machines: Fully Parametric Survival Regression and Representation Learning for Censored Data with Competing Risks [14.928328404160299]
We describe a new approach to estimating relative risks in time-to-event prediction problems with censored data.
Our approach does not require making strong assumptions of constant proportional hazard of the underlying survival distribution.
This is the first work involving fully parametric estimation of survival times with competing risks in the presence of censoring.
arXiv Detail & Related papers (2020-03-02T20:21:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.