On the Hardness of Robustness Transfer: A Perspective from Rademacher
Complexity over Symmetric Difference Hypothesis Space
- URL: http://arxiv.org/abs/2302.12351v1
- Date: Thu, 23 Feb 2023 22:15:20 GMT
- Title: On the Hardness of Robustness Transfer: A Perspective from Rademacher
Complexity over Symmetric Difference Hypothesis Space
- Authors: Yuyang Deng, Nidham Gazagnadou, Junyuan Hong, Mehrdad Mahdavi,
Lingjuan Lyu
- Abstract summary: We analyze a key complexity measure that controls the cross-domain generalization: the adversarial Rademacher complexity over the symmetric difference hypothesis space.
For linear models, we show that the adversarial version of this complexity is always greater than the non-adversarial one.
Even though robust domain adaptation is provably harder, we find a positive relation between robust learning and standard domain adaptation.
- Score: 33.25614346461152
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies demonstrated that the adversarially robust learning under
$\ell_\infty$ attack is harder to generalize to different domains than standard
domain adaptation. How to transfer robustness across different domains has been
a key question in the domain adaptation field. To investigate the fundamental
difficulty behind adversarially robust domain adaptation (or robustness
transfer), we propose to analyze a key complexity measure that controls the
cross-domain generalization: the adversarial Rademacher complexity over {\em
symmetric difference hypothesis space} $\mathcal{H} \Delta \mathcal{H}$. For
linear models, we show that the adversarial version of this complexity is always
greater than its non-adversarial counterpart, which reveals the intrinsic hardness of
adversarially robust domain adaptation. We also establish upper bounds on this
complexity measure. Then we extend them to the ReLU neural network class by
upper bounding the adversarial Rademacher complexity in the binary
classification setting. Finally, even though robust domain adaptation is
provably harder, we find a positive relation between robust learning and
standard domain adaptation. We explain \emph{how adversarial training helps
domain adaptation in terms of standard risk}. We believe our results initiate
the study of the generalization theory of adversarially robust domain
adaptation, and could shed light on distributed adversarially robust learning
from heterogeneous sources, e.g., the federated learning scenario.
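For readers unfamiliar with the notation, the complexity measure at the center of the abstract can be written as follows. This is a standard reconstruction from domain-adaptation theory, not copied from the paper; the paper's exact definitions may differ in convention:

```latex
% Symmetric difference hypothesis space for binary hypotheses h, h' in H:
\mathcal{H} \Delta \mathcal{H}
  := \bigl\{\, x \mapsto \mathbb{1}[\,h(x) \neq h'(x)\,] \;:\; h, h' \in \mathcal{H} \,\bigr\}

% Empirical Rademacher complexity over a sample S = (x_1, \dots, x_n),
% with i.i.d. Rademacher signs \sigma_i \in \{-1, +1\}:
\widehat{\mathfrak{R}}_S(\mathcal{H} \Delta \mathcal{H})
  = \mathbb{E}_{\sigma}\!\left[ \sup_{g \in \mathcal{H}\Delta\mathcal{H}}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, g(x_i) \right]

% One common convention for the adversarial variant replaces g(x_i) by its
% worst case over an \ell_\infty ball of radius \epsilon:
\widetilde{\mathfrak{R}}_S(\mathcal{H} \Delta \mathcal{H})
  = \mathbb{E}_{\sigma}\!\left[ \sup_{g \in \mathcal{H}\Delta\mathcal{H}}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_i
      \sup_{\lVert \delta_i \rVert_\infty \le \epsilon} g(x_i + \delta_i) \right]
```

The paper's hardness result for linear models says that the adversarial quantity dominates the standard one, so the usual $\mathcal{H}\Delta\mathcal{H}$-based domain-adaptation bounds become uniformly looser under $\ell_\infty$ attack.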
Related papers
- On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds [11.30047438005394]
This work investigates the question of how to choose the regularization norm $\lVert \cdot \rVert$ in the context of high-dimensional adversarial training for binary classification.
We quantitatively characterize the relationship between perturbation size and the optimal choice of $\lVert \cdot \rVert$, confirming the intuition that, in the data-scarce regime, the type of regularization becomes increasingly important for adversarial training as perturbations grow in size.
arXiv Detail & Related papers (2024-10-21T14:53:12Z)
- Boosting Adversarial Training via Fisher-Rao Norm-based Regularization [9.975998980413301]
We propose a novel regularization framework, called Logit-Oriented Adversarial Training (LOAT), which can mitigate the trade-off between robustness and accuracy.
Our experiments demonstrate that the proposed regularization strategy can boost the performance of the prevalent adversarial training algorithms.
arXiv Detail & Related papers (2024-03-26T09:22:37Z)
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments show the great performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- Mapping conditional distributions for domain adaptation under generalized target shift [0.0]
We consider the problem of unsupervised domain adaptation (UDA) between a source and a target domain under conditional and label shift, a.k.a. Generalized Target Shift (GeTarS).
Recent approaches learn domain-invariant representations, yet they have practical limitations and rely on strong assumptions that may not hold in practice.
In this paper, we explore a novel and general approach to align pretrained representations, which circumvents existing drawbacks.
arXiv Detail & Related papers (2021-10-26T14:25:07Z)
- A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z)
- Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation captures a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective [104.67295710363679]
In the classical multi-armed bandit problem, instance-dependent algorithms attain improved performance on "easy" problems with a gap between the best and second-best arm.
We introduce a family of complexity measures that are both sufficient and necessary to obtain instance-dependent regret bounds.
We then introduce new oracle-efficient algorithms which adapt to the gap whenever possible, while also attaining the minimax rate in the worst case.
arXiv Detail & Related papers (2020-10-07T01:33:06Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
- Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks [45.06091849856641]
We give upper and lower bounds for the adversarial empirical Rademacher complexity of linear hypotheses.
We extend our analysis to provide Rademacher complexity lower and upper bounds for a single ReLU unit.
Finally, we give adversarial Rademacher complexity bounds for feed-forward neural networks with one hidden layer.
arXiv Detail & Related papers (2020-04-28T15:55:16Z)
- Unsupervised Domain Adaptation via Discriminative Manifold Embedding and Alignment [23.72562139715191]
Unsupervised domain adaptation is effective in leveraging the rich information from the source domain to the unsupervised target domain.
Hard-assigned pseudo-labels on the target domain risk distorting the intrinsic data structure.
A consistent manifold learning framework is proposed to achieve transferability and discriminability consistently.
arXiv Detail & Related papers (2020-02-20T11:06:41Z)
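Several of the papers above bound the (adversarial) Rademacher complexity of linear hypotheses. For the standard (non-adversarial) case with an $\ell_2$-bounded weight vector, the quantity can be estimated by Monte Carlo because the supremum has a closed form. A minimal sketch, assuming the class $\{x \mapsto \langle w, x\rangle : \lVert w\rVert_2 \le 1\}$ (the function name and sample sizes below are illustrative, not from any of the papers):

```python
import numpy as np

def empirical_rademacher_linear(X, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of the
    linear class {x -> <w, x> : ||w||_2 <= 1} on the sample X (shape n x d).

    For this class the supremum over the unit ball is attained in closed form:
        sup_{||w||_2 <= 1} (1/n) sum_i sigma_i <w, x_i>
            = || (1/n) sum_i sigma_i x_i ||_2
    so each Rademacher draw reduces to a single vector norm.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # i.i.d. Rademacher signs, one row per Monte Carlo draw
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    # (1/n) sum_i sigma_i x_i for every draw at once: shape (n_draws, d)
    means = sigma @ X / n
    # average the closed-form suprema over the draws
    return np.linalg.norm(means, axis=1).mean()

# Toy usage: for i.i.d. Gaussian data the estimate decays roughly like
# sqrt(E||x||^2 / n), i.e. it shrinks as the sample grows.
X = np.random.default_rng(1).normal(size=(200, 5))
est = empirical_rademacher_linear(X)
```

The adversarial version has no such closed form in general (the $\ell_\infty$ perturbation couples the supremum over $w$ with an $\ell_1$-norm penalty term), which is one way to see why the adversarial complexity is the harder object to control.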
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.