Adversarially Robust Estimate and Risk Analysis in Linear Regression
- URL: http://arxiv.org/abs/2012.10278v1
- Date: Fri, 18 Dec 2020 14:55:55 GMT
- Title: Adversarially Robust Estimate and Risk Analysis in Linear Regression
- Authors: Yue Xing, Ruizhi Zhang, Guang Cheng
- Abstract summary: Adversarially robust learning aims to design algorithms that are robust to small adversarial perturbations on input variables.
By discovering the statistical minimax rate of convergence of adversarially robust estimators, we emphasize the importance of incorporating model information.
We propose a straightforward two-stage adversarial learning framework, which facilitates the use of model structure information to improve adversarial robustness.
- Score: 17.931533943788335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarially robust learning aims to design algorithms that are robust to
small adversarial perturbations on input variables. Beyond the existing
studies of predictive performance on adversarial samples, our goal is to
understand
statistical properties of adversarially robust estimates and analyze
adversarial risk in the setup of linear regression models. By discovering the
statistical minimax rate of convergence of adversarially robust estimators, we
emphasize the importance of incorporating model information, e.g., sparsity, in
adversarially robust learning. Further, we reveal an explicit connection
between adversarial and standard estimates, and propose a straightforward
two-stage adversarial learning framework that facilitates the use of model
structure information to improve adversarial robustness. In theory, the
consistency of the adversarially robust estimator is proven and its Bahadur
representation is developed for statistical inference purposes. The proposed
estimator converges at a sharp rate in both the low-dimensional and sparse
scenarios. Moreover, our theory confirms two phenomena in adversarially robust
learning: adversarial robustness hurts generalization, and unlabeled data help
improve generalization. Finally, we conduct numerical simulations to verify
our theory.
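For concreteness, here is a minimal sketch (not the authors' released code) of
this kind of estimator. For a linear model under an l_inf-bounded attack, the
inner maximization has the closed form
max_{||delta||_inf <= eps} (y - (x + delta)' theta)^2
  = (|y - x' theta| + eps * ||theta||_1)^2,
so adversarial training reduces to minimizing a penalized absolute residual.
The attack norm, the Lasso warm start standing in for the paper's explicit
standard-to-adversarial mapping, and all function names are illustrative
assumptions rather than the paper's exact construction.

```python
# Minimal sketch of two-stage adversarial training for linear regression.
# Stage 1 fits a standard sparse estimate; stage 2 minimizes the closed-form
# adversarial squared loss, warm-started at the stage-1 estimate.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Lasso

def adversarial_loss(theta, X, y, eps):
    """Average adversarial squared loss under ||delta||_inf <= eps attacks."""
    resid = np.abs(y - X @ theta)
    return np.mean((resid + eps * np.abs(theta).sum()) ** 2)

def two_stage_fit(X, y, eps, alpha=0.1):
    theta0 = Lasso(alpha=alpha).fit(X, y).coef_      # stage 1: exploit sparsity
    out = minimize(adversarial_loss, theta0,         # stage 2: robustify
                   args=(X, y, eps), method="Nelder-Mead")
    return out.x

# Toy usage on a sparse ground truth.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
theta_true = np.array([1.0, -2.0, 0.0, 0.0, 0.5])
y = X @ theta_true + 0.1 * rng.standard_normal(200)
print(two_stage_fit(X, y, eps=0.1))
```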
Related papers
- Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume [18.4516572499628]
We propose a new metric termed adversarial hypervolume, assessing the robustness of deep learning models comprehensively over a range of perturbation intensities.
We adopt a novel training algorithm that enhances adversarial robustness uniformly across various perturbation intensities.
This research contributes a new measure of robustness and establishes a standard for benchmarking and assessing the resilience of current and future defensive models against adversarial threats.
arXiv Detail & Related papers (2024-03-08T07:03:18Z)
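A hedged sketch of the aggregate-robustness idea in the entry above: sweep the
perturbation budget and summarize the robust-accuracy curve by its area. The
exact adversarial-hypervolume definition may differ, and `robust_accuracy` is
a hypothetical evaluator (e.g., accuracy under a PGD attack at budget eps).

```python
# Aggregate robustness over a range of perturbation intensities: the area
# under the robust-accuracy vs. budget curve (a stand-in for the paper's
# adversarial-hypervolume metric).
import numpy as np

def aggregate_robustness(robust_accuracy, eps_grid):
    accs = np.array([robust_accuracy(eps) for eps in eps_grid])
    return np.trapz(accs, eps_grid)

# Toy usage with an accuracy curve that decays with the budget.
eps_grid = np.linspace(0.0, 0.3, 16)
print(aggregate_robustness(lambda eps: float(np.exp(-8.0 * eps)), eps_grid))
```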
- Generating Less Certain Adversarial Examples Improves Robust Generalization [22.00283527210342]
This paper revisits the robust overfitting phenomenon of adversarial training.
We argue that overconfidence in predicting adversarial examples is a potential cause.
We propose a formal definition of adversarial certainty that captures the variance of the model's predicted logits on adversarial examples.
arXiv Detail & Related papers (2023-10-06T19:06:13Z)
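Taking the summary above at face value, a hedged one-liner for adversarial
certainty as the variance of the model's predicted logits on adversarial
examples; the paper's exact formulation may differ, and `adv_logits` is a
hypothetical array of logits computed on adversarial inputs.

```python
# Adversarial certainty (hedged reading): per-example variance of logits on
# adversarial examples, averaged over the batch.
import numpy as np

def adversarial_certainty(adv_logits):
    """adv_logits: (n_examples, n_classes) logits on adversarial inputs."""
    return float(np.mean(np.var(adv_logits, axis=1)))

# Sharper (more confident) logits give a larger value on this toy input.
print(adversarial_certainty(np.array([[4.0, -1.0, -3.0], [0.2, 0.1, -0.1]])))
```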
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly robust instance-reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
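One standard way such weights arise, shown as a hedged sketch: maximizing
sum_i w_i * loss_i - lam * KL(w || uniform) over the probability simplex gives
softmax weights in the per-instance losses. Whether the paper's doubly robust
scheme uses exactly this closed form is an assumption.

```python
# Instance weights from a KL-divergence regularized objective: the maximizer
# of sum_i w_i * loss_i - lam * KL(w || uniform) is w_i proportional to
# exp(loss_i / lam), so harder (larger-loss) instances are upweighted.
import numpy as np

def kl_regularized_weights(losses, lam):
    z = (losses - losses.max()) / lam   # shift by max for numerical stability
    w = np.exp(z)
    return w / w.sum()

print(kl_regularized_weights(np.array([0.2, 1.5, 0.7]), lam=0.5))
```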
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
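A hedged sketch of the building block behind the entry above: the pinball
loss, whose population minimizer is the tau-th conditional quantile. This is
only the loss used by quantile regression, not the paper's full neural
counterfactual framework.

```python
# Pinball (quantile) loss for the tau-th conditional quantile.
import numpy as np

def pinball_loss(y, q_pred, tau):
    diff = y - q_pred
    return float(np.mean(np.maximum(tau * diff, (tau - 1.0) * diff)))

# Toy usage: loss of a constant quantile prediction at tau = 0.9.
print(pinball_loss(np.array([1.0, 2.0, 3.0]), np.array([1.5, 1.5, 1.5]), 0.9))
```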
- Bayesian Learning with Information Gain Provably Bounds Risk for a Robust Adversarial Defense [27.545466364906773]
We present a new algorithm to learn a deep neural network model robust against adversarial attacks.
Our model demonstrates significantly improved robustness, up to 20%, compared with adversarial training and Adv-BNN under PGD attacks.
arXiv Detail & Related papers (2022-12-05T03:26:08Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- On the Generalization Properties of Adversarial Training [21.79888306754263]
This paper studies the generalization performance of a generic adversarial training algorithm.
A series of numerical studies are conducted to demonstrate how the smoothness and L1 penalization help improve the adversarial robustness of models.
arXiv Detail & Related papers (2020-08-15T02:32:09Z)
- Precise Tradeoffs in Adversarial Training for Linear Regression [55.764306209771405]
We provide a precise and comprehensive understanding of the role of adversarial training in the context of linear regression with Gaussian features.
We precisely characterize the standard/robust accuracy and the corresponding tradeoff achieved by a contemporary mini-max adversarial training approach.
Our theory for adversarial training algorithms also facilitates the rigorous study of how a variety of factors (size and quality of training data, model overparametrization, etc.) affect the tradeoff between these two competing accuracies.
arXiv Detail & Related papers (2020-02-24T19:01:47Z)
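A hedged sketch of that tradeoff in linear regression: train at several
adversarial budgets using the closed-form l_2 adversarial loss,
max_{||delta||_2 <= eps} (y - (x + delta)' theta)^2
  = (|y - x' theta| + eps * ||theta||_2)^2,
and report both the standard and the robust test risk. The data-generating
process and budgets are illustrative assumptions, not the paper's exact setup.

```python
# Standard vs. robust test risk as the training budget eps grows.
import numpy as np
from scipy.optimize import minimize

def adv_loss_l2(theta, X, y, eps):
    resid = np.abs(y - X @ theta)
    return np.mean((resid + eps * np.linalg.norm(theta)) ** 2)

rng = np.random.default_rng(1)
theta_true = np.array([1.0, -1.0, 0.5, 0.0])
X, Xt = rng.standard_normal((300, 4)), rng.standard_normal((300, 4))
y = X @ theta_true + 0.2 * rng.standard_normal(300)
yt = Xt @ theta_true + 0.2 * rng.standard_normal(300)

for eps in [0.0, 0.1, 0.3]:
    fit = minimize(adv_loss_l2, np.zeros(4), args=(X, y, eps),
                   method="Nelder-Mead").x
    std_risk = np.mean((yt - Xt @ fit) ** 2)       # standard test risk
    rob_risk = adv_loss_l2(fit, Xt, yt, eps=0.3)   # robust risk, fixed budget
    print(f"train eps={eps:.1f}  standard={std_risk:.3f}  robust={rob_risk:.3f}")
```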