Minimax rates of convergence for nonparametric regression under adversarial attacks
- URL: http://arxiv.org/abs/2410.09402v1
- Date: Sat, 12 Oct 2024 07:11:38 GMT
- Title: Minimax rates of convergence for nonparametric regression under adversarial attacks
- Authors: Jingfu Peng, Yuhong Yang
- Abstract summary: We theoretically analyse the limits of robustness against adversarial attacks in a nonparametric regression setting.
Our work reveals that the minimax rate under adversarial attacks on the input is the same as the sum of two terms.
- Score: 3.244945627960733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research shows the susceptibility of machine learning models to adversarial attacks, wherein minor but maliciously chosen perturbations of the input can significantly degrade model performance. In this paper, we theoretically analyse the limits of robustness against such adversarial attacks in a nonparametric regression setting, by examining the minimax rates of convergence in an adversarial sup-norm. Our work reveals that the minimax rate under adversarial attacks on the input is the same as the sum of two terms: one represents the minimax rate in the standard setting without adversarial attacks, and the other reflects the maximum deviation of the true regression function value within the target function class when subjected to the input perturbations. The optimal rates under the adversarial setup can be achieved by a plug-in procedure constructed from a minimax optimal estimator in the corresponding standard setting. Two specific examples are given to illustrate the established minimax results.
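For concreteness, the two-term decomposition can be written schematically as below; this is a hedged sketch in generic notation (the function class, the attack radius delta_n, the standard rate r_n, and the modulus omega are illustrative placeholders, not the paper's exact statement).

```latex
% Schematic two-term minimax rate under input attacks of radius \delta_n.
% r_n(\mathcal{F}): standard (attack-free) minimax rate for the class
% \mathcal{F}; \omega_{\mathcal{F}}(\delta): maximal deviation of a
% function in the class under input perturbations of size at most \delta.
\[
  \inf_{\hat f}\, \sup_{f \in \mathcal{F}}\,
  \mathbb{E}\, \sup_{x}\, \sup_{\|x' - x\| \le \delta_n}
  \bigl| \hat f(x') - f(x) \bigr|
  \;\asymp\;
  r_n(\mathcal{F}) + \omega_{\mathcal{F}}(\delta_n),
  \quad
  \omega_{\mathcal{F}}(\delta) :=
  \sup_{f \in \mathcal{F}}\, \sup_{\|x' - x\| \le \delta}
  \bigl| f(x') - f(x) \bigr| .
\]
```

The plug-in procedure mentioned above then amounts to evaluating a standard minimax-optimal estimator at the (possibly perturbed) input, which matches both terms of the rate.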
Related papers
- A Universal Class of Sharpness-Aware Minimization Algorithms [57.29207151446387]
We introduce a new class of sharpness measures, leading to new sharpness-aware objective functions.
We prove that these measures are universally expressive, allowing any function of the training loss Hessian matrix to be represented by choosing appropriate hyperparameters.
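As background for this entry, here is a minimal numpy sketch of a generic sharpness-aware minimization (SAM) step in the style of Foret et al.; the toy loss, `lr`, and `rho` are illustrative assumptions, not taken from the paper, which generalizes the sharpness measure itself.

```python
import numpy as np

# Toy differentiable objective standing in for a training loss.
def loss(w):
    return 0.5 * np.sum(w ** 2) + np.sum(np.sin(w))

def grad(w):
    return w + np.cos(w)

def sam_step(w, lr=0.1, rho=0.05):
    """One generic SAM update: ascend to an (approximate) worst point
    within a rho-ball, then descend using the gradient measured there."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case direction
    return w - lr * grad(w + eps)                # sharpness-aware step

w = np.array([1.0, -2.0])
for _ in range(200):
    w = sam_step(w)
print("w:", w, "loss:", loss(w))
```

The referenced paper replaces the implicit norm-based sharpness in the inner ascent step with a broader family of sharpness measures.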
arXiv Detail & Related papers (2024-06-06T01:52:09Z)
- Regularization properties of adversarially-trained linear regression [5.7077257711082785]
State-of-the-art machine learning models can be vulnerable to very small input perturbations.
Adversarial training is an effective approach to defending against such perturbations.
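For intuition, with an l2-bounded attack on the inputs the inner maximization of the squared loss has a closed form via a dual-norm argument; this standard identity (background, not a claim about the paper's exact results) makes the regularization effect explicit.

```latex
% Worst-case squared loss for linear regression under an l2-bounded
% input perturbation \delta; \varepsilon is the attack budget.
\[
  \max_{\|\delta\|_2 \le \varepsilon}
  \bigl( y - (x + \delta)^\top \beta \bigr)^2
  \;=\;
  \bigl( \, |y - x^\top \beta| + \varepsilon \|\beta\|_2 \, \bigr)^2 ,
\]
% i.e. adversarial training behaves like least squares with a
% data-dependent norm penalty on \beta.
```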
arXiv Detail & Related papers (2023-10-16T20:09:58Z)
- Score Attack: A Lower Bound Technique for Optimal Differentially Private Learning [8.760651633031342]
We propose a novel approach called the score attack, which provides a lower bound on the differential-privacy-constrained minimax risk of parameter estimation.
It can optimally lower bound the minimax risk of estimating unknown model parameters, up to a logarithmic factor, while ensuring differential privacy for a range of statistical problems.
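For reference, the object being bounded is the privacy-constrained minimax risk; a generic definition with illustrative notation (squared-error loss chosen only for concreteness).

```latex
% \mathcal{M}_\varepsilon: all \varepsilon-differentially-private
% estimators; \Theta: the parameter space.
\[
  \inf_{M \in \mathcal{M}_\varepsilon}\;
  \sup_{\theta \in \Theta}\;
  \mathbb{E}_\theta \bigl\| M(X_1, \dots, X_n) - \theta \bigr\|_2^2 ,
\]
% which the score attack lower-bounds, up to logarithmic factors.
```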
arXiv Detail & Related papers (2023-03-13T14:26:27Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defenses against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing [22.93223530210401]
We consider a classical hypothesis testing problem in order to develop insight into defending against such adversarial perturbations.
We propose a defense based on applying the generalized likelihood ratio test (GLRT) to the resulting composite hypothesis testing problem.
We show via simulations that the GLRT defense is competitive with the minimax approach under the worst-case attack, while yielding a better robustness-accuracy tradeoff under weaker attacks.
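Schematically, an attacker who can add a bounded perturbation e to the observation turns each simple hypothesis into a composite one, and the GLRT maximizes each likelihood over the perturbation (notation illustrative).

```latex
% x: observed (possibly attacked) sample; p_0, p_1: densities under the
% two hypotheses; \varepsilon: attack budget; \tau: decision threshold.
\[
  \Lambda(x) =
  \frac{\max_{\|e\| \le \varepsilon} p_1(x - e)}
       {\max_{\|e\| \le \varepsilon} p_0(x - e)}
  \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \tau .
\]
```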
arXiv Detail & Related papers (2021-12-04T01:11:54Z)
- Minimax Off-Policy Evaluation for Multi-Armed Bandits [58.7013651350436]
We study the problem of off-policy evaluation in the multi-armed bandit model with bounded rewards.
We develop minimax rate-optimal procedures under three settings.
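As background, the canonical importance-sampling estimator for this problem is shown below; it is the standard baseline, not necessarily the rate-optimal procedure the paper develops.

```latex
% \pi: target policy; \pi_b: behavior policy that generated the logged
% arm/reward pairs (A_i, R_i), with A_i \sim \pi_b and bounded rewards.
\[
  \hat v(\pi) = \frac{1}{n} \sum_{i=1}^{n}
  \frac{\pi(A_i)}{\pi_b(A_i)} R_i ,
  \qquad
  \mathbb{E}\, \hat v(\pi) = v(\pi)
  = \sum_{a} \pi(a)\, \mathbb{E}[R \mid A = a].
\]
```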
arXiv Detail & Related papers (2021-01-19T18:55:29Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
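A hedged sketch of a loss of this shape, with a classification term over Mahalanobis distances plus a likelihood term; the exact margin and covariance treatment in the paper may differ.

```latex
% f: deep feature of a sample with label y; \mu_c, \Sigma_c: mean and
% covariance of class c in feature space; \lambda: regularization weight.
\[
  d_c(f) = \tfrac{1}{2} (f - \mu_c)^\top \Sigma_c^{-1} (f - \mu_c),
  \qquad
  \mathcal{L}_{\mathrm{GM}}
  = -\log \frac{e^{-d_y(f)}}{\sum_c e^{-d_c(f)}}
  + \lambda\, d_y(f),
\]
% the first term classifies by distance to class Gaussians (a margin
% can be added to d_y); the second pulls features toward their class
% Gaussian, acting as the likelihood regularization.
```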
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
- Adversarially Robust Classification based on GLRT [26.44693169694826]
We present a defense strategy based on the generalized likelihood ratio test (GLRT), which jointly estimates the class of interest and the adversarial perturbation.
We show that the GLRT approach yields performance competitive with that of the minimax approach under the worst-case attack.
We also observe that the GLRT defense generalizes naturally to more complex models for which optimal minimax classifiers are not known.
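In the multiclass case the joint estimation can be written compactly (schematic, with illustrative notation):

```latex
% p_c: class-conditional density; \varepsilon: attack budget; the class
% label c and the perturbation e are estimated jointly.
\[
  \hat c(x) = \arg\max_{c}\; \max_{\|e\| \le \varepsilon} \log p_c(x - e).
\]
```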
arXiv Detail & Related papers (2020-11-16T10:16:05Z)
- Asymptotic Behavior of Adversarial Training in Binary Classification [41.7567932118769]
Adversarial training is considered to be the state-of-the-art method for defense against adversarial attacks.
Despite its success in practice, several questions about the performance of adversarial training remain open.
We derive precise theoretical predictions for the performance of adversarial training in binary classification.
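For reference, the adversarial training objective studied in this line of work is the usual min-max problem (generic notation):

```latex
% f_\theta: model; \ell: classification loss; \varepsilon: per-sample
% attack budget; the inner max is the worst-case input perturbation.
\[
  \min_{\theta}\; \frac{1}{n} \sum_{i=1}^{n}\;
  \max_{\|\delta_i\| \le \varepsilon}\;
  \ell\bigl( f_\theta(x_i + \delta_i),\, y_i \bigr).
\]
```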
arXiv Detail & Related papers (2020-10-26T01:44:20Z)
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations [98.3066727301239]
We identify two key properties of the training data that drive this behavior.
We show how the inductive bias of models towards "memorizing" fewer examples can cause overparameterization to hurt.
arXiv Detail & Related papers (2020-05-09T01:59:13Z)
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
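Schematically, ADT replaces the single worst-case perturbation of standard adversarial training with an adversarial distribution over perturbations; a plausible form is shown below, where the entropy regularizer is one way to keep the learned distribution from collapsing (the paper's exact formulation may differ).

```latex
% p: a distribution over perturbations supported on the \varepsilon-ball
% \mathcal{P}(\varepsilon); H(p): its entropy; \lambda \ge 0: a weight.
\[
  \min_{\theta}\; \mathbb{E}_{(x,y)}
  \max_{p \in \mathcal{P}(\varepsilon)}
  \Bigl\{ \mathbb{E}_{\delta \sim p}\,
  \ell\bigl( f_\theta(x + \delta),\, y \bigr) + \lambda H(p) \Bigr\}.
\]
```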
arXiv Detail & Related papers (2020-02-14T12:36:59Z)