Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis
Testing
- URL: http://arxiv.org/abs/2112.02209v1
- Date: Sat, 4 Dec 2021 01:11:54 GMT
- Title: Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis
Testing
- Authors: Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani
- Abstract summary: We consider a classical hypothesis testing problem in order to develop insight into defending against such adversarial perturbations.
We propose a defense based on applying the generalized likelihood ratio test (GLRT) to the resulting composite hypothesis testing problem.
We show via simulations that the GLRT defense is competitive with the minimax approach under the worst-case attack, while yielding a better robustness-accuracy tradeoff under weaker attacks.
- Score: 22.93223530210401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models are known to be susceptible to adversarial attacks
which can cause misclassification by introducing small but well designed
perturbations. In this paper, we consider a classical hypothesis testing
problem in order to develop fundamental insight into defending against such
adversarial perturbations. We interpret an adversarial perturbation as a
nuisance parameter, and propose a defense based on applying the generalized
likelihood ratio test (GLRT) to the resulting composite hypothesis testing
problem, jointly estimating the class of interest and the adversarial
perturbation. While the GLRT approach is applicable to general multi-class
hypothesis testing, we first evaluate it for binary hypothesis testing in white
Gaussian noise under $\ell_{\infty}$ norm-bounded adversarial perturbations,
for which a known minimax defense optimizing for the worst-case attack provides
a benchmark. We derive the worst-case attack for the GLRT defense, and show
that its asymptotic performance (as the dimension of the data increases)
approaches that of the minimax defense. For non-asymptotic regimes, we show via
simulations that the GLRT defense is competitive with the minimax approach
under the worst-case attack, while yielding a better robustness-accuracy
tradeoff under weaker attacks. We also illustrate the GLRT approach for a
multi-class hypothesis testing problem, for which a minimax strategy is not
known, evaluating its performance under both noise-agnostic and noise-aware
adversarial settings, by providing a method to find optimal noise-aware
attacks, and heuristics to find noise-agnostic attacks that are close to
optimal in the high SNR regime.
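As a concrete illustration of the composite hypothesis formulation above, the sketch below is a minimal numerical example (not taken from the paper; the symmetric $\pm\mu$ signal model, function names, and parameter values are illustrative assumptions). For Gaussian noise, the GLRT's inner maximization over the $\ell_{\infty}$-bounded nuisance perturbation has a closed form: for a candidate class mean $s$, the best-fitting perturbation clips each residual coordinate, $\hat{e}_i = \mathrm{clip}(x_i - s_i, -\epsilon, \epsilon)$, and the detector picks the class with the smaller compensated residual.

```python
import numpy as np

def glrt_decision(x, mu, eps):
    """Sketch of a GLRT defense for binary hypothesis testing.

    Model (as described in the abstract): x = s + e + n, with class mean
    s in {+mu, -mu}, Gaussian noise n ~ N(0, sigma^2 I), and an adversarial
    perturbation bounded as ||e||_inf <= eps. The GLRT treats e as a
    nuisance parameter and jointly maximizes the likelihood over the class
    and e; for Gaussian noise this amounts to minimizing ||x - s - e||^2
    over the l_inf ball, which is solved coordinate-wise by clipping.
    """
    def compensated_residual(s):
        e_hat = np.clip(x - s, -eps, eps)       # best-fitting bounded perturbation
        return np.sum((x - s - e_hat) ** 2)     # residual after compensating for it

    # Decide the class whose compensated fit is better (+1 vs -1).
    return +1 if compensated_residual(+mu) <= compensated_residual(-mu) else -1

# Hypothetical usage on synthetic data (all values illustrative):
rng = np.random.default_rng(0)
d, sigma, eps = 100, 1.0, 0.5
mu = np.ones(d)

x_clean = mu + sigma * rng.normal(size=d)                    # unattacked class +1 sample
# Simple attack that pushes every coordinate toward the other class by eps
# (illustrative only; not the paper's derived worst-case attack).
x_attacked = mu - eps * np.sign(mu) + sigma * rng.normal(size=d)
print(glrt_decision(x_clean, mu, eps), glrt_decision(x_attacked, mu, eps))
```

Because the inner maximization decouples across coordinates, the test statistic in this sketch reduces to a per-coordinate soft-thresholded residual, which keeps the defense cheap to evaluate even as the data dimension grows.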
Related papers
- Minimax rates of convergence for nonparametric regression under adversarial attacks [3.244945627960733]
We theoretically analyse the limits of robustness against adversarial attacks in a nonparametric regression setting.
Our work reveals that the minimax rate under adversarial attacks in the input is the sum of two terms.
arXiv Detail & Related papers (2024-10-12T07:11:38Z)
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- Non-Convex Robust Hypothesis Testing using Sinkhorn Uncertainty Sets [18.46110328123008]
We present a new framework to address the non-convex robust hypothesis testing problem.
The goal is to seek the optimal detector that minimizes the worst-case risk.
arXiv Detail & Related papers (2024-03-21T20:29:43Z)
- Low-Cost High-Power Membership Inference Attacks [15.240271537329534]
Membership inference attacks aim to detect if a particular data point was used in training a model.
We design a novel statistical test, RMIA, to perform robust membership inference attacks with low computational overhead.
RMIA lays the groundwork for practical yet accurate data privacy risk assessment in machine learning.
arXiv Detail & Related papers (2023-12-06T03:18:49Z)
- Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens [69.10072367807095]
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries.
We relax the assumption of a Neyman-Pearson optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT) adversary.
This mild relaxation leads to improved privacy guarantees.
arXiv Detail & Related papers (2022-10-24T08:24:10Z)
- Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests [73.32304304788838]
This paper systematically uncovers the failure mode of non-parametric TSTs through adversarial attacks.
To enable TST-agnostic attacks, we propose an ensemble attack framework that jointly minimizes the different types of test criteria.
To robustify TSTs, we propose a max-min optimization that iteratively generates adversarial pairs to train the deep kernels.
arXiv Detail & Related papers (2022-02-07T11:18:04Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z)
- Adversarially Robust Classification based on GLRT [26.44693169694826]
We show a defense strategy based on the generalized likelihood ratio test (GLRT), which jointly estimates the class of interest and the adversarial perturbation.
We show that the GLRT approach yields performance competitive with that of the minimax approach under the worst-case attack.
We also observe that the GLRT defense generalizes naturally to more complex models for which optimal minimax classifiers are not known.
arXiv Detail & Related papers (2020-11-16T10:16:05Z)
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)