Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests
- URL: http://arxiv.org/abs/2202.03077v1
- Date: Mon, 7 Feb 2022 11:18:04 GMT
- Title: Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests
- Authors: Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan
Kankanhalli
- Abstract summary: This paper systematically uncovers the failure mode of non-parametric TSTs through adversarial attacks.
To enable TST-agnostic attacks, we propose an ensemble attack framework that jointly minimizes the different types of test criteria.
To robustify TSTs, we propose a max-min optimization that iteratively generates adversarial pairs to train the deep kernels.
- Score: 73.32304304788838
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-parametric two-sample tests (TSTs), which judge whether two sets of samples
are drawn from the same distribution, have been widely used in the analysis of
critical data. People tend to employ TSTs as trusted basic tools and rarely
have any doubt about their reliability. This paper systematically uncovers the
failure mode of non-parametric TSTs through adversarial attacks and then
proposes corresponding defense strategies. First, we theoretically show that an
adversary can upper-bound the distributional shift, which guarantees the
attack's invisibility. Furthermore, we theoretically find that the adversary
can also degrade the lower bound of a TST's test power, which enables us to
iteratively minimize the test criterion in order to search for adversarial
pairs. To enable TST-agnostic attacks, we propose an ensemble attack (EA)
framework that jointly minimizes the different types of test criteria. Second,
to robustify TSTs, we propose a max-min optimization that iteratively generates
adversarial pairs to train the deep kernels. Extensive experiments on both
simulated and real-world datasets validate the adversarial vulnerabilities of
non-parametric TSTs and the effectiveness of our proposed defense.
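To make the attack idea concrete, below is a minimal sketch (not the authors' released code) of how an adversary could search for adversarial pairs against an MMD-style kernel two-sample test by gradient descent on the test criterion. A fixed Gaussian kernel stands in for the paper's trained deep kernel, and the L_inf radius, step size, and iteration count are illustrative assumptions.

```python
# Minimal sketch of the attack idea: perturb the second sample within an
# epsilon ball so that the test criterion (here, a biased MMD^2 estimate with
# a fixed Gaussian kernel) is pushed down, making the test fail to reject H0.
# The kernel, epsilon, and step sizes are illustrative assumptions.
import torch

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of a and b."""
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate between samples x and y."""
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2 * k_xy

def attack_pair(x, y, eps=0.1, steps=50, lr=0.01):
    """Search for an adversarial counterpart of y that minimizes the criterion,
    keeping the perturbation inside an L_inf ball of radius eps (invisibility)."""
    delta = torch.zeros_like(y, requires_grad=True)
    for _ in range(steps):
        loss = mmd2(x, y + delta)              # test criterion to be minimized
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()    # gradient step down the criterion
            delta.clamp_(-eps, eps)            # bounded distributional shift
        delta.grad.zero_()
    return (y + delta).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(200, 2)            # sample from P
    y = torch.randn(200, 2) + 0.5      # sample from Q, shifted away from P
    print("MMD^2 before attack:", mmd2(x, y).item())
    y_adv = attack_pair(x, y)
    print("MMD^2 after attack: ", mmd2(x, y_adv).item())
```

The defense described in the abstract wraps this kind of inner minimization in a max-min loop: adversarial pairs generated as above form the inner problem, while the outer step updates the deep-kernel parameters to restore test power.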
Related papers
- Robust Kernel Hypothesis Testing under Data Corruption [6.430258446597413]
We propose two general methods for constructing robust permutation tests under data corruption.
We prove their consistency in power under minimal conditions.
This contributes to the practical deployment of hypothesis tests for real-world applications with potential adversarial attacks (a generic permutation-test sketch is given after this list).
arXiv Detail & Related papers (2024-05-30T10:23:16Z) - The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly robust instance-reweighted adversarial training framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - Robust Feature Inference: A Test-time Defense Strategy using Spectral Projections [12.807619042576018]
We propose a novel test-time defense strategy called Robust Feature Inference (RFI).
RFI is easy to integrate with any existing (robust) training procedure without additional test-time computation.
We show that RFI improves robustness across adaptive and transfer attacks consistently.
arXiv Detail & Related papers (2023-07-21T16:18:58Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework that defends against adversarial examples crafted by minimally perturbing natural samples.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis
Testing [22.93223530210401]
We consider a classical hypothesis testing problem in order to develop insight into defending against such adversarial perturbations.
We propose a defense based on applying the generalized likelihood ratio test (GLRT) to the resulting composite hypothesis testing problem.
We show via simulations that the GLRT defense is competitive with the minimax approach under the worst-case attack, while yielding a better robustness-accuracy tradeoff under weaker attacks.
arXiv Detail & Related papers (2021-12-04T01:11:54Z) - Reliable evaluation of adversarial robustness with an ensemble of
diverse parameter-free attacks [65.20660287833537]
In this paper, we propose two extensions of the PGD attack that overcome failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)