Adversarial Attack and Defense in Deep Ranking
- URL: http://arxiv.org/abs/2106.03614v1
- Date: Mon, 7 Jun 2021 13:41:45 GMT
- Title: Adversarial Attack and Defense in Deep Ranking
- Authors: Mo Zhou, Le Wang, Zhenxing Niu, Qilin Zhang, Nanning Zheng, Gang Hua
- Abstract summary: We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
- Score: 100.17641539999055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Network classifiers are vulnerable to adversarial attack, where
an imperceptible perturbation could result in misclassification. However, the
vulnerability of DNN-based image ranking systems remains under-explored. In
this paper, we propose two attacks against deep ranking systems, i.e.,
Candidate Attack and Query Attack, that can raise or lower the rank of chosen
candidates by adversarial perturbations. Specifically, the expected ranking
order is first represented as a set of inequalities, and then a triplet-like
objective function is designed to obtain the optimal perturbation. Conversely,
an anti-collapse triplet defense is proposed to improve the ranking model
robustness against all proposed attacks, where the model learns to prevent the
positive and negative samples from being pulled close to each other by
adversarial attacks. To comprehensively measure the empirical adversarial robustness of a
ranking model with our defense, we propose an empirical robustness score, which
involves a set of representative attacks against ranking models. Our
adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST,
CUB200-2011, CARS196 and Stanford Online Products datasets. Experimental
results demonstrate that a typical deep ranking system can be effectively
compromised by our attacks. Nevertheless, our defense can significantly improve
the ranking system robustness, and simultaneously mitigate a wide range of
attacks.
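The triplet-like attack objective and the anti-collapse defense described in the abstract can be sketched in minimal form. The NumPy snippet below is an illustrative reconstruction, not the paper's implementation: it optimizes a perturbation directly in embedding space (the actual attacks back-propagate through the ranking network to the input image), and all function names, margins, and step sizes are assumptions for the example.

```python
# Illustrative sketch of a triplet-style ranking attack objective and an
# anti-collapse defense loss, using Euclidean distances in embedding space.
# All names and hyper-parameters here are assumptions for the example.
import numpy as np

def triplet_attack_loss(query, candidate, competitors, margin=0.1):
    """Hinge relaxation of the desired ranking inequalities
    d(q, c) < d(q, x_i) for every competitor x_i (a Candidate Attack
    that raises the candidate's rank for the query)."""
    d_qc = np.linalg.norm(query - candidate)
    return sum(max(0.0, d_qc - np.linalg.norm(query - x) + margin)
               for x in competitors)

def attack_embedding(query, candidate, competitors,
                     steps=100, lr=0.05, eps=0.8, margin=0.1):
    """Toy gradient descent on a perturbation of the candidate embedding,
    clipped to an L-inf ball of radius eps. A real attack would instead
    back-propagate this loss through the network to the input pixels."""
    delta = np.zeros_like(candidate)
    for _ in range(steps):
        c_adv = candidate + delta
        d_qc = np.linalg.norm(query - c_adv)
        grad = np.zeros_like(delta)
        for x in competitors:
            if d_qc - np.linalg.norm(query - x) + margin > 0:  # active hinge
                grad += (c_adv - query) / max(d_qc, 1e-12)     # d d_qc / d delta
        delta = np.clip(delta - lr * grad, -eps, eps)
    return delta

def anticollapse_triplet_loss(anchor, positive, negative,
                              margin=0.2, min_gap=0.05):
    """Defense-side triplet loss with an extra hinge that keeps the positive
    and negative embeddings at least min_gap apart, so an adversary cannot
    collapse them onto each other (a simplified reading of the defense)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    d_pn = np.linalg.norm(positive - negative)
    return max(0.0, d_ap - d_an + margin) + max(0.0, min_gap - d_pn)
```

Minimizing `triplet_attack_loss` over the perturbation pushes the chosen candidate ahead of each competitor; a Query Attack would analogously perturb the query embedding. The extra `min_gap` hinge in the defense loss penalizes positive/negative pairs that an adversary has pulled together.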
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defense mainly focuses on the known attacks, but the adversarial robustness to the unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID)
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Preference Poisoning Attacks on Reward Model Learning [47.00395978031771]
We investigate the nature and extent of a vulnerability in learning reward models from pairwise comparisons.
We propose two classes of algorithmic approaches for these attacks: a gradient-based framework, and several variants of rank-by-distance methods.
We find that the best attacks are often highly successful, achieving in the most extreme case 100% success rate with only 0.3% of the data poisoned.
arXiv Detail & Related papers (2024-02-02T21:45:24Z)
- Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models [48.93128542994217]
We propose an imitation adversarial attack on black-box neural passage ranking models.
We show that the target passage ranking model can be transparentized and imitated by enumerating critical queries/candidates.
We also propose an innovative gradient-based attack method, empowered by the pairwise objective function, to generate adversarial triggers.
arXiv Detail & Related papers (2022-09-14T09:10:07Z)
- A Tale of HodgeRank and Spectral Method: Target Attack Against Rank Aggregation Is the Fixed Point of Adversarial Game [153.74942025516853]
The intrinsic vulnerability of the rank aggregation methods is not well studied in the literature.
In this paper, we focus on the purposeful adversary who desires to designate the aggregated results by modifying the pairwise data.
The effectiveness of the suggested target attack strategies is demonstrated by a series of toy simulations and several real-world data experiments.
arXiv Detail & Related papers (2022-09-13T05:59:02Z)
- Practical Relative Order Attack in Deep Ranking [99.332629807873]
We formulate a new adversarial attack against deep ranking systems, i.e., the Order Attack.
The Order Attack covertly alters the relative order among a selected set of candidates according to an attacker-specified permutation.
It is successfully implemented on a major e-commerce platform.
arXiv Detail & Related papers (2021-03-09T06:41:18Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Adversarial Ranking Attack and Defense [36.221005892593595]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
A defense method is also proposed to improve the ranking system robustness, which can mitigate all the proposed attacks simultaneously.
Our adversarial ranking attacks and defense are evaluated on datasets including MNIST, Fashion-MNIST, and Stanford-Online-Products.
arXiv Detail & Related papers (2020-02-26T04:03:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.