Transferable, Controllable, and Inconspicuous Adversarial Attacks on
Person Re-identification With Deep Mis-Ranking
- URL: http://arxiv.org/abs/2004.04199v1
- Date: Wed, 8 Apr 2020 18:48:29 GMT
- Title: Transferable, Controllable, and Inconspicuous Adversarial Attacks on
Person Re-identification With Deep Mis-Ranking
- Authors: Hongjun Wang, Guangrun Wang, Ya Li, Dongyu Zhang, and Liang Lin
- Abstract summary: We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
- Score: 83.48804199140758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of DNNs has driven the extensive applications of person
re-identification (ReID) into a new era. However, whether ReID inherits the
vulnerability of DNNs remains unexplored. Examining the robustness of ReID
systems is important because their insecurity may cause severe losses, e.g.,
criminals may use adversarial perturbations to cheat CCTV systems. In this
work, we examine the insecurity of current
best-performing ReID models by proposing a learning-to-mis-rank formulation to
perturb the ranking of the system output. As the cross-dataset transferability
is crucial in the ReID domain, we also perform a black-box attack by developing
a novel multi-stage network architecture that pyramids the features of
different levels to extract general and transferable features for the
adversarial perturbations. Our method can control the number of malicious
pixels by using differentiable multi-shot sampling. To guarantee the
inconspicuousness of the attack, we also propose a new perception loss to
achieve better visual quality. Extensive experiments on four of the largest
ReID benchmarks (i.e., Market1501 [45], CUHK03 [18], DukeMTMC [33], and MSMT17
[40]) not only show the effectiveness of our method, but also provide
directions for future improvements in the robustness of ReID systems. For
example, the accuracy of one of the best-performing ReID systems drops sharply
from 91.8% to 1.4% after being attacked by our method. Some attack results are
shown in Fig. 1. The code is available at
https://github.com/whj363636/Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking.
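
To make the learning-to-mis-rank idea concrete, below is a minimal PyTorch-style sketch of what such an objective could look like. It is an illustration only: the function name, tensor shapes, and margin value are assumptions, and it does not reproduce the authors' released implementation (see the GitHub link above for that).

```python
# Illustrative sketch of a mis-ranking objective (assumed PyTorch-style API);
# not the authors' code -- their implementation lives in the linked repository.
import torch
import torch.nn.functional as F

def mis_ranking_loss(query_feat, same_id_feats, diff_id_feats, margin=0.5):
    """Push the perturbed query away from its true matches and toward wrong IDs.

    query_feat:    (D,)     embedding of the adversarially perturbed query
    same_id_feats: (Np, D)  gallery embeddings sharing the query's identity
    diff_id_feats: (Nn, D)  gallery embeddings of other identities
    """
    # Distance to the closest correct match and to the farthest wrong match.
    d_pos = torch.cdist(query_feat.unsqueeze(0), same_id_feats).min()
    d_neg = torch.cdist(query_feat.unsqueeze(0), diff_id_feats).max()

    # Inverted triplet ranking: penalize the normal ordering (correct matches
    # closer than wrong ones), so minimizing this loss corrupts the ranking.
    return F.relu(d_neg - d_pos + margin)
```

In an attack pipeline, a loss of this kind would be minimized with respect to the perturbation (or the generator producing it) while the ReID model's weights stay frozen.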
Related papers
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
arXiv Detail & Related papers (2024-07-08T09:18:59Z) - Corpus Poisoning via Approximate Greedy Gradient Descent [48.5847914481222]
We propose Approximate Greedy Gradient Descent, a new attack on dense retrieval systems based on the widely used HotFlip method for generating adversarial passages.
We show that our method achieves a high attack success rate on several datasets and using several retrievers, and can generalize to unseen queries and new domains.
arXiv Detail & Related papers (2024-06-07T17:02:35Z) - Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods that design a backdoor in the input/output space of diffusion models, our method embeds the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z) - Combining Two Adversarial Attacks Against Person Re-Identification
Systems [0.0]
We focus on adversarial attacks against Re-ID systems, which pose a critical threat to their performance.
We combine the use of two types of adversarial attacks, P-FGSM and Deep Mis-Ranking, applied to two popular Re-ID models.
The best result demonstrates a decrease of 3.36% in the Rank-10 metric for ReID applied to CUHK03.
arXiv Detail & Related papers (2023-09-24T22:22:29Z) - Benchmarks for Corruption Invariant Person Re-identification [31.919264399996475]
We study corruption invariant learning in single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01.
Transformer-based models are more robust to corrupted images than CNN-based models.
Cross-dataset generalization improves as corruption robustness increases.
arXiv Detail & Related papers (2021-11-01T12:14:28Z) - Multi-Expert Adversarial Attack Detection in Person Re-identification
Using Context Inconsistency [47.719533482898306]
We propose a Multi-Expert Adversarial Attack Detection (MEAAD) approach to detect malicious attacks on person re-identification (ReID) systems.
As the first adversarial attack detection approach for ReID, MEAAD effectively detects various adversarial attacks and achieves high ROC-AUC (over 97.5%).
arXiv Detail & Related papers (2021-08-23T01:59:09Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)