Transferable, Controllable, and Inconspicuous Adversarial Attacks on
Person Re-identification With Deep Mis-Ranking
- URL: http://arxiv.org/abs/2004.04199v1
- Date: Wed, 8 Apr 2020 18:48:29 GMT
- Title: Transferable, Controllable, and Inconspicuous Adversarial Attacks on
Person Re-identification With Deep Mis-Ranking
- Authors: Hongjun Wang, Guangrun Wang, Ya Li, Dongyu Zhang, and Liang Lin
- Abstract summary: We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
- Score: 83.48804199140758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of DNNs has driven the extensive applications of person
re-identification (ReID) into a new era. However, whether ReID inherits the
vulnerability of DNNs remains unexplored. To examine the robustness of ReID
systems is rather important because the insecurity of ReID systems may cause
severe losses, e.g., the criminals may use the adversarial perturbations to
cheat the CCTV systems. In this work, we examine the insecurity of current
best-performing ReID models by proposing a learning-to-mis-rank formulation to
perturb the ranking of the system output. As the cross-dataset transferability
is crucial in the ReID domain, we also perform a black-box attack by developing
a novel multi-stage network architecture that pyramids the features of
different levels to extract general and transferable features for the
adversarial perturbations. Our method can control the number of malicious
pixels by using differentiable multi-shot sampling. To guarantee the
inconspicuousness of the attack, we also propose a new perception loss to
achieve better visual quality. Extensive experiments on four of the largest
ReID benchmarks (i.e., Market1501 [45], CUHK03 [18], DukeMTMC [33], and MSMT17
[40]) not only show the effectiveness of our method, but also provide
directions for future improvement of the robustness of ReID systems. For
example, the accuracy of one of the best-performing ReID systems drops sharply
from 91.8% to 1.4% after being attacked by our method. Some attack results are
shown in Fig. 1. The code is available at
https://github.com/whj363636/Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking.
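The abstract's learning-to-mis-rank formulation attacks the ranking itself: a perturbed query should move closer to wrong identities and farther from its true matches. Below is a minimal, hypothetical sketch of such an objective as an inverted triplet-style hinge loss; the function name, margin value, and distance choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mis_ranking_loss(anchor, positives, negatives, margin=0.5):
    """Inverted triplet-style hinge (illustrative, not the paper's exact loss).

    The attack goal flips the usual metric-learning objective: the
    perturbed query (anchor) should end up FAR from same-identity
    gallery features (positives) and CLOSE to other-identity
    features (negatives), so the ranking is corrupted.
    """
    d_pos = np.linalg.norm(anchor - positives, axis=1)  # attacker wants large
    d_neg = np.linalg.norm(anchor - negatives, axis=1)  # attacker wants small
    # Penalize whenever the nearest wrong identity is not yet closer
    # than the farthest true match by at least `margin`.
    return float(np.maximum(d_neg.min() - d_pos.max() + margin, 0.0))
```

A successful attack drives this loss to zero (wrong identities rank first); minimizing it over the perturbation would corrupt the retrieval order even when every individual similarity score changes only slightly.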