Multi-Expert Adversarial Attack Detection in Person Re-identification
Using Context Inconsistency
- URL: http://arxiv.org/abs/2108.09891v1
- Date: Mon, 23 Aug 2021 01:59:09 GMT
- Title: Multi-Expert Adversarial Attack Detection in Person Re-identification
Using Context Inconsistency
- Authors: Xueping Wang, Shasha Li, Min Liu, Yaonan Wang and Amit K.
Roy-Chowdhury
- Abstract summary: We propose a Multi-Expert Adversarial Attack Detection (MEAAD) approach to detect malicious attacks on person re-identification (ReID) systems.
As the first adversarial attack detection approach for ReID, MEAAD effectively detects various adversarial attacks and achieves high ROC-AUC (over 97.5%).
- Score: 47.719533482898306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of deep neural networks (DNNs) has promoted the widespread applications of person re-identification (ReID). However, ReID systems inherit the vulnerability of DNNs to malicious attacks of visually inconspicuous adversarial perturbations. Detection of adversarial attacks is, therefore, a fundamental requirement for robust ReID systems. In this work, we propose a Multi-Expert Adversarial Attack Detection (MEAAD) approach to achieve this goal by checking context inconsistency, which is suitable for any DNN-based ReID system. Specifically, three kinds of context inconsistencies caused by adversarial attacks are employed to learn a detector for distinguishing the perturbed examples, i.e., a) the embedding distances between a perturbed query person image and its top-K retrievals are generally larger than those between a benign query image and its top-K retrievals, b) the embedding distances among the top-K retrievals of a perturbed query image are larger than those of a benign query image, and c) the top-K retrievals of a benign query image obtained with multiple expert ReID models tend to be consistent, which is not preserved when attacks are present. Extensive experiments on the Market1501 and DukeMTMC-ReID datasets show that, as the first adversarial attack detection approach for ReID, MEAAD effectively detects various adversarial attacks and achieves high ROC-AUC (over 97.5%).
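To make the three context-inconsistency cues concrete, here is a minimal, self-contained sketch of how such features could be computed from per-expert embeddings. This is not the authors' implementation: the Euclidean distance, the Jaccard overlap used for the cross-expert check, the synthetic data, and all function names are illustrative assumptions.

import numpy as np

def top_k(query_emb, gallery_embs, k=10):
    # Indices and distances of the k nearest gallery embeddings (Euclidean).
    dists = np.linalg.norm(gallery_embs - query_emb, axis=1)
    order = np.argsort(dists)[:k]
    return order, dists[order]

def context_inconsistency_features(query_embs, galleries, k=10):
    # Build one feature vector for a query from several "expert" ReID models:
    #   (a) mean query-to-top-K distance per expert,
    #   (b) mean pairwise distance among each expert's top-K retrievals,
    #   (c) Jaccard overlap of the top-K index sets across expert pairs.
    feat_a, feat_b, topk_sets = [], [], []
    for q, g in zip(query_embs, galleries):
        idx, d = top_k(q, g, k)
        feat_a.append(d.mean())                                   # cue (a)
        retrieved = g[idx]
        pairwise = np.linalg.norm(retrieved[:, None] - retrieved[None, :], axis=2)
        feat_b.append(pairwise[np.triu_indices(k, 1)].mean())     # cue (b)
        topk_sets.append(set(idx.tolist()))
    feat_c = []
    for i in range(len(topk_sets)):
        for j in range(i + 1, len(topk_sets)):
            inter = len(topk_sets[i] & topk_sets[j])
            union = len(topk_sets[i] | topk_sets[j])
            feat_c.append(inter / union)                          # cue (c)
    return np.array(feat_a + feat_b + feat_c)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_gallery, dim, n_experts = 500, 128, 3
    # Stand-in embeddings; real features would come from trained ReID backbones.
    galleries = [rng.normal(size=(n_gallery, dim)) for _ in range(n_experts)]
    # Benign query: close to the same gallery image under every expert.
    benign_query = [g[0] + 0.05 * rng.normal(size=dim) for g in galleries]
    # Crude stand-in for an attacked query: unrelated point under every expert.
    attacked_query = [rng.normal(size=dim) for _ in range(n_experts)]
    print("benign  :", context_inconsistency_features(benign_query, galleries).round(2))
    print("attacked:", context_inconsistency_features(attacked_query, galleries).round(2))

In MEAAD these kinds of features are computed from real ReID embeddings and used to train the detector; the paper reports ROC-AUC above 97.5% on Market1501 and DukeMTMC-ReID.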
Related papers
- Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis [12.133306321357999]
We propose an uncertainty-based method for detecting adversarial attacks on neural networks for semantic segmentation.
We conduct a detailed analysis of uncertainty-based detection of adversarial attacks on various state-of-the-art neural networks.
Our numerical experiments show the effectiveness of the proposed uncertainty-based detection method.
arXiv Detail & Related papers (2024-08-19T14:13:30Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method [115.29382166356478]
We introduce the adversarial retrieval attack (AREA) task.
It is meant to trick DR models into retrieving a target document that is outside the initial set of candidate documents retrieved by the DR model.
We find that the promising results that have previously been reported on attacking NRMs do not generalize to DR models.
We propose to formalize attacks on DR models as a contrastive learning problem in a multi-view representation space.
arXiv Detail & Related papers (2023-08-19T00:24:59Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image tasks such as classification and recognition, they fail to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency [25.039201331256372]
We augment the Deep Neural Network with a system that learns context consistency rules during training and checks for violations of those rules during testing.
Our approach builds a set of auto-encoders, one for each object class, appropriately trained so as to output a discrepancy between the input and output if an added adversarial perturbation violates context consistency rules.
Experiments on PASCAL VOC and MS COCO show that our method effectively detects various adversarial attacks and achieves high ROC-AUC (over 0.95 in most cases); a minimal sketch of this per-class auto-encoder idea appears after this list.
arXiv Detail & Related papers (2020-07-19T19:46:45Z)
- Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking [83.48804199140758]
We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
arXiv Detail & Related papers (2020-04-08T18:48:29Z)
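For the "Connecting the Dots" entry above, the per-class reconstruction idea can be sketched as follows. This is a numpy-only illustration, not the paper's implementation: a linear auto-encoder (truncated SVD) stands in for the learned auto-encoders, and the context features, rank, and synthetic data are assumptions.

import numpy as np

class LinearAutoEncoder:
    # Rank-r linear auto-encoder fit to benign context features of one object class.
    def __init__(self, rank=8):
        self.rank = rank

    def fit(self, X):
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.components = vt[: self.rank]          # shared encoder/decoder weights
        return self

    def discrepancy(self, x):
        # Reconstruction error: large values suggest a context-consistency violation.
        z = (x - self.mean) @ self.components.T    # encode
        recon = z @ self.components + self.mean    # decode
        return float(np.linalg.norm(x - recon))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Benign context features for one class live near a low-dimensional subspace.
    basis = rng.normal(size=(8, 64))
    benign = rng.normal(size=(200, 8)) @ basis
    model = LinearAutoEncoder(rank=8).fit(benign)
    benign_test = rng.normal(size=8) @ basis
    perturbed = benign_test + 2.0 * rng.normal(size=64)  # context-violating input
    print("benign discrepancy   :", round(model.discrepancy(benign_test), 3))
    print("perturbed discrepancy:", round(model.discrepancy(perturbed), 3))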