Multi-Expert Adversarial Attack Detection in Person Re-identification Using Context Inconsistency
- URL: http://arxiv.org/abs/2108.09891v1
- Date: Mon, 23 Aug 2021 01:59:09 GMT
- Title: Multi-Expert Adversarial Attack Detection in Person Re-identification Using Context Inconsistency
- Authors: Xueping Wang, Shasha Li, Min Liu, Yaonan Wang and Amit K. Roy-Chowdhury
- Abstract summary: We propose a Multi-Expert Adversarial Attack Detection (MEAAD) approach to detect malicious attacks on person re-identification (ReID) systems.
As the first adversarial attack detection approach for ReID, MEAAD effectively detects various adversarial attacks and achieves high ROC-AUC (over 97.5%).
- Score: 47.719533482898306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of deep neural networks (DNNs) has promoted the widespread
application of person re-identification (ReID). However, ReID systems inherit the
vulnerability of DNNs to malicious attacks based on visually inconspicuous
adversarial perturbations. Detection of adversarial attacks is, therefore, a
fundamental requirement for robust ReID systems. In this work, we propose a
Multi-Expert Adversarial Attack Detection (MEAAD) approach to achieve this goal
by checking context inconsistency, which is suitable for any DNN-based ReID
system. Specifically, three kinds of context inconsistencies caused by
adversarial attacks are employed to learn a detector that distinguishes
perturbed examples: a) the embedding distances between a perturbed query
person image and its top-K retrievals are generally larger than those between
a benign query image and its top-K retrievals; b) the embedding distances among
the top-K retrievals of a perturbed query image are larger than those of a
benign query image; c) the top-K retrievals of a benign query image obtained
with multiple expert ReID models tend to be consistent, a consistency that is
not preserved when attacks are present. Extensive experiments on the Market1501
and DukeMTMC-ReID datasets show that, as the first adversarial attack detection
approach for ReID, MEAAD effectively detects various adversarial attacks and
achieves high ROC-AUC (over 97.5%).
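The three inconsistency signals reduce to simple statistics over query and retrieval embeddings. The following is a minimal sketch of how such features could be computed for a single query (NumPy; all function and argument names are illustrative rather than the authors' implementation):

```python
import numpy as np

def context_inconsistency_features(query_emb, gallery_embs, experts, k=10):
    """Sketch of MEAAD-style features for one query.

    query_emb:    (d,) query embedding under the primary ReID model
    gallery_embs: (n, d) gallery embeddings under the same model
    experts:      list of (query_emb, gallery_embs) pairs, one per expert
                  model (assumes >= 2 experts), indexed over the same gallery
    """
    # (a) query-to-retrieval distances: larger for perturbed queries
    dist = np.linalg.norm(gallery_embs - query_emb, axis=1)
    topk = np.argsort(dist)[:k]
    f_a = dist[topk].mean()

    # (b) pairwise distances among the top-K retrievals themselves
    G = gallery_embs[topk]
    pair = np.linalg.norm(G[:, None, :] - G[None, :, :], axis=-1)
    f_b = pair[np.triu_indices(k, 1)].mean()

    # (c) cross-expert consistency: top-K overlap across expert models,
    #     which drops when the query is attacked
    topk_sets = []
    for q_e, g_e in experts:
        d_e = np.linalg.norm(g_e - q_e, axis=1)
        topk_sets.append(set(np.argsort(d_e)[:k]))
    base = topk_sets[0]
    f_c = np.mean([len(base & s) / k for s in topk_sets[1:]])

    return np.array([f_a, f_b, f_c])
```

A detector (e.g., a small MLP or SVM) trained on such features from benign and attacked queries then scores new queries for attack likelihood.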
Related papers
- Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method [115.29382166356478]
We introduce the adversarial retrieval attack (AREA) task.
It is meant to trick DR models into retrieving a target document that is outside the initial set of candidate documents retrieved by the DR model.
We find that the promising results previously reported on attacking NRMs do not generalize to DR models.
We propose to formalize attacks on DR models as a contrastive learning problem in a multi-view representation space.
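One way to picture this formulation (an assumption-laden reading, not AREA's published loss) is an InfoNCE-style objective that ranks the target document above the other candidates across several views of the query:

```python
import torch
import torch.nn.functional as F

def area_style_loss(query_views, target_doc, candidates, tau=0.05):
    """Hypothetical multi-view contrastive attack objective.

    query_views: (v, d) L2-normalized embeddings of v views of the query
    target_doc:  (d,)   embedding of the (perturbed) target document
    candidates:  (n, d) embeddings of the originally retrieved documents
    Minimizing this loss pushes the target document above the candidates.
    """
    pos = query_views @ target_doc            # (v,) per-view similarity
    neg = query_views @ candidates.T          # (v, n)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1) / tau
    labels = torch.zeros(query_views.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```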
arXiv Detail & Related papers (2023-08-19T00:24:59Z)
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNNs) define the state of the art on many perceptual tasks.
Current CNN approaches, however, remain largely vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
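For intuition, the standard maximum-likelihood LID estimator that such detectors build on scores each input by how quickly neighbor distances shrink around it; adversarial examples tend to receive higher estimates. A minimal sketch (Euclidean distances over feature vectors; not the paper's full pipeline):

```python
import numpy as np

def lid_mle(x, reference, k=20):
    """Maximum-likelihood estimate of local intrinsic dimensionality
    of point x (shape (d,)) against a reference batch (shape (n, d))."""
    dist = np.linalg.norm(reference - x, axis=1)
    r = np.sort(dist[dist > 0])[:k]     # k nearest non-identical neighbors
    return -1.0 / np.mean(np.log(r / r[-1]))
```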
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image classification and recognition, they fail to provide the critical supervision signals needed to learn discriminative representations for segmentation.
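A heavily simplified sketch of such an output-space agreement term (per-pixel cosine agreement between clean and adversarial predictions; the paper's actual contrastive loss, including negatives, is richer):

```python
import torch
import torch.nn.functional as F

def output_agreement_loss(logits_clean, logits_adv):
    """Pull per-pixel predictions on a clean image and its adversarial
    counterpart together. Both inputs have shape (B, C, H, W)."""
    p_clean = F.normalize(logits_clean.flatten(2), dim=1)  # (B, C, H*W)
    p_adv = F.normalize(logits_adv.flatten(2), dim=1)
    cos = (p_clean * p_adv).sum(dim=1)                     # (B, H*W)
    return -cos.mean()  # maximizing agreement = minimizing this loss
```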
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
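The momentum ingredient is the familiar MI-FGSM update restricted to a patch region; a sketch under that reading (the loss, step sizes, and masking are illustrative, not the paper's exact attack):

```python
import torch

def momentum_patch_attack(model, loss_fn, x, y, mask,
                          steps=40, alpha=2 / 255, mu=0.9):
    """Optimize only the pixels selected by `mask` (same shape as x),
    accumulating gradients with momentum to maximize loss_fn
    (e.g., the error of a predicted crowd-density map against y)."""
    patch = torch.rand_like(x)
    g = torch.zeros_like(x)
    for _ in range(steps):
        adv = (x * (1 - mask) + patch * mask).clamp(0, 1).requires_grad_(True)
        loss = loss_fn(model(adv), y)
        grad, = torch.autograd.grad(loss, adv)
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)  # L1-normalized
        patch = (patch + alpha * g.sign() * mask).clamp(0, 1).detach()
    return (x * (1 - mask) + patch * mask).clamp(0, 1)
```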
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature-preserving autoencoder filtering and the self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
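The self-similarity cue can be pictured as the mean pairwise cosine similarity within a (filtered) support set, which drops when some supports are adversarial; a minimal sketch (the autoencoder filtering step is omitted, and names are illustrative):

```python
import numpy as np

def support_self_similarity(support_embs):
    """Mean pairwise cosine similarity of a support set's embeddings
    (shape (n, d)); low values suggest an inconsistent, possibly
    adversarial, support set."""
    z = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sim = z @ z.T
    return sim[np.triu_indices(len(z), 1)].mean()
```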
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency [25.039201331256372]
We augment the Deep Neural Network with a system that learns context consistency rules during training and checks for the violations of the same during testing.
Our approach builds a set of auto-encoders, one for each object class, appropriately trained so as to output a discrepancy between the input and output if an added adversarial perturbation violates context consistency rules.
Experiments on PASCAL VOC and MS COCO show that our method effectively detects various adversarial attacks and achieves high ROC-AUC (over 0.95 in most cases).
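In code, the detection step might look like the following sketch (interfaces are illustrative; the paper's context features and architectures are its own):

```python
import torch

@torch.no_grad()
def context_discrepancy_scores(autoencoders, features, labels):
    """Score each detected object's context feature with the auto-encoder
    of its predicted class; large reconstruction gaps indicate a
    violation of the learned context-consistency rules."""
    scores = []
    for f, y in zip(features, labels):
        recon = autoencoders[y](f)           # class-specific auto-encoder
        scores.append(torch.norm(recon - f).item())
    return scores                            # threshold to flag attacks
```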
arXiv Detail & Related papers (2020-07-19T19:46:45Z)
- Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking [83.48804199140758]
We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
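The learning-to-mis-rank idea can be read as an inverted triplet objective: for a perturbed query, push true matches away and pull non-matches close. A sketch under that reading (margin and interfaces are assumptions):

```python
import torch
import torch.nn.functional as F

def mis_rank_loss(q, pos, neg, margin=0.5):
    """q, pos, neg: (B, d) embeddings of perturbed queries, their true
    matches, and non-matches. Minimizing drives d(q, neg) below
    d(q, pos), i.e., the ranking is inverted."""
    d_pos = F.pairwise_distance(q, pos)
    d_neg = F.pairwise_distance(q, neg)
    return F.relu(d_neg - d_pos + margin).mean()
```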
arXiv Detail & Related papers (2020-04-08T18:48:29Z)
- RAID: Randomized Adversarial-Input Detection for Neural Networks [7.37305608518763]
We propose a novel technique for adversarial-image detection, RAID, that trains a secondary classifier to identify differences in neuron activation values between benign and adversarial inputs.
RAID is more reliable and more effective than the state of the art when evaluated against six popular attacks.
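The core recipe is simple to picture: record activation values for benign and adversarial inputs and fit a secondary classifier on them. A minimal sketch (the classifier choice and feature layout are assumptions, not RAID's exact design):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_activation_detector(acts_benign, acts_adv):
    """acts_*: (n, d) arrays of flattened neuron activations collected
    from selected layers. Returns a classifier scoring inputs as
    benign (0) or adversarial (1)."""
    X = np.vstack([acts_benign, acts_adv])
    y = np.concatenate([np.zeros(len(acts_benign)), np.ones(len(acts_adv))])
    return LogisticRegression(max_iter=1000).fit(X, y)
```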
arXiv Detail & Related papers (2020-02-07T13:27:29Z)