A Person Re-identification Data Augmentation Method with Adversarial
Defense Effect
- URL: http://arxiv.org/abs/2101.08783v2
- Date: Wed, 10 Feb 2021 07:41:10 GMT
- Title: A Person Re-identification Data Augmentation Method with Adversarial
Defense Effect
- Authors: Yunpeng Gong and Zhiyong Zeng and Liwen Chen and Yifan Luo and Bin
Weng and Feng Ye
- Abstract summary: We propose a ReID multi-modal data augmentation method with adversarial defense effect.
The proposed method performs well on multiple datasets, and successfully defends against the MS-SSIM-based attack on ReID proposed at CVPR 2020.
- Score: 5.8377608127737375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The security of the Person Re-identification (ReID) model plays a
decisive role in the application of ReID. However, deep neural networks have
been shown to be vulnerable: adding undetectable adversarial perturbations to
clean images can fool deep neural networks that perform well on clean images.
We propose a ReID multi-modal data augmentation method with adversarial
defense effect: 1) Grayscale Patch Replacement, which consists of Local
Grayscale Patch Replacement (LGPR) and Global Grayscale Patch Replacement
(GGPR); this method not only improves the accuracy of the model but also helps
the model defend against adversarial examples. 2) Multi-Modal Defense, which
integrates three homogeneous modalities of images (visible, grayscale, and
sketch) and further strengthens the defense ability of the model. These
methods fuse different modalities of homogeneous images to enrich the variety
of input samples; this variety reduces the over-fitting of the ReID model to
color variations and makes it difficult for attack methods to align with the
adversarial space of the dataset, so the accuracy of the model is improved and
the attack effect is greatly reduced. The more homogeneous modal images are
fused, the stronger the defense capability. The proposed method performs well
on multiple datasets, and successfully defends against the MS-SSIM-based
attack on ReID proposed at CVPR 2020 [10], raising accuracy from 0.2% to 93.3%
(a 467-fold increase). The code is available at
https://github.com/finger-monkey/ReID_Adversarial_Defense.
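As a concrete illustration of the first method, below is a minimal sketch of Local Grayscale Patch Replacement, assuming an HxWx3 uint8 input image: with some probability, a random rectangle is replaced by its grayscale version, so one training image mixes visible and grayscale statistics. The patch-sampling strategy and all hyper-parameters here are illustrative assumptions, not the authors' exact settings.

```python
import random
import numpy as np

def local_grayscale_patch_replacement(img, prob=0.5, area_ratio=(0.02, 0.4)):
    """Replace a random rectangular patch of an RGB image (H, W, 3, uint8)
    with its grayscale version. Hyper-parameters are illustrative guesses."""
    if random.random() > prob:
        return img
    h, w = img.shape[:2]
    # Sample a patch whose area is a random fraction of the image area.
    target_area = random.uniform(*area_ratio) * h * w
    ph = min(int(round(np.sqrt(target_area))), h)
    pw = min(int(round(target_area / max(ph, 1))), w)
    y = random.randint(0, h - ph)
    x = random.randint(0, w - pw)
    patch = img[y:y + ph, x:x + pw].astype(np.float32)
    # ITU-R BT.601 luma, broadcast back to 3 channels.
    gray = patch @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    out = img.copy()
    out[y:y + ph, x:x + pw] = np.clip(gray[..., None], 0, 255).astype(np.uint8)
    return out
```

Global Grayscale Patch Replacement can be sketched the same way with the patch covering the whole image, and Multi-Modal Defense would additionally mix in a sketch modality alongside the visible and grayscale ones.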
Related papers
- Boosting Adversarial Transferability Against Defenses via Multi-Scale Transformation [0.8388591755871736]
The transferability of adversarial examples poses a significant security challenge for deep neural networks.
We propose a new Segmented Gaussian Pyramid (SGP) attack method to enhance transferability.
In contrast to state-of-the-art methods, SGP significantly enhances attack success rates against black-box defense models.
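The summary only names the method, so the sketch below is merely a guess at the generic multi-scale ingredient suggested by the name: input gradients averaged over Gaussian-pyramid-like scales of the image. The segmentation step of SGP is not reproduced, and every detail here is an assumption.

```python
import torch
import torch.nn.functional as F

def multi_scale_gradient(model, x, y, loss_fn, num_scales=3):
    """Average input gradients over progressively low-passed copies of x,
    approximating Gaussian pyramid levels by down- and up-sampling.
    Purely illustrative; not the paper's actual SGP algorithm."""
    grad = torch.zeros_like(x)
    for s in range(num_scales):
        xs = x
        if s > 0:  # low-pass: go down the pyramid, then back up
            xs = F.interpolate(xs, scale_factor=0.5 ** s, mode='bilinear',
                               align_corners=False)
            xs = F.interpolate(xs, size=x.shape[-2:], mode='bilinear',
                               align_corners=False)
        xs = xs.detach().requires_grad_(True)
        loss_fn(model(xs), y).backward()
        grad += xs.grad
    return grad / num_scales  # e.g. take grad.sign() for an FGSM-style step
```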
arXiv Detail & Related papers (2025-07-02T15:16:30Z)
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach, encouraging classification models to produce similar features for inputs within the same class, despite perturbations.
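A minimal sketch of the stated idea under assumptions: pair the usual classification loss with a term that pulls clean and adversarial embeddings of the same input together. The cosine-based alignment term and its weight are guesses, not MOREL's exact objectives.

```python
import torch
import torch.nn.functional as F

def morel_style_loss(feat_clean, feat_adv, logits_clean, logits_adv, labels,
                     align_weight=1.0):
    """Illustrative multi-objective loss: classify both views correctly while
    aligning clean and adversarial embeddings of the same inputs."""
    ce = F.cross_entropy(logits_clean, labels) + F.cross_entropy(logits_adv, labels)
    # Alignment term: maximize cosine similarity between paired embeddings.
    align = 1.0 - F.cosine_similarity(feat_clean, feat_adv, dim=1).mean()
    return ce + align_weight * align
```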
arXiv Detail & Related papers (2024-10-02T16:05:03Z)
- Anomaly Unveiled: Securing Image Classification against Adversarial Patch Attacks [3.6275442368775512]
Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems.
In this paper, we investigate the behavior of adversarial patches as anomalies within the distribution of image information.
Our proposed defense mechanism utilizes a clustering-based technique called DBSCAN to isolate anomalous image segments.
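A minimal sketch of the clustering step as described: per-segment feature vectors are clustered with DBSCAN, and density outliers (label -1) are flagged as candidate adversarial patch regions. The feature representation and the eps/min_samples values are assumptions about the paper's setup.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def isolate_anomalous_segments(features, eps=0.5, min_samples=5):
    """Cluster per-segment feature vectors (one row per image segment);
    DBSCAN labels density outliers as -1, which we treat as anomalies."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    return np.where(labels == -1)[0]  # indices of anomalous segments
```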
arXiv Detail & Related papers (2024-02-09T08:52:47Z)
- Cross-Modality Perturbation Synergy Attack for Person Re-identification [66.48494594909123]
Cross-modality person re-identification (ReID) systems are based on RGB images.
The main challenge in cross-modality ReID lies in effectively dealing with the visual differences between modalities.
Existing attack methods have primarily focused on the characteristics of the visible image modality.
This study proposes a universal perturbation attack specifically designed for cross-modality ReID.
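What an input-agnostic (universal) perturbation optimized jointly over two modalities could look like is sketched below, using a simple embedding-displacement objective. The paper's actual synergy loss is not given in the summary, so the loader names, objective, input size, and constants are all assumptions.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, rgb_loader, ir_loader, epsilon=8 / 255, lr=0.01):
    """Rough sketch: learn one input-agnostic perturbation that pushes ReID
    embeddings away from their clean values in two modalities at once."""
    delta = torch.zeros(1, 3, 256, 128, requires_grad=True)  # typical ReID size
    opt = torch.optim.Adam([delta], lr=lr)
    for (x_a, _), (x_b, _) in zip(rgb_loader, ir_loader):
        loss = 0.0
        for x in (x_a, x_b):
            with torch.no_grad():
                f_clean = model(x)  # clean embedding, treated as constant
            f_adv = model((x + delta).clamp(0, 1))
            loss = loss - F.mse_loss(f_adv, f_clean)  # move away from clean
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # stay within the L-inf budget
    return delta.detach()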
arXiv Detail & Related papers (2024-01-18T15:56:23Z)
- Class-Conditioned Transformation for Enhanced Robust Image Classification [19.738635819545554]
We propose a novel test-time algorithm that enhances Adversarially Trained (AT) models.
Our method operates through COnditional image transformation and DIstance-based Prediction (CODIP).
The proposed method achieves state-of-the-art results demonstrated through extensive experiments on various models, AT methods, datasets, and attack types.
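The summary names conditional image transformation plus distance-based prediction; a toy version of that recipe, transforming the input toward each candidate class and predicting the class reached with the least change, might look as follows. The step count, learning rate, and L2 distance are guesses.

```python
import torch
import torch.nn.functional as F

def codip_style_predict(model, x, num_classes, steps=10, lr=0.1):
    """Toy test-time procedure: nudge the input toward each candidate class
    and predict the class reached with the smallest change."""
    distances = []
    for c in range(num_classes):
        x_c = x.clone().detach().requires_grad_(True)
        opt = torch.optim.SGD([x_c], lr=lr)
        target = torch.full((x.shape[0],), c, dtype=torch.long)
        for _ in range(steps):
            loss = F.cross_entropy(model(x_c), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        distances.append((x_c.detach() - x).flatten(1).norm(dim=1))
    return torch.stack(distances, dim=1).argmin(dim=1)  # predicted labels
```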
arXiv Detail & Related papers (2023-03-27T17:28:20Z)
- Frequency Domain Model Augmentation for Adversarial Attack [91.36850162147678]
For black-box attacks, the gap between the substitute model and the victim model is usually large.
We propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models.
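A hedged sketch of a frequency-domain augmentation in this spirit: perturb the image, rescale its DCT spectrum with a random mask, and transform back, so each gradient step "sees" a slightly different surrogate model. The constants and the exact noise model are assumptions, not the paper's recipe.

```python
import numpy as np
from scipy.fft import dctn, idctn

def spectrum_augment(img, sigma=16 / 255, rho=0.5):
    """Randomly perturb an image's DCT spectrum (img: float array in [0, 1]).
    Illustrative only; the paper's spectrum simulation may differ."""
    noisy = img + np.random.normal(0.0, sigma, size=img.shape)
    spec = dctn(noisy, axes=(0, 1), norm='ortho')
    mask = np.random.uniform(1 - rho, 1 + rho, size=img.shape)
    return idctn(spec * mask, axes=(0, 1), norm='ortho')
```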
arXiv Detail & Related papers (2022-07-12T08:26:21Z)
- Diffusion Models for Adversarial Purification [69.1882221038846]
Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model.
We propose DiffPure, which uses diffusion models for adversarial purification.
Our method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods.
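Conceptually, diffusion purification diffuses the input with a small amount of Gaussian noise and then runs the reverse (denoising) process of a pretrained diffusion model; a toy sketch is below. The `denoiser` callable and the noise schedule are placeholders, not a real API.

```python
import torch

def purify(x, denoiser, t_star=0.1):
    """Toy sketch of diffusion purification. `denoiser(x_t, t)` stands in for
    the reverse process of a pretrained diffusion model (placeholder)."""
    alpha = torch.exp(torch.tensor(-t_star))  # toy variance-preserving schedule
    x_t = alpha.sqrt() * x + (1.0 - alpha).sqrt() * torch.randn_like(x)
    return denoiser(x_t, t_star)  # denoising removes the injected noise and,
                                  # with it, much of the adversarial signal
```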
arXiv Detail & Related papers (2022-05-16T06:03:00Z)
- An Effective Data Augmentation for Person Re-identification [0.0]
This paper proposes Random Grayscale Transformation, Random Grayscale Patch Replacement, and their combination.
It is discovered that structural information has a significant effect on the ReID model performance.
Our method achieves a performance improvement of up to 3.3%, achieving the highest retrieval accuracy currently on multiple datasets.
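The whole-image variant (Random Grayscale Transformation) exists as a standard torchvision transform, so a training pipeline in the spirit of this paper could look like the following; the probability is an assumption, and the patch-level variant is sketched earlier on this page.

```python
from torchvision import transforms

# Illustrative pipeline: occasionally present the whole image in 3-channel
# grayscale so the model relies less on color and more on structure.
train_transform = transforms.Compose([
    transforms.RandomGrayscale(p=0.2),  # probability is a guess
    transforms.ToTensor(),
])
```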
arXiv Detail & Related papers (2021-01-21T10:33:02Z)
- Random Transformation of Image Brightness for Adversarial Attack [5.405413975396116]
Deep neural networks are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images.
We propose an adversarial example generation method based on random image-brightness transformation, which can be integrated with the Fast Gradient Sign Method.
Our method has a higher success rate for black-box attacks than other attack methods based on data augmentation.
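A minimal sketch of the combination described above: rescale brightness at random before taking the single FGSM gradient step, so the crafted perturbation depends less on one particular brightness level. The uniform scaling model and its range are assumptions.

```python
import torch

def brightness_fgsm(model, x, y, loss_fn, epsilon=8 / 255, brightness=0.3):
    """FGSM with a random per-image brightness rescaling applied before the
    gradient is computed. Scaling range is an illustrative guess."""
    scale = 1.0 + (2 * torch.rand(x.size(0), 1, 1, 1) - 1) * brightness
    x_b = (x * scale).clamp(0, 1).detach().requires_grad_(True)
    loss_fn(model(x_b), y).backward()
    return (x + epsilon * x_b.grad.sign()).clamp(0, 1)  # adversarial batch
```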
arXiv Detail & Related papers (2021-01-12T07:00:04Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-imperceptible noises to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images, and achieves comparable robustness to the standard adversarial training against Lp attacks.
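A toy sketch of one training step with perturbations in both spaces, as the summary describes. `generator` is a placeholder for a pretrained generative model mapping latents to images; the single-step FGSM-style updates and both budgets are assumptions, not DMAT's actual procedure.

```python
import torch

def dmat_step(model, generator, z, y, loss_fn, eps_img=8 / 255, eps_lat=0.02):
    """One dual-manifold step: craft an image-space and a latent-space
    adversary, then return the training loss on both views."""
    x = generator(z).detach()
    # Off-manifold adversary: perturb in image space.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_img = (x + eps_img * x_adv.grad.sign()).clamp(0, 1)
    # On-manifold adversary: perturb in latent space.
    z_adv = z.clone().detach().requires_grad_(True)
    loss_fn(model(generator(z_adv)), y).backward()
    x_lat = generator(z + eps_lat * z_adv.grad.sign()).detach()
    # Train on both adversarial views (caller zeroes grads and steps).
    return loss_fn(model(x_img), y) + loss_fn(model(x_lat), y)
```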
arXiv Detail & Related papers (2020-09-05T06:00:28Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)