Robust Person Re-identification with Multi-Modal Joint Defence
- URL: http://arxiv.org/abs/2111.09571v1
- Date: Thu, 18 Nov 2021 08:13:49 GMT
- Title: Robust Person Re-identification with Multi-Modal Joint Defence
- Authors: Yunpeng Gong and Lifei Chen
- Abstract summary: Existing work mainly relies on adversarial training for metric defense.
We propose targeted metric attack and defense methods.
For metric defense, we propose a joint defense method consisting of two parts: proactive defense and passive defense.
- Score: 1.441703014203756
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Person Re-identification (ReID) system based on metric learning has been
shown to inherit the vulnerability of deep neural networks (DNNs), which are
easily fooled by adversarial metric attacks. Existing work mainly relies on
adversarial training for metric defense, and other defense methods have not been
fully studied. By exploring the impact of attacks on the underlying features, we
propose targeted metric attack and defense methods. For the metric attack, we use
local color deviation to construct intra-class variation of the input and thereby
attack color features. For metric defense, we propose a joint defense method that
consists of two parts: proactive defense and passive defense. Proactive defense
enhances the model's robustness to color variations and its learning of structural
relations across multiple modalities by constructing different inputs from
multi-modal images, while passive defense exploits the invariance of structural
features in a changing pixel space, using circuitous scaling to preserve structural
features while eliminating some of the adversarial noise. Extensive experiments
demonstrate that, compared with existing adversarial metric defense methods, the
proposed joint defense not only resists multiple attacks at the same time but also
does not significantly reduce the generalization capacity of the model. The code is
available at
https://github.com/finger-monkey/multi-modal_joint_defence.
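As a rough illustration of the two defense components described above, the Python sketch below builds grayscale and edge-map modalities as a minimal stand-in for the proactive multi-modal inputs, and applies circuitous scaling as a passive pre-processing step. The specific modalities, scale factor, and interpolation modes are illustrative assumptions, not taken from the released code.

```python
from PIL import Image, ImageFilter

def to_multimodal(img: Image.Image):
    """Proactive-defense sketch: derive extra modalities from an RGB image.

    Grayscale removes color cues; an edge map serves as a rough "sketch"
    modality. How these are mixed into training batches is an assumption here.
    """
    gray = img.convert("L").convert("RGB")
    sketch = img.convert("L").filter(ImageFilter.FIND_EDGES).convert("RGB")
    return [img, gray, sketch]

def circuitous_scaling(img: Image.Image, factor: float = 0.5) -> Image.Image:
    """Passive-defense sketch: shrink the image, then scale it back up.

    Resampling suppresses some high-frequency adversarial noise while the
    coarse structure of the pedestrian is preserved; factor and interpolation
    modes are illustrative choices.
    """
    w, h = img.size
    small = img.resize((max(1, int(w * factor)), max(1, int(h * factor))), Image.BILINEAR)
    return small.resize((w, h), Image.BICUBIC)
```

In practice the multi-modal inputs would be used during training, while circuitous scaling would be applied to query images at inference time before they reach the ReID model.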
Related papers
- Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods aim to address both of these issues.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z) - Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z) - Understanding the Robustness of Randomized Feature Defense Against
Query-Based Adversarial Attacks [23.010308600769545]
Deep neural networks are vulnerable to adversarial examples: samples close to the original image that nevertheless cause the model to misclassify.
We propose a simple and lightweight defense against black-box attacks by adding random noise to hidden features at intermediate layers of the model at inference time.
Our method effectively enhances the model's resilience against both score-based and decision-based black-box attacks.
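The mechanism described above lends itself to a small sketch: Gaussian noise is injected into intermediate activations at inference time via forward hooks. The backbone (ResNet-18), the chosen layers, and the noise scale sigma are assumptions for illustration, not the paper's settings.

```python
import torch
from torchvision import models

def noisy_features(module, inputs, output, sigma=0.05):
    """Forward hook: add Gaussian noise to this layer's activations (assumed sigma)."""
    return output + sigma * torch.randn_like(output)

# Assumed setup: a plain ResNet-18; the layer choice and sigma are illustrative.
model = models.resnet18(weights=None).eval()
for layer in (model.layer2, model.layer3):
    layer.register_forward_hook(noisy_features)

with torch.no_grad():
    x = torch.rand(1, 3, 224, 224)   # placeholder input image
    logits = model(x)                # hidden features are re-randomized on every query
```

Because the noise is re-sampled on every forward pass, a query-based black-box attacker sees inconsistent feedback, which is the intuition the summary points to.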
arXiv Detail & Related papers (2023-10-01T03:53:23Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Internal Wasserstein Distance for Adversarial Attack and Defense [40.27647699862274]
We propose an internal Wasserstein distance (IWD) to measure image similarity between a sample and its adversarial example.
We develop a novel attack method by capturing the distribution of patches in original samples.
We also build a new defense method that seeks to learn robust models to defend against unseen adversarial examples.
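A minimal sketch of the patch-distribution idea, not the paper's actual IWD: when both images yield the same number of patches with uniform weights, the optimal transport plan between the two empirical patch distributions is a permutation, so a linear assignment gives the exact Wasserstein-style distance. The patch size and the Euclidean ground cost are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def extract_patches(img, patch=8, stride=8):
    """Split an HxWxC image into non-overlapping flattened patches."""
    h, w = img.shape[:2]
    patches = [img[i:i + patch, j:j + patch].ravel()
               for i in range(0, h - patch + 1, stride)
               for j in range(0, w - patch + 1, stride)]
    return np.stack(patches)

def patch_wasserstein(img_a, img_b, patch=8):
    """Wasserstein-style distance between the patch distributions of two images."""
    pa, pb = extract_patches(img_a, patch), extract_patches(img_b, patch)
    # Pairwise Euclidean costs between the patches of the two images.
    cost = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # optimal matching = transport plan
    return cost[rows, cols].mean()
```

Comparing a sample with its adversarial counterpart this way measures how far the attack has moved the image's internal patch statistics.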
arXiv Detail & Related papers (2021-03-13T02:08:02Z) - Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)