Erasing, Transforming, and Noising Defense Network for Occluded Person
Re-Identification
- URL: http://arxiv.org/abs/2307.07187v3
- Date: Sun, 26 Nov 2023 10:32:54 GMT
- Title: Erasing, Transforming, and Noising Defense Network for Occluded Person
Re-Identification
- Authors: Neng Dong, Liyan Zhang, Shuanglin Yan, Hao Tang and Jinhui Tang
- Abstract summary: We propose the Erasing, Transforming, and Noising Defense Network (ETNDNet) to solve occluded person re-ID.
In the proposed ETNDNet, we firstly erase the feature map at random to create an adversarial representation with incomplete information.
Secondly, we introduce random transformations to simulate the position misalignment caused by occlusion.
Thirdly, we perturb the feature map with random values to address noisy information introduced by obstacles and non-target pedestrians.
- Score: 36.91680117072686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Occlusion perturbation presents a significant challenge in person
re-identification (re-ID), and existing methods that rely on external visual
cues require additional computational resources and only consider the issue of
missing information caused by occlusion. In this paper, we propose a simple yet
effective framework, termed Erasing, Transforming, and Noising Defense Network
(ETNDNet), which treats occlusion as a noise disturbance and solves occluded
person re-ID from the perspective of adversarial defense. In the proposed
ETNDNet, we introduce three strategies: Firstly, we randomly erase the feature
map to create an adversarial representation with incomplete information,
enabling adversarial learning of identity loss to protect the re-ID system from
the disturbance of missing information. Secondly, we introduce random
transformations to simulate the position misalignment caused by occlusion,
training the extractor and classifier adversarially to learn robust
representations immune to misaligned information. Thirdly, we perturb the
feature map with random values to address noisy information introduced by
obstacles and non-target pedestrians, and employ adversarial gaming in the
re-ID system to enhance its resistance to occlusion noise. Without bells and
whistles, ETNDNet has three key highlights: (i) it does not require any
external modules with parameters, (ii) it effectively handles various issues
caused by occlusion from obstacles and non-target pedestrians, and (iii) it
designs the first GAN-based adversarial defense paradigm for occluded person
re-ID. Extensive experiments on five public datasets fully demonstrate the
effectiveness, superiority, and practicality of the proposed ETNDNet. The code
will be released at https://github.com/nengdong96/ETNDNet.
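To make the three defense strategies concrete, the following is a minimal sketch of how the feature-map perturbations described in the abstract could be implemented in PyTorch. This is not the authors' released code: the function names (erase, transform, noise) and the hyperparameters max_ratio, max_shift, and sigma are illustrative assumptions, and the adversarial game between the feature extractor and the identity classifier is only indicated in comments.

```python
import torch


def erase(feat, max_ratio=0.5):
    """Zero out a random rectangular region of a (B, C, H, W) feature map.

    Mimics the missing information caused by occlusion.
    """
    B, C, H, W = feat.shape
    eh = max(1, int(H * float(torch.empty(1).uniform_(0.1, max_ratio))))
    ew = max(1, int(W * float(torch.empty(1).uniform_(0.1, max_ratio))))
    top = int(torch.randint(0, H - eh + 1, (1,)))
    left = int(torch.randint(0, W - ew + 1, (1,)))
    out = feat.clone()
    out[:, :, top:top + eh, left:left + ew] = 0.0
    return out


def transform(feat, max_shift=2):
    """Randomly shift the feature map spatially to mimic position misalignment."""
    dh = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    dw = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return torch.roll(feat, shifts=(dh, dw), dims=(2, 3))


def noise(feat, sigma=0.1):
    """Add random values to the feature map to mimic occlusion noise."""
    return feat + sigma * torch.randn_like(feat)


# Sketch of one training step: identity loss is computed on both the clean and
# the perturbed representations, and the extractor and classifier are updated
# adversarially; see the paper for the exact loss formulation.
# feat = extractor(images)                                        # (B, C, H, W)
# logits_clean = classifier(feat)
# logits_adv = [classifier(f) for f in (erase(feat), transform(feat), noise(feat))]
```

Because the perturbations act directly on the backbone's feature map rather than on the input image, the defense adds no external modules with learnable parameters, which is consistent with the first highlight claimed in the abstract.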
Related papers
- DDRN: a Data Distribution Reconstruction Network for Occluded Person Re-Identification [5.7703191981015305]
We propose a generative model that leverages data distribution to filter out irrelevant details.
On the Occluded-Duke dataset, we achieved a mAP of 62.4% (+1.1%) and a rank-1 accuracy of 71.3% (+0.6%).
arXiv Detail & Related papers (2024-10-09T06:52:05Z)
- A Deep Hierarchical Feature Sparse Framework for Occluded Person Re-Identification [0.0]
A speed-up person ReID framework named SUReID is proposed to mitigate occlusion interference while speeding up inference.
Experimental results show that SUReID achieves superior performance with surprisingly fast inference.
arXiv Detail & Related papers (2024-01-15T04:51:39Z)
- Learning Cross-modality Information Bottleneck Representation for Heterogeneous Person Re-Identification [61.49219876388174]
Visible-Infrared person re-identification (VI-ReID) is an important and challenging task in intelligent video surveillance.
Existing methods mainly focus on learning a shared feature space to reduce the modality discrepancy between visible and infrared modalities.
We present a novel mutual information and modality consensus network, namely CMInfoNet, to extract modality-invariant identity features.
arXiv Detail & Related papers (2023-08-29T06:55:42Z)
- Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification [24.9205771457704]
The paper proposes a new visible-infrared video person re-ID method from a novel perspective, i.e., adversarial self-attack defense and spatial-temporal relation mining.
The proposed method exhibits compelling performance on large-scale cross-modality video datasets.
arXiv Detail & Related papers (2023-07-08T05:03:10Z)
- Attention Disturbance and Dual-Path Constraint Network for Occluded Person Re-identification [36.86516784815214]
We propose a transformer-based Attention Disturbance and Dual-Path Constraint Network (ADP) to enhance the generalization of attention networks.
To imitate real-world obstacles, we introduce an Attention Disturbance Mask (ADM) module that generates offensive noise.
We also develop a Dual-Path Constraint Module (DPC) that can obtain preferable supervision information from holistic images.
arXiv Detail & Related papers (2023-03-20T09:56:35Z)
- Learning Feature Recovery Transformer for Occluded Person Re-identification [71.18476220969647]
We propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously.
To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity.
In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its $k$-nearest neighbors in the gallery to recover the complete features.
arXiv Detail & Related papers (2023-01-05T02:36:16Z)
- Occluded Person Re-Identification via Relational Adaptive Feature Correction Learning [8.015703163954639]
Occluded person re-identification (Re-ID) in images captured by multiple cameras is challenging because the target person is occluded by pedestrians or objects.
Most existing methods utilize the off-the-shelf pose or parsing networks as pseudo labels, which are prone to error.
We propose a novel Occlusion Correction Network (OCNet) that corrects features through relational-weight learning and obtains diverse and representative features without using external networks.
arXiv Detail & Related papers (2022-12-09T07:48:47Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient identities and insignificant variance.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth supervised learning has been proven as one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)