Deep Miner: A Deep and Multi-branch Network which Mines Rich and Diverse
Features for Person Re-identification
- URL: http://arxiv.org/abs/2102.09321v1
- Date: Thu, 18 Feb 2021 13:30:23 GMT
- Title: Deep Miner: A Deep and Multi-branch Network which Mines Rich and Diverse
Features for Person Re-identification
- Authors: Abdallah Benzine, Mohamed El Amine Seddik, Julien Desmarais
- Abstract summary: Deep Miner is a method that allows CNNs to "mine" richer and more diverse features about people.
It produces a model that significantly outperforms state-of-the-art (SOTA) re-identification methods.
- Score: 7.068680287596106
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most recent person re-identification approaches are based on the use of deep
convolutional neural networks (CNNs). These networks, although effective in
multiple tasks such as classification or object detection, tend to focus on the
most discriminative part of an object rather than retrieving all its relevant
features. This behavior penalizes the performance of a CNN on the
re-identification task, which requires identifying diverse and fine-grained
features. It is therefore essential that the network learn a wide variety of
fine characteristics so that re-identification remains effective and robust to
subtle appearance changes. In this article, we introduce Deep
Miner, a method that allows CNNs to "mine" richer and more diverse features
about people for their re-identification. Deep Miner is specifically composed
of three types of branches: a Global branch (G-branch), a Local branch
(L-branch) and an Input-Erased branch (IE-branch). G-branch corresponds to the
initial backbone which predicts global characteristics, while L-branch
retrieves part-level resolution features. The IE-branch, for its part, receives
partially suppressed feature maps as input, thereby allowing the network to
"mine" new features (those ignored by the G-branch) as output. For this special
purpose, a dedicated suppression procedure for identifying and removing
features within a given CNN is introduced. This suppression procedure has the
major benefit of being simple while producing a model that significantly
outperforms state-of-the-art (SOTA) re-identification methods. Specifically, we
conduct experiments on four standard person re-identification benchmarks and
observe an absolute performance gain of up to 6.5% mAP over SOTA.
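The abstract describes the branch layout and the idea of erasing already-discovered features, but not the implementation details. The sketch below is a minimal, illustrative PyTorch reading of such a three-branch model: the toy backbone, embedding sizes, stripe pooling for the L-branch, and the quantile-based suppression rule in `suppress_salient_regions` are assumptions made here for illustration, and `ToyDeepMiner` is a hypothetical name rather than the authors' code.

```python
# Illustrative sketch only; the paper's exact backbone, suppression rule and
# heads are not specified in the abstract, so the choices below are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


def suppress_salient_regions(feat, keep_ratio=0.7):
    """Zero out the most activated spatial locations of a feature map.

    `feat` is (B, C, H, W). Locations whose channel-averaged activation exceeds
    the `keep_ratio` quantile are erased, so a downstream branch is forced to
    rely on the remaining (previously ignored) regions. This is a simplified
    stand-in for the paper's suppression procedure, not a reproduction of it.
    """
    saliency = feat.mean(dim=1, keepdim=True)                  # (B, 1, H, W)
    thresh = torch.quantile(saliency.flatten(1), keep_ratio, dim=1)
    mask = (saliency <= thresh.view(-1, 1, 1, 1)).float()      # keep low-activation areas
    return feat * mask


class ToyDeepMiner(nn.Module):
    def __init__(self, num_ids=751, dim=256, num_parts=3):
        super().__init__()
        # Tiny stand-in backbone; a real model would use a standard CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.BatchNorm2d(dim), nn.ReLU(),
        )
        self.num_parts = num_parts
        self.g_head = nn.Linear(dim, num_ids)                  # G-branch classifier
        self.l_head = nn.Linear(dim * num_parts, num_ids)      # L-branch classifier
        self.ie_head = nn.Linear(dim, num_ids)                 # IE-branch classifier

    def forward(self, x):
        feat = self.backbone(x)                                # shared feature map
        # G-branch: global embedding from the full feature map.
        g = F.adaptive_avg_pool2d(feat, 1).flatten(1)
        # L-branch: horizontal stripes give part-level embeddings.
        parts = F.adaptive_avg_pool2d(feat, (self.num_parts, 1)).flatten(1)
        # IE-branch: the same features with salient regions erased, pushing
        # this branch to mine complementary cues.
        ie = F.adaptive_avg_pool2d(suppress_salient_regions(feat), 1).flatten(1)
        return self.g_head(g), self.l_head(parts), self.ie_head(ie)


if __name__ == "__main__":
    model = ToyDeepMiner()
    logits = model(torch.randn(2, 3, 256, 128))                # typical re-ID input size
    print([t.shape for t in logits])
```

In a full re-ID pipeline, each branch output would typically be trained with identity (and often triplet) losses and the branch embeddings combined at test time; the paper's exact training setup may differ from this sketch.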
Related papers
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
arXiv Detail & Related papers (2023-06-20T03:00:22Z)
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks.
Current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
- Transformer Based Multi-Grained Features for Unsupervised Person Re-Identification [9.874360118638918]
We build a dual-branch network architecture based upon a modified Vision Transformer (ViT).
Local tokens output in each branch are reshaped and then uniformly partitioned into multiple stripes to generate part-level features.
Global tokens of two branches are averaged to produce a global feature.
arXiv Detail & Related papers (2022-11-22T13:51:17Z)
- SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
- HAT: Hierarchical Aggregation Transformers for Person Re-identification [87.02828084991062]
We take advantage of both CNNs and Transformers for image-based person Re-ID with high performance.
This work is the first to exploit both CNNs and Transformers for image-based person Re-ID.
arXiv Detail & Related papers (2021-07-13T09:34:54Z)
- Multi-attentional Deepfake Detection [79.80308897734491]
Face forgery by deepfake is widespread on the internet and has raised severe societal concerns.
We propose a new multi-attentional deepfake detection network. Specifically, it consists of three key components: 1) multiple spatial attention heads to make the network attend to different local parts; 2) a textural feature enhancement block to zoom in on the subtle artifacts in shallow features; 3) an aggregation of low-level textural features and high-level semantic features guided by the attention maps.
arXiv Detail & Related papers (2021-03-03T13:56:14Z)
- Hybrid-Attention Guided Network with Multiple Resolution Features for Person Re-Identification [30.285126447140254]
We present a novel person re-ID model that fuses high- and low-level embeddings to reduce the information loss incurred when learning high-level features.
We also introduce spatial and channel attention mechanisms in our model, which aim to mine more discriminative features related to the target.
arXiv Detail & Related papers (2020-09-16T08:12:42Z)
- Attentive WaveBlock: Complementarity-enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-identification and Beyond [97.25179345878443]
This paper proposes a novel lightweight module, the Attentive WaveBlock (AWB).
AWB can be integrated into the dual networks of mutual learning to enhance their complementarity and further suppress noise in the pseudo-labels.
Experiments demonstrate that the proposed method achieves state-of-the-art performance with significant improvements on multiple UDA person re-identification tasks.
arXiv Detail & Related papers (2020-06-11T15:40:40Z)
- MagnifierNet: Towards Semantic Adversary and Fusion for Person Re-identification [38.13515165097505]
MagnifierNet is a triple-branch network which accurately mines details from whole to parts.
"Semantic Fusion Branch" filters out irrelevant noises by selectively fusing semantic region information sequentially.
"Semantic Diversity Loss" removes redundant overlaps across learned semantic representations.
arXiv Detail & Related papers (2020-02-25T15:43:46Z)
- Diversity-Achieving Slow-DropBlock Network for Person Re-Identification [11.907116133358022]
A big challenge of person re-identification (Re-ID) using a multi-branch network architecture is to learn diverse features from the ID-labeled dataset.
The 2-branch Batch DropBlock (BDB) network was recently proposed for achieving diversity between the global branch and the feature-dropping branch.
We show that the proposed method performs superior to BDB on popular person Re-ID datasets, including Market-1501, DukeMTMC-reID and CUHK03.
arXiv Detail & Related papers (2020-02-09T08:05:39Z)
- Dense Residual Network: Enhancing Global Dense Feature Flow for Character Recognition [75.4027660840568]
This paper explores how to enhance local and global dense feature flow by fully exploiting hierarchical features from all the convolutional layers.
Technically, we propose an efficient and effective CNN framework, i.e., the Fast Dense Residual Network (FDRN), for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.