Black Re-ID: A Head-shoulder Descriptor for the Challenging Problem of
Person Re-Identification
- URL: http://arxiv.org/abs/2008.08528v1
- Date: Wed, 19 Aug 2020 16:10:36 GMT
- Title: Black Re-ID: A Head-shoulder Descriptor for the Challenging Problem of
Person Re-Identification
- Authors: Boqiang Xu, Lingxiao He, Xingyu Liao, Wu Liu, Zhenan Sun, Tao Mei
- Abstract summary: Person re-identification (Re-ID) aims at retrieving an input person image from a set of images captured by multiple cameras.
It is common for people to wear black clothes or be captured by surveillance systems in low light illumination, in which cases the attributes of the clothing are severely missing.
We propose to exploit head-shoulder features to assist person Re-ID.
- Score: 98.08953310034929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person re-identification (Re-ID) aims at retrieving an input person image
from a set of images captured by multiple cameras. Although recent Re-ID
methods have achieved great success, most of them extract features based on the
attributes of clothing (e.g., color, texture). However, it is common for people
to wear black clothes or be captured by surveillance systems in low light
illumination, in which cases the attributes of the clothing are severely
missing. We call this problem the Black Re-ID problem. To solve this problem,
rather than relying on the clothing information, we propose to exploit
head-shoulder features to assist person Re-ID. The head-shoulder adaptive
attention network (HAA) is proposed to learn the head-shoulder feature and an
innovative ensemble method is designed to enhance the generalization of our
model. Given an input person image, the ensemble method focuses on the
head-shoulder feature by assigning it a larger weight if the individual inside
the image is in black clothing. Due to the lack of a suitable benchmark dataset
for studying the Black Re-ID problem, we also contribute the first Black-reID
dataset, which contains 1274 identities in its training set. Extensive evaluations
on the Black-reID, Market1501 and DukeMTMC-reID datasets show that our model
achieves the best result compared with the state-of-the-art Re-ID methods on
both Black and conventional Re-ID problems. Furthermore, our method is also
shown to be effective in dealing with person Re-ID in similar clothing. Our
code and dataset are available at https://github.com/xbq1994/.
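The abstract's core idea, fusing a global appearance feature with a head-shoulder feature whose weight grows when the clothing carries little information, can be sketched as follows. This is a minimal illustration under assumed names (`blackness` score, `base_weight`), not the paper's actual HAA implementation:

```python
# Hedged sketch of the adaptive feature-fusion idea described in the
# abstract: lean more on the head-shoulder feature when clothing
# attributes are uninformative (e.g., black clothes or low light).
# The blackness score and base_weight are illustrative assumptions.

def fuse_features(global_feat, hs_feat, blackness, base_weight=0.3):
    """Blend a global feature with a head-shoulder feature.

    blackness: score in [0, 1] estimating how uninformative the
    clothing is (1.0 = fully black / low light). The head-shoulder
    weight rises from base_weight toward 1.0 as blackness increases.
    """
    if len(global_feat) != len(hs_feat):
        raise ValueError("feature dimensions must match")
    w = base_weight + (1.0 - base_weight) * blackness  # adaptive weight
    return [(1.0 - w) * g + w * h for g, h in zip(global_feat, hs_feat)]

# With informative clothing (blackness=0) the fusion leans on the
# global feature; with black clothing it leans on head-shoulder.
print(fuse_features([1.0, 0.0], [0.0, 1.0], blackness=0.0))
print(fuse_features([1.0, 0.0], [0.0, 1.0], blackness=1.0))
```

In the paper the weight is presumably predicted by the network rather than taken from a hand-set score; the sketch only shows the weighted-ensemble structure the abstract describes.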
Related papers
- CCPA: Long-term Person Re-Identification via Contrastive Clothing and
Pose Augmentation [2.1756081703276]
Long-term Person Re-Identification aims at matching an individual across cameras after a long period of time.
We propose CCPA: Contrastive Clothing and Pose Augmentation framework for LRe-ID.
arXiv Detail & Related papers (2024-02-22T11:16:34Z)
- Combining Two Adversarial Attacks Against Person Re-Identification
Systems [0.0]
We focus on adversarial attacks on Re-ID systems, which can be a critical threat to the performance of these systems.
We combine the use of two types of adversarial attacks, P-FGSM and Deep Mis-Ranking, applied to two popular Re-ID models.
The best result shows a 3.36% decrease in the Rank-10 metric for Re-ID applied to CUHK03.
arXiv Detail & Related papers (2023-09-24T22:22:29Z)
- Body Part-Based Representation Learning for Occluded Person
Re-Identification [102.27216744301356]
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones.
Part-based methods have been shown beneficial as they offer fine-grained information and are well suited to represent partially visible human bodies.
We propose BPBreID, a body part-based ReID model for solving the above issues.
arXiv Detail & Related papers (2022-11-07T16:48:41Z)
- Clothes-Changing Person Re-identification with RGB Modality Only [102.44387094119165]
We propose a Clothes-based Adversarial Loss (CAL) to mine clothes-irrelevant features from the original RGB images.
Videos contain richer appearance and additional temporal information, which can be used to model proper spatial-temporal patterns.
arXiv Detail & Related papers (2022-04-14T11:38:28Z)
- Unsupervised Pre-training for Person Re-identification [90.98552221699508]
We present "LUPerson", a large-scale unlabeled person re-identification (Re-ID) dataset.
We make the first attempt of performing unsupervised pre-training for improving the generalization ability of the learned person Re-ID feature representation.
arXiv Detail & Related papers (2020-12-07T14:48:26Z)
- PoseTrackReID: Dataset Description [97.7241689753353]
Pose information is helpful to disentangle useful feature information from background or occlusion noise.
With PoseTrackReID, we want to bridge the gap between person re-ID and multi-person pose tracking.
This dataset provides a good benchmark for current state-of-the-art methods on multi-frame person re-ID.
arXiv Detail & Related papers (2020-11-12T07:44:25Z) - Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over a long duration, e.g., over days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z) - Learning Shape Representations for Clothing Variations in Person
Re-Identification [34.559050607889816]
Person re-identification (re-ID) aims to recognize instances of the same person contained in multiple images taken across different cameras.
We propose a novel representation learning model which is able to generate a body shape feature representation without being affected by clothing color or patterns.
Case-Net learns a representation of identity that depends only on body shape via adversarial learning and feature disentanglement.
arXiv Detail & Related papers (2020-03-16T17:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.