CGUA: Context-Guided and Unpaired-Assisted Weakly Supervised Person Search
- URL: http://arxiv.org/abs/2203.14307v1
- Date: Sun, 27 Mar 2022 13:57:30 GMT
- Title: CGUA: Context-Guided and Unpaired-Assisted Weakly Supervised Person Search
- Authors: Chengyou Jia, Minnan Luo, Caixia Yan, Xiaojun Chang, Qinghua Zheng
- Abstract summary: We introduce a Context-Guided and Unpaired-Assisted (CGUA) weakly supervised person search framework.
Specifically, we propose a novel Context-Guided Cluster (CGC) algorithm to leverage context information in the clustering process.
Our method achieves performance comparable to or better than state-of-the-art supervised methods by leveraging more diverse unlabeled data.
- Score: 54.106662998673514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, weakly supervised person search has been proposed to discard
human-annotated identities and train the model with only bounding box
annotations. A natural way to solve this problem is to separate it into
detection and unsupervised re-identification (Re-ID) steps. However, in this
way, two important clues in unconstrained scene images are ignored. On the one
hand, existing unsupervised Re-ID models only leverage cropped images from
scene images but ignore their rich context information. On the other hand, there
are numerous unpaired persons in real-world scene images. Directly dealing with
them as independent identities leads to the long-tail effect, while completely
discarding them can result in serious information loss. In light of these
challenges, we introduce a Context-Guided and Unpaired-Assisted (CGUA) weakly
supervised person search framework. Specifically, we propose a novel
Context-Guided Cluster (CGC) algorithm to leverage context information in the
clustering process and an Unpaired-Assisted Memory (UAM) unit to distinguish
unpaired and paired persons by pushing them away. Extensive experiments
demonstrate that the proposed approach can surpass the state-of-the-art weakly
supervised methods by a large margin (more than 5% mAP on CUHK-SYSU). Moreover,
our method achieves performance comparable to or better than the state-of-the-art
supervised methods by leveraging more diverse unlabeled data. Codes and models
will be released soon.
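The abstract describes its two mechanisms only at a high level: a Context-Guided Cluster (CGC) step that folds scene-level context into pseudo-label clustering, and an Unpaired-Assisted Memory (UAM) that keeps unpaired detections around as negatives rather than treating each one as its own identity. The PyTorch-style sketch below illustrates only the second idea under stated assumptions; it is not the paper's implementation, and every class name, tensor shape, temperature, and momentum value in it is hypothetical.

```python
import torch
import torch.nn.functional as F


class UnpairedAssistedMemory:
    """Hypothetical sketch of an unpaired-assisted contrastive memory.

    Paired detections (those given a cluster pseudo-label) are pulled toward
    their cluster centroid; unpaired detections are stored separately and only
    ever act as negatives, so they are pushed away from every cluster instead
    of being treated as extra identities. All shapes and values are assumptions.
    """

    def __init__(self, cluster_centroids, unpaired_feats,
                 temperature=0.05, momentum=0.2):
        # cluster_centroids: (C, D) centroids of the pseudo-label clusters
        # unpaired_feats:    (U, D) features of detections left unclustered
        self.centroids = F.normalize(cluster_centroids, dim=1)
        self.unpaired = F.normalize(unpaired_feats, dim=1)
        self.t = temperature
        self.m = momentum

    def loss(self, feats, pseudo_labels):
        # feats: (B, D) batch features of paired detections
        # pseudo_labels: (B,) cluster index assigned to each feature (long)
        q = F.normalize(feats, dim=1)
        logits = torch.cat([q @ self.centroids.t(),        # (B, C) candidate positives
                            q @ self.unpaired.t()], dim=1)  # (B, U) negatives only
        # Targets always point at a centroid, so every unpaired entry appears
        # only in the softmax denominator, i.e. it is pushed away.
        return F.cross_entropy(logits / self.t, pseudo_labels)

    @torch.no_grad()
    def update(self, feats, pseudo_labels):
        # Momentum update of the centroids with the current batch features.
        q = F.normalize(feats, dim=1)
        for f, c in zip(q, pseudo_labels.tolist()):
            self.centroids[c] = F.normalize(
                self.m * self.centroids[c] + (1.0 - self.m) * f, dim=0)


# Toy usage with random features (D = 8, three clusters, five unpaired persons):
# mem = UnpairedAssistedMemory(torch.randn(3, 8), torch.randn(5, 8))
# l = mem.loss(torch.randn(4, 8), torch.tensor([0, 2, 1, 0]))
```

In this reading, the context-guided part would act one step earlier, when the pseudo-labels are formed: scene-level cues would adjust the pairwise distances fed to the clustering algorithm, which the abstract describes only qualitatively.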
Related papers
- Keypoint Promptable Re-Identification [76.31113049256375]
Occluded Person Re-Identification (ReID) is a metric learning task that involves matching occluded individuals based on their appearance.
We introduce Keypoint Promptable ReID (KPR), a novel formulation of the ReID problem that explicitly complements the input bounding box with a set of semantic keypoints.
We release custom keypoint labels for four popular ReID benchmarks. Experiments on person retrieval as well as pose tracking demonstrate that our method systematically surpasses previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-07-25T15:20:58Z)
- Hard-sample Guided Hybrid Contrast Learning for Unsupervised Person Re-Identification [8.379286663107845]
Unsupervised person re-identification (Re-ID) is a promising and very challenging research problem in computer vision.
We propose a Hard-sample Guided Hybrid Contrast Learning (HHCL) approach combining cluster-level loss with instance-level loss for unsupervised person Re-ID (a rough sketch of such a hybrid objective appears after this list).
Experiments on two popular large-scale Re-ID benchmarks demonstrate that our HHCL outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2021-09-25T10:43:37Z)
- Unsupervised Person Re-identification via Simultaneous Clustering and Consistency Learning [22.008371113710137]
We design a pretext task for unsupervised re-ID by learning visual consistency from still images and temporal consistency during the training process.
We optimize the model by grouping the two encoded views into the same cluster, thus enhancing the visual consistency between views.
arXiv Detail & Related papers (2021-04-01T02:10:42Z)
- Camera-aware Proxies for Unsupervised Person Re-Identification [60.26031011794513]
This paper tackles the purely unsupervised person re-identification (Re-ID) problem that requires no annotations.
We propose to split each single cluster into multiple proxies and each proxy represents the instances coming from the same camera.
Based on the camera-aware proxies, we design both intra- and inter-camera contrastive learning components for our Re-ID model.
arXiv Detail & Related papers (2020-12-19T12:37:04Z)
- Do Not Disturb Me: Person Re-identification Under the Interference of Other Pedestrians [97.45805377769354]
This paper presents a novel deep network termed Pedestrian-Interference Suppression Network (PISNet).
PISNet leverages a Query-Guided Attention Block (QGAB) to enhance the feature of the target in the gallery, under the guidance of the query.
Our method is evaluated on two new pedestrian-interference datasets and the results show that the proposed method performs favorably against existing Re-ID methods.
arXiv Detail & Related papers (2020-08-16T17:45:14Z)
- Unsupervised Person Re-identification via Softened Similarity Learning [122.70472387837542]
Person re-identification (re-ID) is an important topic in computer vision.
This paper studies the unsupervised setting of re-ID, which does not require any labeled information.
Experiments on two image-based and video-based datasets demonstrate state-of-the-art performance.
arXiv Detail & Related papers (2020-04-07T17:16:41Z)
- Intra-Camera Supervised Person Re-Identification [87.88852321309433]
We propose a novel person re-identification paradigm based on an idea of independent per-camera identity annotation.
This eliminates the most time-consuming and tedious inter-camera identity labelling process.
We formulate a Multi-tAsk mulTi-labEl (MATE) deep learning method for Intra-Camera Supervised (ICS) person re-id.
arXiv Detail & Related papers (2020-02-12T15:26:33Z)
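Several entries above, notably HHCL and the camera-aware proxies, build their objectives from a cluster-level contrastive term plus an instance-level (or proxy-level) term. As a rough, hedged illustration of such a hybrid objective, and not the loss of any specific paper listed here, the sketch below combines a centroid-based InfoNCE term with a hardest-positive/hardest-negative instance term; all shapes, the temperature, and the weighting are assumptions.

```python
import torch
import torch.nn.functional as F


def hybrid_contrastive_loss(query,          # (B, D) batch features
                            labels,         # (B,)  pseudo-labels of the batch (long)
                            centroids,      # (C, D) cluster centroids
                            memory,         # (N, D) instance-level memory bank
                            memory_labels,  # (N,)  pseudo-labels of memory entries
                            t=0.05,
                            weight=0.5):
    """Illustrative hybrid objective: cluster-level InfoNCE plus a hard-sample
    instance-level term. Not the loss defined in any paper listed above."""
    q = F.normalize(query, dim=1)
    centroids = F.normalize(centroids, dim=1)
    memory = F.normalize(memory, dim=1)

    # Cluster-level term: contrast each query against all cluster centroids.
    cluster_loss = F.cross_entropy(q @ centroids.t() / t, labels)

    # Instance-level term: for each query pick its hardest (least similar)
    # positive and hardest (most similar) negative from the memory bank.
    sim = q @ memory.t()                                        # (B, N)
    same = labels.unsqueeze(1).eq(memory_labels.unsqueeze(0))   # (B, N) bool mask
    hard_pos = sim.masked_fill(~same, float('inf')).min(dim=1).values
    hard_neg = sim.masked_fill(same, float('-inf')).max(dim=1).values
    pair_logits = torch.stack([hard_pos, hard_neg], dim=1) / t  # (B, 2)
    # Index 0 (the hard positive) is the target, so the hard positive is pulled
    # above the hard negative. Assumes every batch label also occurs in memory.
    instance_loss = F.cross_entropy(
        pair_logits, torch.zeros(q.size(0), dtype=torch.long, device=q.device))

    return cluster_loss + weight * instance_loss
```

In practice the memory bank and the centroids would be refreshed after each re-clustering pass; the weighting between the two terms is the main knob such methods tune.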