Rethinking Person Re-identification from a Projection-on-Prototypes Perspective
- URL: http://arxiv.org/abs/2308.10717v1
- Date: Mon, 21 Aug 2023 13:38:10 GMT
- Title: Rethinking Person Re-identification from a Projection-on-Prototypes Perspective
- Authors: Qizao Wang, Xuelin Qian, Bin Li, Yanwei Fu, Xiangyang Xue
- Abstract summary: Person Re-IDentification (Re-ID), as a retrieval task, has achieved tremendous development over the past decade.
We propose a new baseline, ProNet, which innovatively retains the function of the classifier at the inference stage.
Experiments on four benchmarks demonstrate that our proposed ProNet is simple yet effective, and significantly outperforms previous baselines.
- Score: 84.24742313520811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person Re-IDentification (Re-ID), as a retrieval task, has achieved tremendous
development over the past decade. Existing state-of-the-art methods follow an
analogous framework to first extract features from the input images and then
categorize them with a classifier. However, since there is no identity overlap
between training and testing sets, the classifier is often discarded during
inference. Only the extracted features are used for person retrieval via
distance metrics. In this paper, we rethink the role of the classifier in
person Re-ID, and advocate a new perspective to conceive the classifier as a
projection from image features to class prototypes. These prototypes are
exactly the learned parameters of the classifier. In this light, we describe
the identity of input images as similarities to all prototypes, which are then
utilized as more discriminative features to perform person Re-ID. We thereby
propose a new baseline, ProNet, which innovatively retains the function of the
classifier at the inference stage. To facilitate the learning of class
prototypes, both triplet loss and identity classification loss are applied to
features after they are projected by the classifier. An improved version, ProNet++,
is presented by further incorporating multi-granularity designs.
Experiments on four benchmarks demonstrate that our proposed ProNet is simple
yet effective, and significantly outperforms previous baselines. ProNet++ also
achieves competitive or even better results than transformer-based competitors.
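To make the projection-on-prototypes idea concrete, below is a minimal PyTorch sketch written against the description in the abstract. It is an illustration under stated assumptions, not the authors' implementation: the module name, the use of cosine similarity, the logit scale, and the batch-hard triplet mining are choices made here. The classifier's weight rows act as class prototypes, the vector of similarities to all prototypes is kept as the descriptor at inference, and training applies an identity cross-entropy and a triplet loss to the projected features.

```python
# Minimal sketch of a projection-on-prototypes head (assumptions, not the
# authors' code): a backbone already yields a D-dimensional feature per image,
# the rows of the learnable matrix below play the role of class prototypes,
# and the vector of similarities to all prototypes is the descriptor used
# for retrieval at inference.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProjectionOnPrototypesHead(nn.Module):
    def __init__(self, feat_dim: int, num_ids: int):
        super().__init__()
        # One learnable prototype per training identity (the "classifier").
        self.prototypes = nn.Parameter(torch.randn(num_ids, feat_dim) * 0.01)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Project image features onto the prototypes: cosine similarity to
        # every prototype, shape (batch, num_ids).
        f = F.normalize(feats, dim=1)
        p = F.normalize(self.prototypes, dim=1)
        return f @ p.t()


def batch_hard_triplet(projected: torch.Tensor, labels: torch.Tensor,
                       margin: float = 0.3) -> torch.Tensor:
    # Hardest-positive / hardest-negative mining inside the batch; assumes a
    # PK-style sampler so every identity has at least one other sample.
    dist = torch.cdist(projected, projected)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    hardest_pos = dist.masked_fill(~(same & ~eye), 0.0).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()


def training_loss(head: ProjectionOnPrototypesHead, feats: torch.Tensor,
                  labels: torch.Tensor, scale: float = 16.0) -> torch.Tensor:
    # Both losses are applied to the projected (similarity) features.
    projected = head(feats)
    identity_ce = F.cross_entropy(scale * projected, labels)
    return identity_ce + batch_hard_triplet(projected, labels)


# At inference the projected similarity vectors themselves are compared with a
# distance metric for person retrieval, e.g.:
#   scores = F.normalize(head(query_feats), dim=1) @ F.normalize(head(gallery_feats), dim=1).t()
```

Because the prototypes are just the classifier parameters, this keeps the classifier useful at test time instead of discarding it, which is the shift in perspective the paper argues for.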
Related papers
- Negative Prototypes Guided Contrastive Learning for WSOD [8.102080369924911]
Weakly Supervised Object Detection (WSOD) with only image-level annotation has recently attracted wide attention.
We propose the Negative Prototypes Guided Contrastive Learning architecture.
Our proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-06-04T08:16:26Z)
- PDiscoNet: Semantically consistent part discovery for fine-grained recognition [62.12602920807109]
We propose PDiscoNet to discover object parts using only image-level class labels along with priors encouraging the parts to be discriminative and compact.
Our results on CUB, CelebA, and PartImageNet show that the proposed method provides substantially better part discovery performance than previous methods.
arXiv Detail & Related papers (2023-09-06T17:19:29Z)
- Automatically Discovering Novel Visual Categories with Self-supervised Prototype Learning [68.63910949916209]
This paper tackles the problem of novel category discovery (NCD), which aims to discriminate unknown categories in large-scale image collections.
We propose a novel adaptive prototype learning method consisting of two main stages: prototypical representation learning and prototypical self-training.
We conduct extensive experiments on four benchmark datasets and demonstrate the effectiveness and robustness of the proposed method with state-of-the-art performance.
arXiv Detail & Related papers (2022-08-01T16:34:33Z)
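The prototypical self-training stage mentioned in the entry above typically alternates between assigning pseudo-labels by nearest prototype and re-estimating the prototypes. The loop below is a generic illustration of that pattern only, not the cited paper's exact procedure.

```python
# Generic prototypical self-training loop (illustration only, not the cited
# paper's method): unlabelled features are pseudo-labelled by their nearest
# prototype, and each prototype is re-estimated as the mean of its members.
import torch
import torch.nn.functional as F


def prototypical_self_training(feats: torch.Tensor, prototypes: torch.Tensor,
                               num_rounds: int = 5):
    # feats: (N, D) unlabelled features; prototypes: (K, D) initial prototypes.
    feats = F.normalize(feats, dim=1)
    prototypes = prototypes.clone()
    pseudo_labels = torch.zeros(len(feats), dtype=torch.long)
    for _ in range(num_rounds):
        protos = F.normalize(prototypes, dim=1)
        pseudo_labels = (feats @ protos.t()).argmax(dim=1)   # nearest prototype
        for k in range(len(prototypes)):
            members = feats[pseudo_labels == k]
            if len(members) > 0:
                prototypes[k] = members.mean(dim=0)           # update prototype
    return prototypes, pseudo_labels
```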
- Weakly Supervised 3D Point Cloud Segmentation via Multi-Prototype Learning [37.76664203157892]
A fundamental challenge here lies in the large intra-class variations of local geometric structure, resulting in subclasses within a semantic class.
We leverage this intuition and opt for maintaining an individual classifier for each subclass.
Our hypothesis is also verified by the consistent discovery of semantic subclasses at no cost of additional annotations.
arXiv Detail & Related papers (2022-05-06T11:07:36Z)
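One common way to realise the "individual classifier for each subclass" idea from the entry above is to give every semantic class several prototype vectors and score a point against its best-matching subclass prototype. The sketch below shows only that pattern; the number of prototypes per class and the cosine scoring are assumptions, not the cited paper's exact model.

```python
# Multi-prototype classifier sketch: each class owns K subclass prototypes,
# and a point's class score is its best match over that class's prototypes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiPrototypeClassifier(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, protos_per_class: int = 4):
        super().__init__()
        self.prototypes = nn.Parameter(
            torch.randn(num_classes, protos_per_class, feat_dim) * 0.01)

    def forward(self, point_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (N, D) per-point features; returns (N, num_classes) logits.
        f = F.normalize(point_feats, dim=1)
        p = F.normalize(self.prototypes, dim=2)
        sims = torch.einsum("nd,ckd->nck", f, p)  # similarity to every subclass prototype
        return sims.max(dim=2).values              # class score = best subclass match
```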
- APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
Most advanced solutions exploit a metric learning framework that performs segmentation through matching each query feature to a learned class-specific prototype.
We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
arXiv Detail & Related papers (2021-11-24T04:38:37Z)
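For context on the entry above, the metric-learning framework it refers to is usually realised with masked average pooling over support features and cosine matching of query features to the resulting prototypes. The sketch below shows that generic baseline, not APANet's adaptive class-specific and class-agnostic prototypes.

```python
# Generic prototype-matching baseline for few-shot segmentation (not APANet):
# build a class prototype by masked average pooling over the support feature
# map, then score every query location by cosine similarity to the prototypes.
import torch
import torch.nn.functional as F


def masked_average_prototype(support_feats: torch.Tensor,
                             support_mask: torch.Tensor) -> torch.Tensor:
    # support_feats: (C, H, W) feature map; support_mask: (H, W) binary mask.
    mask = support_mask.unsqueeze(0).float()
    return (support_feats * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)


def match_query(query_feats: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    # query_feats: (C, H, W); prototypes: (num_classes, C) -> (num_classes, H, W) scores.
    q = F.normalize(query_feats, dim=0)
    p = F.normalize(prototypes, dim=1)
    return torch.einsum("kc,chw->khw", p, q)  # per-class cosine similarity maps
```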
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
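A simplified version of the objective stated in the entry above can be written as two terms: pull each feature towards its own class prototype and push prototypes of different classes apart. The margin and the exact form of both terms below are assumptions for illustration, not the paper's losses.

```python
# Simplified prototype-space contrastive objective (illustrative only): one
# intra-class term pulling features to their prototype, one inter-class term
# pushing different prototypes apart beyond a margin.
import torch
import torch.nn.functional as F


def prototype_contrastive_loss(feats: torch.Tensor, labels: torch.Tensor,
                               prototypes: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    f = F.normalize(feats, dim=1)
    p = F.normalize(prototypes, dim=1)
    # Intra-class: maximise similarity of each feature to its own prototype.
    intra = 1.0 - (f * p[labels]).sum(dim=1).mean()
    # Inter-class: penalise prototype pairs whose similarity exceeds the margin.
    sim = p @ p.t() - torch.eye(len(p), device=p.device)  # drop self-similarity
    inter = F.relu(sim - margin).sum() / max(len(p) * (len(p) - 1), 1)
    return intra + inter
```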
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
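To make the two-stage recipe in the entry above concrete, the second stage can be as simple as a nearest-neighbour distance computed on the frozen self-supervised embeddings. The k-NN detector below is one such illustrative choice; the specific one-class classifier used in the cited paper may differ.

```python
# Illustrative second stage only (the first stage, self-supervised feature
# learning, is assumed to have produced the frozen embeddings): score test
# samples by their distance to the one-class training embeddings using a
# simple k-nearest-neighbour detector.
import torch
import torch.nn.functional as F


def knn_anomaly_scores(train_feats: torch.Tensor, test_feats: torch.Tensor,
                       k: int = 5) -> torch.Tensor:
    tr = F.normalize(train_feats, dim=1)
    te = F.normalize(test_feats, dim=1)
    dists = torch.cdist(te, tr)  # (num_test, num_train)
    # Higher mean distance to the k nearest one-class samples -> more anomalous.
    return dists.topk(k, dim=1, largest=False).values.mean(dim=1)
```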