Beyond Triplet Loss: Meta Prototypical N-tuple Loss for Person
Re-identification
- URL: http://arxiv.org/abs/2006.04991v2
- Date: Fri, 24 Sep 2021 08:55:05 GMT
- Title: Beyond Triplet Loss: Meta Prototypical N-tuple Loss for Person
Re-identification
- Authors: Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, Zhibo Chen, Shih-Fu Chang
- Abstract summary: We introduce a multi-class classification loss, i.e., N-tuple loss, to jointly consider multiple (N) instances for per-query optimization.
With the multi-class classification incorporated, our model achieves the state-of-the-art performance on the benchmark person ReID datasets.
- Score: 118.72423376789062
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person Re-identification (ReID) aims at matching a person of interest across
images. In convolutional neural network (CNN) based approaches, loss design
plays a vital role in pulling closer features of the same identity and pushing
far apart features of different identities. In recent years, triplet loss
achieves superior performance and is predominant in ReID. However, triplet loss
considers only three instances of two classes in per-query optimization (with
an anchor sample as query) and it is actually equivalent to a two-class
classification. There is a lack of loss design which enables the joint
optimization of multiple instances (of multiple classes) within per-query
optimization for person ReID. In this paper, we introduce a multi-class
classification loss, i.e., N-tuple loss, to jointly consider multiple (N)
instances for per-query optimization. This in fact aligns better with the ReID
test/inference process, which conducts the ranking/comparisons among multiple
instances. Furthermore, for more efficient multi-class classification, we
propose a new meta prototypical N-tuple loss. With the multi-class
classification incorporated, our model achieves the state-of-the-art
performance on the benchmark person ReID datasets.
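The abstract describes the N-tuple loss as a multi-class classification over N instances per query, in contrast to the two-class comparison of triplet loss. A minimal sketch of that idea, assuming a softmax cross-entropy over query-instance similarities (the function name, dot-product similarity, and temperature parameter are illustrative assumptions, not the paper's exact formulation):

```python
import math

def n_tuple_loss(query, instances, positive_idx, temperature=1.0):
    """Softmax cross-entropy over similarities between a query embedding
    and N instance embeddings; the positive instance is the target class.
    With N == 2 this reduces to a triplet-like two-class objective."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(query, inst) / temperature for inst in instances]
    m = max(logits)  # subtract the max for numerical stability
    log_sum_exp = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum_exp - logits[positive_idx]

# Query close to the first instance, far from the other two.
q = [1.0, 0.0]
gallery = [[0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]]
loss = n_tuple_loss(q, gallery, positive_idx=0)
```

Because all N instances enter one normalized objective, each gradient step ranks the query against multiple identities at once, mirroring the ranking performed at ReID inference time.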
Related papers
- Deep Similarity Learning Loss Functions in Data Transformation for Class
Imbalance [2.693342141713236]
In this paper, we use deep neural networks to train new representations of multi-class data.
Our proposal modifies the distribution of features, i.e. the positions of examples in the learned embedded representation, and it does not modify the class sizes.
Experiments with popular multi-class imbalanced benchmark data sets and three classifiers showed the advantage of the proposed approach.
arXiv Detail & Related papers (2023-12-16T23:10:09Z) - Rethinking Person Re-identification from a Projection-on-Prototypes
Perspective [84.24742313520811]
Person Re-IDentification (Re-ID) as a retrieval task, has achieved tremendous development over the past decade.
We propose a new baseline ProNet, which innovatively reserves the function of the classifier at the inference stage.
Experiments on four benchmarks demonstrate that our proposed ProNet is simple yet effective, and significantly beats previous baselines.
arXiv Detail & Related papers (2023-08-21T13:38:10Z) - SuSana Distancia is all you need: Enforcing class separability in metric
learning via two novel distance-based loss functions for few-shot image
classification [0.9236074230806579]
We propose two loss functions which consider the importance of the embedding vectors by looking at the intra-class and inter-class distances among the few available samples.
Our results show a significant accuracy improvement of 2% on the miniImageNet benchmark compared to other metric-based few-shot learning methods.
arXiv Detail & Related papers (2023-05-15T23:12:09Z) - Adaptive Sparse Pairwise Loss for Object Re-Identification [25.515107212575636]
Pairwise losses play an important role in training a strong ReID network.
We propose a novel loss paradigm termed Sparse Pairwise (SP) loss.
We show that SP loss and its adaptive variant AdaSP loss outperform other pairwise losses.
arXiv Detail & Related papers (2023-03-31T17:59:44Z) - Benchmarking Deep AUROC Optimization: Loss Functions and Algorithmic
Choices [37.559461866831754]
We benchmark a variety of loss functions with different algorithmic choices for deep AUROC optimization problem.
We highlight the essential choices such as positive sampling rate, regularization, normalization/activation, and weights.
Our findings show that although Adam-type methods are more competitive from a training perspective, they do not outperform others from a testing perspective.
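The quantity being optimized in this entry, AUROC, equals the fraction of (positive, negative) pairs in which the positive example receives the higher score. A minimal sketch of that pairwise definition (function name and tie-handling convention are illustrative, not from the paper):

```python
def auroc(scores, labels):
    """Pairwise AUROC: fraction of (positive, negative) pairs where the
    positive example scores higher; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfectly ranked score list yields an AUROC of 1.0.
print(auroc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # -> 1.0
```

Because this pairwise count is non-differentiable, deep AUROC optimization replaces the indicator with surrogate losses, which is what the benchmarked algorithmic choices vary.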
arXiv Detail & Related papers (2022-03-27T00:47:00Z) - Unsupervised and self-adaptative techniques for cross-domain person
re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it adapts the features of a model trained on a source domain to a target domain without identity-label annotations.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
arXiv Detail & Related papers (2021-03-21T23:58:39Z) - Learning by Minimizing the Sum of Ranked Range [58.24935359348289]
We introduce the sum of ranked range (SoRR) as a general approach to form learning objectives.
A ranked range is a consecutive sequence of sorted values of a set of real numbers.
We explore two applications in machine learning of the minimization of the SoRR framework, namely the AoRR aggregate loss for binary classification and the TKML individual loss for multi-label/multi-class classification.
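Per this entry's definition, a ranked range is a consecutive run of sorted values, and SoRR sums that run; the AoRR aggregate loss is then SoRR applied to per-example losses. A minimal sketch under those definitions (function name and 1-based rank convention are assumptions for illustration):

```python
def sorr(values, k, m):
    """Sum of ranked range: sum of the k-th through m-th largest values
    (1-indexed, inclusive). k = m = 1 recovers the maximum; k = 1,
    m = len(values) recovers the ordinary sum."""
    ranked = sorted(values, reverse=True)
    return sum(ranked[k - 1:m])

per_example_losses = [5.0, 1.0, 3.0, 9.0, 2.0]
# Top-1 sum is just the maximum loss.
print(sorr(per_example_losses, 1, 1))  # -> 9.0
# Ranks 2..4 exclude the largest loss, discounting a potential outlier.
print(sorr(per_example_losses, 2, 4))  # -> 10.0
```

Choosing k > 1 makes the aggregate robust to the few largest (possibly mislabeled) losses, while choosing m below the sample count keeps the focus on hard examples, which is the trade-off the SoRR framework exposes.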
arXiv Detail & Related papers (2020-10-05T01:58:32Z) - A Unified Framework of Surrogate Loss by Refactoring and Interpolation [65.60014616444623]
We introduce UniLoss, a unified framework to generate surrogate losses for training deep networks with gradient descent.
We validate the effectiveness of UniLoss on three tasks and four datasets.
arXiv Detail & Related papers (2020-07-27T21:16:51Z) - Equalization Loss for Long-Tailed Object Recognition [109.91045951333835]
State-of-the-art object detection methods still perform poorly on large vocabulary and long-tailed datasets.
We propose a simple but effective loss, named equalization loss, to tackle the problem of long-tailed rare categories.
Our method achieves AP gains of 4.1% and 4.8% for the rare and common categories on the challenging LVIS benchmark.
arXiv Detail & Related papers (2020-03-11T09:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.