Center Prediction Loss for Re-identification
- URL: http://arxiv.org/abs/2104.14746v1
- Date: Fri, 30 Apr 2021 03:57:31 GMT
- Title: Center Prediction Loss for Re-identification
- Authors: Lu Yang, Yunlong Wang, Lingqiao Liu, Peng Wang, Lu Chi, Zehuan Yuan,
Changhu Wang and Yanning Zhang
- Abstract summary: We propose a new loss based on center predictivity, that is, a sample must be positioned in a location of the feature space such that from it we can roughly predict the location of the center of same-class samples.
We show that this new loss leads to a more flexible intra-class distribution constraint while ensuring the between-class samples are well-separated.
- Score: 65.58923413172886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The training loss function that enforces certain training sample distribution
patterns plays a critical role in building a re-identification (ReID) system.
Besides the basic requirement of discrimination, i.e., that features
corresponding to different identities should not be mixed, additional
intra-class distribution constraints, such as requiring features from the same
identity to be close to their class center, have been adopted to construct
losses. Despite the advances of various new loss functions, it is still
challenging to strike a balance between reducing intra-class variation and
allowing a certain degree of distribution freedom. In this paper, we propose a new loss
based on center predictivity, that is, a sample must be positioned in a
location of the feature space such that from it we can roughly predict the
location of the center of same-class samples. The prediction error is then
regarded as a loss called Center Prediction Loss (CPL). We show that, without
introducing additional hyper-parameters, this new loss leads to a more flexible
intra-class distribution constraint while ensuring the between-class samples
are well-separated. Extensive experiments on various real-world ReID datasets
show that the proposed loss can achieve superior performance and can also be
complementary to existing losses.
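As a rough, assumption-level sketch of the idea (the abstract does not specify the exact formulation), a center-prediction style loss can be written as follows: a small learnable predictor maps each sample's feature to a predicted class center, and the prediction error against a batch-wise class center is the loss. The linear predictor and the batch-wise centers are illustrative choices, not necessarily the paper's.

```python
# Illustrative sketch of a center-prediction style loss (assumptions: a linear
# predictor and batch-wise class means; the paper's exact formulation may differ).
import torch
import torch.nn as nn


class CenterPredictionLoss(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # Hypothetical predictor mapping a sample feature to a predicted class center.
        self.predictor = nn.Linear(feat_dim, feat_dim)

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (N, D) embeddings from the ReID backbone; labels: (N,) identity ids.
        predicted_centers = self.predictor(features)             # (N, D)
        loss = features.new_zeros(())
        classes = labels.unique()
        for c in classes:
            mask = labels == c
            # Batch-wise class center, detached so it acts as a fixed prediction target.
            center = features[mask].mean(dim=0).detach()          # (D,)
            # Prediction error between predicted centers and the actual class center.
            loss = loss + ((predicted_centers[mask] - center) ** 2).sum(dim=1).mean()
        return loss / classes.numel()


# Illustrative usage: 32 features of dimension 256 from 8 identities.
feats = torch.randn(32, 256)
ids = torch.randint(0, 8, (32,))
print(CenterPredictionLoss(feat_dim=256)(feats, ids))
```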
Related papers
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Center Contrastive Loss for Metric Learning [8.433000039153407]
We propose a novel metric learning function called Center Contrastive Loss.
It maintains a class-wise center bank and compares the category centers with the query data points using a contrastive loss.
The proposed loss combines the advantages of both contrastive and classification methods.
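A minimal sketch of what such a center-bank contrastive objective could look like is given below; the EMA update of the bank and the temperature value are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch: class-wise center bank updated by an exponential moving average,
# with each query feature contrasted against all class centers (assumed details).
import torch
import torch.nn.functional as F


class CenterBankContrastiveLoss(torch.nn.Module):
    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.9, temp: float = 0.1):
        super().__init__()
        self.register_buffer("centers", torch.zeros(num_classes, feat_dim))
        self.momentum = momentum
        self.temp = temp

    @torch.no_grad()
    def _update_bank(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # Move each observed class center toward the current batch mean.
        for c in labels.unique():
            batch_center = feats[labels == c].mean(dim=0)
            self.centers[c] = self.momentum * self.centers[c] + (1 - self.momentum) * batch_center

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        self._update_bank(feats.detach(), labels)
        # Contrast every query feature against all category centers (cosine similarity).
        logits = F.normalize(feats, dim=1) @ F.normalize(self.centers, dim=1).t() / self.temp
        return F.cross_entropy(logits, labels)
```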
arXiv Detail & Related papers (2023-08-01T11:22:51Z)
- A data variation robust learning model based on importance sampling [11.285259001286978]
We propose an importance sampling based data variation robust loss (ISloss) for learning problems, which minimizes the worst-case loss under a constraint on distribution deviation.
We show that the proposed method is robust under large distribution deviations.
arXiv Detail & Related papers (2023-02-09T04:50:06Z)
- Proposal Distribution Calibration for Few-Shot Object Detection [65.19808035019031]
In few-shot object detection (FSOD), the two-step training paradigm is widely adopted to mitigate the severe sample imbalance.
Unfortunately, the extreme data scarcity aggravates the proposal distribution bias, hindering the RoI head from evolving toward novel classes.
We introduce a simple yet effective proposal distribution calibration (PDC) approach to neatly enhance the localization and classification abilities of the RoI head.
arXiv Detail & Related papers (2022-12-15T05:09:11Z)
- Learning Compact Features via In-Training Representation Alignment [19.273120635948363]
In each iteration, the true gradient of the loss function is estimated using a mini-batch sampled from the training set.
We propose In-Training Representation Alignment (ITRA) that explicitly aligns feature distributions of two different mini-batches with a matching loss.
We also provide a rigorous analysis of the desirable effects of the matching loss on feature representation learning.
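As one concrete (and assumed) instantiation of such a matching loss, an RBF-kernel maximum mean discrepancy between the features of two mini-batches could be used; the paper's actual matching loss may differ.

```python
# Hedged sketch: RBF-kernel MMD as a possible matching loss between the feature
# distributions of two mini-batches (an assumption, not necessarily the ITRA loss).
import torch


def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # x: (N, D) and y: (M, D) features from two different mini-batches.
    def kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        sq_dists = torch.cdist(a, b) ** 2
        return torch.exp(-sq_dists / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


# Illustrative usage: the matching loss would be added to the task loss during training.
feats_a, feats_b = torch.randn(32, 128), torch.randn(32, 128)
print(rbf_mmd(feats_a, feats_b))
```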
arXiv Detail & Related papers (2022-11-23T22:23:22Z)
- Towards In-distribution Compatibility in Out-of-distribution Detection [30.49191281345763]
We propose a new out-of-distribution detection method by adapting both the top-design of deep models and the loss function.
Our method not only achieves state-of-the-art out-of-distribution detection performance but also improves in-distribution accuracy.
arXiv Detail & Related papers (2022-08-29T09:06:15Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- An Equivalence between Loss Functions and Non-Uniform Sampling in Experience Replay [72.23433407017558]
We show that any loss function evaluated with non-uniformly sampled data can be transformed into another uniformly sampled loss function.
Surprisingly, we find that in some environments prioritized experience replay (PER) can be replaced entirely by this new loss function without any impact on empirical performance.
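The stated equivalence rests on the standard importance-weighting identity: an expectation under a non-uniform sampling distribution equals a uniform expectation of a suitably reweighted loss. The small numerical check below illustrates that identity only; it is not the paper's derivation.

```python
# Numerical check of the importance-weighting identity behind the equivalence:
# sum_i p_i * loss_i  ==  mean_i (n * p_i * loss_i)  for a uniform average over i.
import numpy as np

rng = np.random.default_rng(0)
losses = rng.random(1000)                    # per-sample loss values
p = rng.random(1000); p /= p.sum()           # non-uniform sampling probabilities

nonuniform_expectation = (p * losses).sum()                     # samples drawn i ~ p
uniform_reweighted = np.mean(len(losses) * p * losses)          # samples drawn uniformly
print(np.isclose(nonuniform_expectation, uniform_reweighted))   # True
```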
arXiv Detail & Related papers (2020-07-12T17:45:24Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore the balance in imbalanced images by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalesced capsule-GAN is effective at recognizing highly overlapping classes with far fewer parameters than the convolutional GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that yields better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvement on various vision tasks.
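For reference, a generic triplet margin loss of the kind mentioned above is sketched below; how anchors, positives, and negatives are formed from the relation discriminator's outputs is not described in the summary, so this is only the standard objective.

```python
# Standard triplet margin loss (generic sketch; the paper's discriminator-specific
# construction of anchor/positive/negative triples is not reproduced here).
import torch
import torch.nn.functional as F


def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, margin: float = 0.3) -> torch.Tensor:
    # Pull the anchor toward the positive and push it away from the negative
    # until their distance gap exceeds `margin`.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```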
arXiv Detail & Related papers (2020-02-24T11:35:28Z)