Joint Discriminative and Metric Embedding Learning for Person
Re-Identification
- URL: http://arxiv.org/abs/2212.14107v1
- Date: Wed, 28 Dec 2022 22:08:42 GMT
- Title: Joint Discriminative and Metric Embedding Learning for Person
Re-Identification
- Authors: Sinan Sabri, Zaigham Randhawa, Gianfranco Doretto
- Abstract summary: Person re-identification is a challenging task because of the high intra-class variance induced by unrestricted nuisance factors of variation.
Recent approaches postulate that powerful architectures have the capacity to learn feature representations invariant to nuisance factors.
- Score: 8.137833258504381
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person re-identification is a challenging task because of the high
intra-class variance induced by unrestricted nuisance factors of variation,
such as pose, illumination, viewpoint, background, and sensor noise. Recent
approaches postulate that powerful architectures have the capacity to learn
feature representations invariant to nuisance factors, by training them with
losses that minimize intra-class variance and maximize inter-class separation,
without modeling nuisance factors explicitly. The dominant approaches use
either a discriminative loss with margin, like the softmax loss with the
additive angular margin, or a metric learning loss, like the triplet loss with
batch hard mining of triplets. Since the softmax imposes feature normalization,
it limits the gradient flow supervising the feature embedding. We address this
by joining the losses and leveraging the triplet loss as a proxy for the
missing gradients. We further improve invariance to nuisance factors by adding
the discriminative task of predicting attributes. Our extensive evaluation
highlights that when only a holistic representation is learned, we consistently
outperform the state-of-the-art on the three most challenging datasets. Such
representations are easier to deploy in practical systems. Finally, we found
that joining the losses removes the requirement for having a margin in the
softmax loss while increasing performance.
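To make the joint objective concrete, here is a minimal PyTorch sketch that combines a softmax loss with an additive angular margin (ArcFace-style) and a triplet loss with batch-hard mining. The margin, scale, and weighting values are illustrative assumptions, not the authors' settings.
```python
import torch
import torch.nn.functional as F

def angular_margin_softmax(embeddings, labels, weight, margin=0.5, scale=64.0):
    """Softmax with an additive angular margin on L2-normalized features.
    `weight` is a (num_classes, dim) parameter of class prototypes."""
    cos = F.linear(F.normalize(embeddings), F.normalize(weight))
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    one_hot = F.one_hot(labels, weight.size(0)).bool()
    # Add the margin only to the angle of the target class.
    logits = scale * torch.where(one_hot, torch.cos(theta + margin), cos)
    return F.cross_entropy(logits, labels)

def batch_hard_triplet(embeddings, labels, margin=0.3):
    """Triplet loss with batch-hard mining: hardest positive and hardest
    negative per anchor. Assumes each batch holds >= 2 identities."""
    dist = torch.cdist(embeddings, embeddings)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def joint_loss(embeddings, labels, weight, lam=1.0):
    # The triplet term operates on the unnormalized embedding and supplies
    # the gradients that the normalized softmax branch does not.
    return (angular_margin_softmax(embeddings, labels, weight)
            + lam * batch_hard_triplet(embeddings, labels))
```
Consistent with the abstract's final observation, the angular margin can be set to zero once the triplet term is present; it is the joining of the losses that matters.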
Related papers
- LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
An adversary can introduce outliers by corrupting the loss functions in an arbitrary number of rounds k, unknown to the learner.
We present a robust online optimization framework that remains oblivious to such outliers.
arXiv Detail & Related papers (2024-08-12T17:08:31Z)
- Large Margin Discriminative Loss for Classification [3.3975558777609915]
We introduce a novel discriminative loss function with large margin in the context of Deep Learning.
This loss boosts the discriminative power of neural nets, represented by intra-class compactness and inter-class separability.
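The summary does not give the paper's exact formulation, but a common way to encode intra-class compactness and inter-class separability is a center-based loss; the sketch below, with an assumed margin and learnable class centers, is one such illustration, not the paper's loss.
```python
import torch
import torch.nn.functional as F

def compactness_separability_loss(embeddings, labels, centers, margin=10.0):
    """Pull samples toward their class center (compactness) and push
    distinct class centers at least `margin` apart (separability).
    `centers` is a learnable (num_classes, dim) parameter."""
    compact = (embeddings - centers[labels]).pow(2).sum(dim=1).mean()
    center_dist = torch.cdist(centers, centers)
    off_diag = ~torch.eye(centers.size(0), dtype=torch.bool,
                          device=centers.device)
    separate = F.relu(margin - center_dist[off_diag]).mean()
    return compact + separate
```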
arXiv Detail & Related papers (2024-05-28T18:10:45Z)
- Noise-Robust Loss Functions: Enhancing Bounded Losses for Large-Scale Noisy Data Learning [0.0]
Large annotated datasets inevitably contain noisy labels, which poses a major challenge for training deep neural networks as they easily memorize the labels.
Noise-robust loss functions have emerged as a notable strategy to counteract this issue, but it remains challenging to create a robust loss function which is not susceptible to underfitting.
We propose a novel method denoted as logit bias, which adds a real number $\epsilon$ to the logit at the position of the correct class; a minimal sketch follows.
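In this sketch the value of $\epsilon$ is a placeholder, and the pairing with plain cross-entropy is for illustration only (the paper targets bounded, noise-robust base losses).
```python
import torch
import torch.nn.functional as F

def logit_bias_loss(logits, labels, eps=2.0, base_loss=F.cross_entropy):
    """Add a real number eps to the logit at the position of the correct
    class, then apply the base loss. eps=2.0 is an arbitrary placeholder."""
    biased = logits.clone()
    biased[torch.arange(logits.size(0)), labels] += eps
    return base_loss(biased, labels)
```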
arXiv Detail & Related papers (2023-06-08T18:38:55Z)
- Learning Towards the Largest Margins [83.7763875464011]
A loss function should promote the largest possible margins for both classes and samples.
Not only does this principled framework offer new perspectives to understand and interpret existing margin-based losses, but it can guide the design of new tools.
arXiv Detail & Related papers (2022-06-23T10:03:03Z)
- Do Lessons from Metric Learning Generalize to Image-Caption Retrieval? [67.45267657995748]
The triplet loss with semi-hard negatives has become the de facto choice for image-caption retrieval (ICR) methods that are optimized from scratch.
Recent progress in metric learning has given rise to new loss functions that outperform the triplet loss on tasks such as image retrieval and representation learning.
We ask whether these findings generalize to the setting of ICR by comparing three loss functions on two ICR methods.
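For reference, here is a loop-based sketch of the standard semi-hard mining rule, which selects for each anchor-positive pair a negative with d(a,p) < d(a,n) < d(a,p) + margin. This follows the common definition rather than any particular ICR implementation.
```python
import torch
import torch.nn.functional as F

def semi_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Triplet loss using, per anchor-positive pair, the closest negative
    that is still farther from the anchor than the positive."""
    dist = torch.cdist(embeddings, embeddings)
    losses = []
    for a in range(embeddings.size(0)):
        pos = (labels == labels[a]).nonzero(as_tuple=True)[0]
        neg = (labels != labels[a]).nonzero(as_tuple=True)[0]
        for p in pos:
            if p == a:
                continue
            d_ap = dist[a, p]
            # Semi-hard: beyond the positive but inside the margin band.
            cand = neg[(dist[a, neg] > d_ap) & (dist[a, neg] < d_ap + margin)]
            if len(cand) == 0:
                continue
            losses.append(F.relu(d_ap - dist[a, cand].min() + margin))
    return torch.stack(losses).mean() if losses else embeddings.new_zeros(())
```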
arXiv Detail & Related papers (2022-02-14T15:18:00Z)
- Do We Need to Penalize Variance of Losses for Learning with Label Noise? [91.38888889609002]
We find that the variance should be increased for the problem of learning with noisy labels.
By exploiting the label noise transition matrix, regularizers can be easily designed to reduce the variance of losses.
Empirically, increasing the variance of losses with the proposed method significantly improves the generalization ability of baselines on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-01-30T06:19:08Z)
- Frequency-aware Discriminative Feature Learning Supervised by Single-Center Loss for Face Forgery Detection [89.43987367139724]
Face forgery detection is attracting ever-increasing interest in computer vision.
Recent works have achieved sound results, but notable problems remain.
A novel frequency-aware discriminative feature learning framework is proposed in this paper.
arXiv Detail & Related papers (2021-03-16T14:17:17Z)
- Adaptive Weighted Discriminator for Training Generative Adversarial Networks [11.68198403603969]
We introduce a new family of discriminator loss functions that adopts a weighted sum of real and fake parts.
Our method can potentially be applied to any discriminator model with a loss that is a sum of real and fake parts; a minimal sketch of such a weighted sum follows.
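In this sketch the fixed weights are placeholders; the adaptive rule for choosing them is the paper's contribution and is not reproduced from this summary.
```python
import torch
import torch.nn.functional as F

def weighted_discriminator_loss(d_real, d_fake, w_real=0.5, w_fake=0.5):
    """Weighted sum of the real and fake parts of a standard GAN
    discriminator loss. w_real/w_fake are illustrative constants."""
    loss_real = F.binary_cross_entropy_with_logits(
        d_real, torch.ones_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake, torch.zeros_like(d_fake))
    return w_real * loss_real + w_fake * loss_fake
```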
arXiv Detail & Related papers (2020-12-05T23:55:42Z)
- Loss Function Search for Face Recognition [75.79325080027908]
We develop a reward-guided search method to automatically obtain the best candidate.
Experimental results on a variety of face recognition benchmarks have demonstrated the effectiveness of our method.
arXiv Detail & Related papers (2020-07-10T03:40:10Z)
- Disentanglement for Discriminative Visual Recognition [7.954325638519141]
This chapter systematically summarizes the detrimental factors as task-relevant/irrelevant semantic variations and unspecified latent variation.
Better FER performance can be achieved by combining a deep metric loss and the softmax loss in a unified framework with two fully-connected-layer branches.
The framework achieves top performance on a series of tasks, including lighting-, makeup-, and disguise-tolerant face recognition and facial attribute recognition.
arXiv Detail & Related papers (2020-06-14T06:10:51Z)