Deep Ranking with Adaptive Margin Triplet Loss
- URL: http://arxiv.org/abs/2107.06187v1
- Date: Tue, 13 Jul 2021 15:37:20 GMT
- Title: Deep Ranking with Adaptive Margin Triplet Loss
- Authors: Mai Lan Ha and Volker Blanz
- Abstract summary: We propose a simple modification from a fixed margin triplet loss to an adaptive margin triplet loss.
Our proposed loss is well suited for rating datasets in which the ratings are continuous values.
- Score: 5.220120772989114
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a simple modification from a fixed margin triplet loss to an
adaptive margin triplet loss. While the original triplet loss is used widely in
classification problems such as face recognition, face re-identification and
fine-grained similarity, our proposed loss is well suited for rating datasets
in which the ratings are continuous values. In contrast to the original
triplet loss, where data have to be sampled carefully, our method can generate
triplets using the whole dataset, and the optimization still converges
without frequently running into model collapse. The adaptive margins
only need to be computed once before training, which is much less expensive
than regenerating triplets after every epoch as in the fixed-margin case. Besides
substantially improved training stability (the proposed model never collapsed
in our experiments, whereas training with the existing triplet loss collapsed
a couple of times), we achieved slightly better performance than the
original triplet loss on various rating datasets and network architectures.
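The mechanism in the abstract, a per-triplet margin computed once from the continuous ratings, is easy to sketch. Below is a minimal PyTorch illustration, not the authors' code: the margin formula (the gap between the anchor-negative and anchor-positive rating differences, scaled by a hypothetical `scale` factor) is an assumption made for illustration.

```python
import torch.nn.functional as F

def adaptive_margin_triplet_loss(anchor, positive, negative,
                                 y_a, y_p, y_n, scale=1.0):
    """Triplet loss whose margin varies per triplet with the rating gaps.

    anchor/positive/negative: (B, D) embedding batches.
    y_a/y_p/y_n: (B,) continuous ground-truth ratings.
    """
    d_pos = F.pairwise_distance(anchor, positive)  # anchor-positive distances
    d_neg = F.pairwise_distance(anchor, negative)  # anchor-negative distances
    # Assumed margin: how much farther the negative's rating is from the
    # anchor's rating than the positive's is. It depends only on labels,
    # so it can be precomputed once for all triplets before training.
    margin = scale * ((y_a - y_n).abs() - (y_a - y_p).abs()).clamp(min=0.0)
    return F.relu(d_pos - d_neg + margin).mean()
```

Because the margins depend only on the ratings, any triple of rated items yields a usable triplet, which is why triplet generation can be a one-off preprocessing step rather than a per-epoch resampling loop.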
Related papers
- Byzantine-resilient Federated Learning With Adaptivity to Data Heterogeneity [54.145730036889496]
This paper deals with federated learning (FL) in the presence of malicious Byzantine attacks.
A novel Robust Average Gradient Algorithm (RAGA) is proposed, which leverages robust aggregation and can freely select the number of local updating rounds.
arXiv Detail & Related papers (2024-03-20T08:15:08Z)
- A Recipe for Efficient SBIR Models: Combining Relative Triplet Loss with Batch Normalization and Knowledge Distillation [3.364554138758565]
Sketch-Based Image Retrieval (SBIR) is a crucial task in multimedia retrieval, where the goal is to retrieve a set of images that match a given sketch query.
We introduce Relative Triplet Loss (RTL), an adapted triplet loss that overcomes these limitations through loss weighting based on anchor similarity.
We propose a straightforward approach to train small models efficiently with a marginal loss of accuracy through knowledge distillation.
arXiv Detail & Related papers (2023-05-30T12:41:04Z)
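For the RTL paper above, the core idea reads as weighting each triplet's loss rather than changing the margin. A rough, hypothetical sketch of such weighting (the paper's exact anchor-similarity weighting scheme differs in detail):

```python
import torch.nn.functional as F

def weighted_triplet_loss(d_pos, d_neg, weight, margin=0.2):
    """Triplet hinge loss with a per-triplet weight (sketch).

    d_pos, d_neg: (B,) anchor-positive / anchor-negative distances.
    weight: (B,) precomputed per-anchor weights, e.g. derived from how
        similar the negative is to the anchor (an illustrative choice).
    """
    return (weight * F.relu(d_pos - d_neg + margin)).mean()
```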
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
arXiv Detail & Related papers (2023-04-04T03:48:56Z)
- Self-Supervised Monocular Depth Estimation: Solving the Edge-Fattening Problem [39.82550656611876]
Triplet loss, popular for metric learning, has achieved great success in many computer vision tasks.
We show two drawbacks of the raw triplet loss in monocular depth estimation (MDE) and demonstrate our problem-driven redesigns.
arXiv Detail & Related papers (2022-10-02T03:08:59Z)
- Supervised Contrastive Learning to Classify Paranasal Anomalies in the Maxillary Sinus [43.850343556811275]
Deep learning techniques can automatically detect anomalies of the paranasal sinus system in MRI images.
Existing deep learning methods in paranasal anomaly classification have been used to diagnose at most one anomaly.
We propose a novel learning paradigm that combines contrastive loss and cross-entropy loss.
arXiv Detail & Related papers (2022-09-05T12:31:28Z)
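The combination proposed in the paranasal paper above is, in the usual pattern, a weighted sum of a contrastive term and cross-entropy; a minimal sketch with an assumed weighting factor `lam`:

```python
import torch.nn.functional as F

def combined_loss(logits, labels, contrastive_term, lam=0.5):
    """Cross-entropy plus a contrastive loss term (sketch).

    contrastive_term: a precomputed scalar, e.g. a SupCon-style loss on the
    embedding batch. The weighting `lam` and the exact combination rule are
    assumptions; the paper may balance the terms differently.
    """
    return F.cross_entropy(logits, labels) + lam * contrastive_term
```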
- Label Distributionally Robust Losses for Multi-class Classification: Consistency, Robustness and Adaptivity [55.29408396918968]
We study a family of loss functions named label-distributionally robust (LDR) losses for multi-class classification.
Our contributions cover both consistency and robustness, establishing top-$k$ consistency of LDR losses for multi-class classification.
We propose a new adaptive LDR loss that automatically adapts an individualized temperature parameter to the label-noise level of each instance.
arXiv Detail & Related papers (2021-12-30T00:27:30Z)
- Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z)
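The weighted ERM estimator described above corrects for the bias of adaptive collection by reweighting each sample's loss. A schematic inverse-propensity version follows; the logged propensities and the exact weighting are simplifying assumptions, and the paper's algorithm and guarantees are more refined than this sketch:

```python
import torch

def importance_weighted_erm_loss(losses, propensities):
    """Importance-sampling weighted empirical risk (sketch).

    losses: (N,) per-sample losses under the current hypothesis.
    propensities: (N,) probabilities with which the adaptive collection
        policy selected each sample's action (assumed logged and known).
    """
    weights = 1.0 / propensities.clamp(min=1e-8)  # inverse-propensity weights
    return (weights * losses).mean()
```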
- Strong but Simple Baseline with Dual-Granularity Triplet Loss for Visible-Thermal Person Re-Identification [9.964287254346976]
We propose a conceptually simple and effective dual-granularity triplet loss for visible-thermal person re-identification (VT-ReID).
Our proposed dual-granularity triplet loss organizes the sample-based and center-based triplet losses in a hierarchical fine-to-coarse manner.
arXiv Detail & Related papers (2020-12-09T12:43:34Z)
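For the dual-granularity paper above, the coarse component operates on per-identity feature centers rather than individual samples. Below is a sketch of a center-level triplet term under assumptions: two modalities whose same-identity centers act as anchor/positive and whose hardest other-identity center is the negative. The paper's hierarchy combines such a term with a sample-level triplet loss.

```python
import torch
import torch.nn.functional as F

def center_triplet_loss(feat_a, feat_b, labels, margin=0.3):
    """Coarse, center-level triplet term for two modalities (sketch).

    feat_a, feat_b: (N, D) embeddings from the two modalities, e.g. visible
    and thermal; labels: (N,) identity labels present in both modalities.
    """
    ids = labels.unique()
    ca = torch.stack([feat_a[labels == i].mean(0) for i in ids])  # (K, D)
    cb = torch.stack([feat_b[labels == i].mean(0) for i in ids])  # (K, D)
    d_pos = F.pairwise_distance(ca, cb)        # same-identity center pairs
    d_all = torch.cdist(ca, cb)                # (K, K) cross-center distances
    eye = torch.eye(len(ids), dtype=torch.bool)
    d_neg = d_all.masked_fill(eye, float('inf')).min(dim=1).values  # hardest
    return F.relu(d_pos - d_neg + margin).mean()
```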
- Beyond Triplet Loss: Meta Prototypical N-tuple Loss for Person Re-identification [118.72423376789062]
We introduce a multi-class classification loss, i.e., N-tuple loss, to jointly consider multiple (N) instances for per-query optimization.
With the multi-class classification incorporated, our model achieves the state-of-the-art performance on the benchmark person ReID datasets.
arXiv Detail & Related papers (2020-06-08T23:34:08Z)
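In spirit, an N-tuple loss turns the N candidate instances for a query into an N-way classification over similarities. A common softmax instantiation is sketched below; the meta prototypical machinery of the paper above is not reproduced here, and the dot-product similarity and `temperature` are assumptions:

```python
import torch
import torch.nn.functional as F

def n_tuple_loss(query, candidates, target_idx, temperature=1.0):
    """N-way classification over query-candidate similarities (sketch).

    query: (D,) embedding; candidates: (N, D) embeddings, exactly one of
    which (index target_idx) shares the query's identity.
    """
    logits = candidates @ query / temperature        # (N,) similarity scores
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([target_idx]))
```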
- Supervised Contrastive Learning [42.27949000093086]
We extend the self-supervised batch contrastive approach to the fully-supervised setting.
We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss.
On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture.
arXiv Detail & Related papers (2020-04-23T17:58:56Z)
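The SupCon formulation itself is published; below is a compact, unoptimized rendering of its widely cited form, where every same-label sample in the batch is a positive (assuming L2-normalized features):

```python
import torch

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive (SupCon) loss over a batch (sketch).

    features: (N, D) L2-normalized embeddings; labels: (N,) class labels.
    Each anchor is pulled toward all same-label samples and pushed away
    from the rest of the batch.
    """
    n = features.size(0)
    sim = features @ features.t() / temperature          # (N, N) logits
    self_mask = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))      # drop self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)  # row-wise log-softmax
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)        # avoid divide-by-zero
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts)
    return loss.mean()
```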
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.