A Quadruplet Loss for Enforcing Semantically Coherent Embeddings in
Multi-output Classification Problems
- URL: http://arxiv.org/abs/2002.11644v3
- Date: Fri, 20 Mar 2020 16:53:14 GMT
- Title: A Quadruplet Loss for Enforcing Semantically Coherent Embeddings in
Multi-output Classification Problems
- Authors: Hugo Proença, Ehsan Yaghoubi, and Pendar Alirezazadeh
- Abstract summary: This paper describes an objective function for learning semantically coherent feature embeddings in multi-output classification problems.
We consider the problems of identity retrieval and soft biometrics labelling in visual surveillance environments.
- Score: 5.972927416266617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes an objective function for learning semantically
coherent feature embeddings in multi-output classification problems, i.e., when
the response variables have dimension higher than one. In particular, we
consider the problems of identity retrieval and soft biometrics labelling in
visual surveillance environments, which have been attracting growing interest.
Inspired by the triplet loss function [34], we propose a generalization that:
1) defines a metric that considers the number of agreeing labels between pairs
of elements; and 2) disregards the notion of anchor, replacing d(A1, A2) <
d(A1, B) by d(A, B) < d(C, D), where the pairs (A, B) and (C, D) are chosen
according to the number of agreeing labels between their elements. Like the
triplet loss formulation, our proposal privileges small distances between
positive pairs, but at the same time explicitly enforces that the distance
between other pairs corresponds directly to their similarity in terms of
agreeing labels. This yields feature embeddings with a strong correspondence
between the class centroids and their semantic descriptions, i.e., where
elements are closer to others that share some of their labels than to elements
with fully disjoint label membership. As a practical effect, the proposed loss
is particularly suitable for performing joint coarse (soft label) + fine (ID)
inference, based on simple rules such as k-nearest neighbours, which is a
novelty with respect to previous related loss functions. Also, unlike its
triplet counterpart, the proposed loss is agnostic to any demanding criteria
for mining learning instances (such as semi-hard pairs). Our experiments were
carried out on five datasets (BIODI, LFW, IJB-A, Megaface and PETA) and
validate our assumptions, showing highly promising results.
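The core constraint above, d(A, B) < d(C, D) whenever the pair (A, B) agrees on more labels than (C, D), can be sketched as a margin-based hinge penalty, analogous to the triplet loss. The snippet below is a minimal NumPy illustration, not the authors' implementation: the Euclidean distance, the margin value, and the simple element-wise label-agreement count are all assumptions for the sketch.

```python
import numpy as np

def label_agreement(labels_a, labels_b):
    # Count of agreeing labels between two elements (assumed element-wise match).
    return int(np.sum(np.asarray(labels_a) == np.asarray(labels_b)))

def quadruplet_loss(emb, labels, margin=0.2):
    """Hinge penalty enforcing d(A, B) < d(C, D) whenever the pair (A, B)
    agrees on more labels than the pair (C, D). Illustrative sketch only;
    a real implementation would sample quadruplets rather than enumerate all."""
    n = len(emb)
    dist = lambda i, j: float(np.linalg.norm(emb[i] - emb[j]))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    total, count = 0.0, 0
    for (a, b) in pairs:
        for (c, d) in pairs:
            # Pairs sharing more labels should lie closer in the embedding.
            if label_agreement(labels[a], labels[b]) > label_agreement(labels[c], labels[d]):
                total += max(0.0, dist(a, b) - dist(c, d) + margin)
                count += 1
    return total / max(count, 1)
```

With embeddings that already place same-label elements close together, every hinge term vanishes and the loss is zero; when a same-label pair is farther apart than a disjoint-label pair, the corresponding terms become positive.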
Related papers
- Semantic F1 Scores: Fair Evaluation Under Fuzzy Class Boundaries [65.89202599399252]
We propose Semantic F1 Scores, novel evaluation metrics for subjective or fuzzy multi-label classification.
By granting partial credit for semantically related but nonidentical labels, Semantic F1 better reflects the realities of domains marked by human disagreement or fuzzy category boundaries.
arXiv Detail & Related papers (2025-09-25T21:48:48Z) - Bipartite Ranking From Multiple Labels: On Loss Versus Label Aggregation [66.28528968249255]
Bipartite ranking is a fundamental supervised learning problem, with the goal of learning a ranking over instances with maximal area under the ROC curve (AUC) against a single binary target label.
When multiple binary labels are available per instance, how can one synthesize them into a single coherent ranking?
We analyze two approaches to this problem -- loss aggregation and label aggregation -- by characterizing their Bayes-optimal solutions.
arXiv Detail & Related papers (2025-04-15T15:25:27Z) - Multi-Label Classification for Implicit Discourse Relation Recognition [10.280148603465697]
We explore various multi-label classification frameworks to handle implicit discourse relation recognition.
We show that multi-label classification methods do not degrade performance for single-label prediction.
arXiv Detail & Related papers (2024-06-06T19:37:25Z) - REPAIR: Rank Correlation and Noisy Pair Half-replacing with Memory for
Noisy Correspondence [36.274879585424635]
The presence of noise in acquired data invariably leads to performance degradation in cross-modal matching.
We propose REPAIR, a framework based on rank correlation and noisy-pair half-replacing with memory, to tackle the mismatched data-pair issue.
arXiv Detail & Related papers (2024-03-13T04:01:20Z) - Generating Unbiased Pseudo-labels via a Theoretically Guaranteed
Chebyshev Constraint to Unify Semi-supervised Classification and Regression [57.17120203327993]
The threshold-to-pseudo-label process (T2L) in classification uses confidence to determine the quality of a label.
By nature, regression also requires unbiased methods to generate high-quality labels.
We propose a theoretically guaranteed constraint for generating unbiased labels based on Chebyshev's inequality.
arXiv Detail & Related papers (2023-11-03T08:39:35Z) - Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection [98.66771688028426]
We propose Ambiguity-Resistant Semi-supervised Learning (ARSL) for one-stage detectors.
Joint-Confidence Estimation (JCE) is proposed to quantify the classification and localization quality of pseudo labels.
ARSL effectively mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS COCO and PASCAL VOC.
arXiv Detail & Related papers (2023-03-27T07:46:58Z) - None Class Ranking Loss for Document-Level Relation Extraction [22.173080823450498]
Document-level relation extraction (RE) aims at extracting relations among entities expressed across multiple sentences.
In a typical document, most entity pairs do not express any pre-defined relation and are labeled as "none" or "no relation".
arXiv Detail & Related papers (2022-05-01T14:24:37Z) - Clusterability as an Alternative to Anchor Points When Learning with
Noisy Labels [7.920797564912219]
We propose an efficient estimation procedure based on a clusterability condition.
Compared with methods using anchor points, our approach uses substantially more instances and benefits from a much better sample complexity.
arXiv Detail & Related papers (2021-02-10T07:22:56Z) - Rank-Consistency Deep Hashing for Scalable Multi-Label Image Search [90.30623718137244]
We propose a novel deep hashing method for scalable multi-label image search.
A new rank-consistency objective is applied to align the similarity orders from two spaces.
A powerful loss function is designed to penalize the samples whose semantic similarity and Hamming distance are mismatched.
arXiv Detail & Related papers (2021-02-02T13:46:58Z) - R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic
Matching [58.72111690643359]
We propose a Relation of Relation Learning Network (R2-Net) for sentence semantic matching.
We first employ BERT to encode the input sentences from a global perspective.
Then a CNN-based encoder is designed to capture keywords and phrase information from a local perspective.
To fully leverage labels for better relation information extraction, we introduce a self-supervised relation of relation classification task.
arXiv Detail & Related papers (2020-12-16T13:11:30Z) - Learning to Decouple Relations: Few-Shot Relation Classification with
Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an EGA mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z) - Pointwise Binary Classification with Pairwise Confidence Comparisons [97.79518780631457]
We propose pairwise comparison (Pcomp) classification, where we have only pairs of unlabeled data for which we know that one is more likely to be positive than the other.
We link Pcomp classification to noisy-label learning to develop a progressive unbiased risk estimator (URE) and improve it by imposing consistency regularization.
arXiv Detail & Related papers (2020-10-05T09:23:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.