GaussianMLR: Learning Implicit Class Significance via Calibrated
Multi-Label Ranking
- URL: http://arxiv.org/abs/2303.03907v1
- Date: Tue, 7 Mar 2023 14:09:08 GMT
- Title: GaussianMLR: Learning Implicit Class Significance via Calibrated
Multi-Label Ranking
- Authors: V. Bugra Yesilkaynak, Emine Dari, Alican Mertan, Gozde Unal
- Abstract summary: We propose a novel multi-label ranking method: GaussianMLR.
It aims to learn implicit class significance values that determine the positive label ranks.
We show that our method is able to accurately learn a representation of the incorporated positive rank order.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Existing multi-label frameworks only exploit the information deduced from the
bipartition of the labels into positive and negative sets. Therefore, they do
not benefit from the ranking order between positive labels, which is the
concept we introduce in this paper. We propose a novel multi-label ranking
method: GaussianMLR, which aims to learn implicit class significance values that determine the positive label ranks instead of treating them as equally important, by following an approach that unifies the ranking and classification
tasks associated with multi-label ranking. Due to the scarcity of public
datasets, we introduce eight synthetic datasets generated under varying
importance factors to provide an enriched and controllable experimental
environment for this study. On both real-world and synthetic datasets, we carry
out extensive comparisons with relevant baselines and evaluate performance on both sub-tasks. We show that our method is able to accurately
learn a representation of the incorporated positive rank order, which is not
only consistent with the ground truth but also proportional to the underlying
information. We strengthen our claims empirically by conducting comprehensive
experimental studies. Code is available at
https://github.com/MrGranddy/GaussianMLR.
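The core idea, ranking positive labels by significance while keeping the positive/negative bipartition calibrated, can be illustrated with a toy pairwise hinge loss. This is a simplified sketch, not the authors' Gaussian formulation; the function name, the margin, and the virtual calibration threshold at 0 are assumptions made here for illustration:

```python
import numpy as np

def calibrated_ranking_loss(scores, pos_ranks, margin=1.0):
    """Toy pairwise hinge loss for calibrated multi-label ranking.

    scores    : model scores for all labels, shape (n_labels,)
    pos_ranks : dict {label_index: rank}, rank 1 = most significant
                positive label; labels absent from the dict are negative.
    A virtual calibration score of 0 separates positives (above) from
    negatives (below), and a penalty is added whenever a less
    significant positive label outranks a more significant one.
    """
    loss = 0.0
    pos = sorted(pos_ranks, key=pos_ranks.get)  # most significant first
    neg = [i for i in range(len(scores)) if i not in pos_ranks]
    # Order among positives: more significant labels should score higher.
    for hi, lo in zip(pos, pos[1:]):
        loss += max(0.0, margin - (scores[hi] - scores[lo]))
    # Calibration: positives above the threshold, negatives below it.
    loss += sum(max(0.0, margin - scores[i]) for i in pos)
    loss += sum(max(0.0, margin + scores[i]) for i in neg)
    return loss

# Labels 0 and 1 are positive (0 is the more significant); 2 and 3 are negative.
scores = np.array([2.5, 1.2, -1.5, 0.8])
print(calibrated_ranking_loss(scores, {0: 1, 1: 2}))  # → 1.8
```

Only the violating negative label (score 0.8, above the threshold) contributes to the loss here; the positive order and calibration margins are already satisfied.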
Related papers
- Leveraging Label Semantics and Meta-Label Refinement for Multi-Label Question Classification [11.19022605804112]
This paper introduces RR2QC, a novel Retrieval Reranking method for multi-label Question Classification.
It uses label semantics and meta-label refinement to enhance personalized learning and resource recommendation.
Experimental results demonstrate that RR2QC outperforms existing classification methods in Precision@k and F1 scores.
arXiv Detail & Related papers (2024-11-04T06:27:14Z) - Drawing the Same Bounding Box Twice? Coping Noisy Annotations in Object
Detection with Repeated Labels [6.872072177648135]
We propose a novel localization algorithm that adapts well-established ground truth estimation methods.
Our algorithm also shows superior performance during training on the TexBiG dataset.
arXiv Detail & Related papers (2023-09-18T13:08:44Z) - Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z) - RLSEP: Learning Label Ranks for Multi-label Classification [0.0]
Multi-label ranking maps instances to a ranked set of predicted labels from multiple possible classes.
We propose a novel dedicated loss function to optimize models by incorporating penalties for incorrectly ranked pairs.
Our method achieves the best reported performance measures on both synthetic and real world ranked datasets.
arXiv Detail & Related papers (2022-12-08T00:59:09Z) - A Unified Positive-Unlabeled Learning Framework for Document-Level
Relation Extraction with Different Levels of Labeling [5.367772036988716]
Document-level relation extraction (RE) aims to identify relations between entities across multiple sentences.
We propose a unified positive-unlabeled learning framework - shift and squared ranking loss.
Our method achieves an improvement of about 14 F1 points relative to the previous baseline with incomplete labeling.
arXiv Detail & Related papers (2022-10-17T02:54:49Z) - Binary Classification with Positive Labeling Sources [71.37692084951355]
We propose WEAPO, a simple yet competitive WS method for producing training labels without negative labeling sources.
We show WEAPO achieves the highest averaged performance on 10 benchmark datasets.
arXiv Detail & Related papers (2022-08-02T19:32:08Z) - Multi-label Classification with High-rank and High-order Label
Correlations [62.39748565407201]
Previous methods capture the high-order label correlations mainly by transforming the label matrix to a latent label space with low-rank matrix factorization.
We propose a simple yet effective method to depict the high-order label correlations explicitly, and at the same time maintain the high-rank of the label matrix.
Comparative studies over twelve benchmark data sets validate the effectiveness of the proposed algorithm in multi-label classification.
arXiv Detail & Related papers (2022-07-09T05:15:31Z) - A Theory-Driven Self-Labeling Refinement Method for Contrastive
Representation Learning [111.05365744744437]
Unsupervised contrastive learning labels crops of the same image as positives, and other image crops as negatives.
In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination.
Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning.
arXiv Detail & Related papers (2021-06-28T14:24:52Z) - Pointwise Binary Classification with Pairwise Confidence Comparisons [97.79518780631457]
We propose pairwise comparison (Pcomp) classification, where we have only pairs of unlabeled data for which one instance is known to be more likely positive than the other.
We link Pcomp classification to noisy-label learning to develop a progressive unbiased risk estimator (URE) and improve it by imposing consistency regularization.
arXiv Detail & Related papers (2020-10-05T09:23:58Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
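The pairwise confidence comparison (Pcomp) setting summarized above admits a minimal sketch: a logistic loss on score differences over pairs where the first instance is known to be more likely positive. This is a simplified stand-in for illustration, not the paper's progressive unbiased risk estimator:

```python
import numpy as np

def pcomp_pairwise_loss(f_a, f_b):
    """Toy logistic loss for pairwise confidence comparisons (Pcomp).

    f_a, f_b : model scores for pairs (x_a, x_b) where x_a is assumed
               more likely to be positive than x_b.
    Each pair is penalized when the score difference f_a - f_b is not
    comfortably positive.
    """
    diff = np.asarray(f_a, dtype=float) - np.asarray(f_b, dtype=float)
    return float(np.mean(np.log1p(np.exp(-diff))))

# Two comparison pairs: the first is ordered correctly, the second is not.
print(pcomp_pairwise_loss([2.0, 1.0], [0.0, 1.5]))
```

Minimizing this loss pushes the model to score the "more likely positive" member of every pair higher, which is all the supervision the Pcomp setting provides.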
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of the information on this site is not guaranteed, and the site is not responsible for any consequences of its use.