RLSEP: Learning Label Ranks for Multi-label Classification
- URL: http://arxiv.org/abs/2212.04022v1
- Date: Thu, 8 Dec 2022 00:59:09 GMT
- Title: RLSEP: Learning Label Ranks for Multi-label Classification
- Authors: Emine Dari, V. Bugra Yesilkaynak, Alican Mertan and Gozde Unal
- Abstract summary: Multi-label ranking maps instances to a ranked set of predicted labels from multiple possible classes.
We propose a novel dedicated loss function to optimize models by incorporating penalties for incorrectly ranked pairs.
Our method achieves the best reported performance measures on both synthetic and real-world ranked datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Multi-label ranking maps instances to a ranked set of predicted labels from
multiple possible classes. The ranking approach for multi-label learning
problems has received attention for its success in multi-label classification, with
one of the well-known approaches being pairwise label ranking. However, most
existing methods assume that only partial information about the preference
relation is known, inferred from the partition of labels into a positive and a
negative set, and then treat labels within each set with equal importance. In this
paper, we focus on the unique challenge of ranking when the order of the true
label set is provided. We propose a novel dedicated loss function to optimize
models by incorporating penalties for incorrectly ranked pairs, and make use of
the ranking information present in the input. Our method achieves the best
reported performance measures on both synthetic and real-world ranked datasets
and shows improvements in the overall ranking of labels. Our experimental results
demonstrate that our approach is generalizable to a variety of multi-label
classification and ranking tasks, while revealing a calibration towards a
certain ranking ordering.
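The pairwise penalty idea in the abstract can be sketched as a simple hinge loss over label pairs. This is a minimal illustration, not the paper's exact RLSEP formulation; the margin value and the averaging over pairs are assumptions:

```python
def pairwise_rank_loss(scores, true_ranks, margin=1.0):
    """Hinge-style penalty for incorrectly ranked label pairs.

    scores:     model score per label (higher = more relevant)
    true_ranks: ground-truth rank per label (0 = most relevant)
    For every pair (i, j) where label i truly outranks label j, we
    require scores[i] >= scores[j] + margin and penalize violations.
    """
    loss, n_pairs = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if true_ranks[i] < true_ranks[j]:  # i should be ranked above j
                loss += max(0.0, margin - (scores[i] - scores[j]))
                n_pairs += 1
    return loss / max(n_pairs, 1)

# A correctly ordered scoring with margin-wide gaps incurs zero loss.
print(pairwise_rank_loss([3.0, 2.0, 1.0], [0, 1, 2]))  # 0.0
# Swapping the top two scores produces a positive penalty.
print(pairwise_rank_loss([2.0, 3.0, 1.0], [0, 1, 2]))
```

Because the penalty is defined over ordered pairs of the true label set, it uses the full rank information in the input rather than only a positive/negative partition.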
Related papers
- Online Multi-Label Classification under Noisy and Changing Label Distribution [9.17381554071824]
We propose an online multi-label classification algorithm under Noisy and Changing Label Distribution (NCLD).
The objective is to simultaneously model the label scoring and the label ranking for high accuracy, whose robustness to NCLD is achieved through three novel components.
arXiv Detail & Related papers (2024-10-03T11:16:43Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- GaussianMLR: Learning Implicit Class Significance via Calibrated Multi-Label Ranking [0.0]
We propose a novel multi-label ranking method: GaussianMLR.
It aims to learn implicit class significance values that determine the positive label ranks.
We show that our method is able to accurately learn a representation of the incorporated positive rank order.
arXiv Detail & Related papers (2023-03-07T14:09:08Z)
- An Effective Approach for Multi-label Classification with Missing Labels [8.470008570115146]
We propose a pseudo-label based approach to reduce the cost of annotation without bringing additional complexity to the classification networks.
By designing a novel loss function, we are able to relax the requirement that each instance must contain at least one positive label.
We show that our method can handle the imbalance between positive labels and negative labels, while still outperforming existing missing-label learning approaches.
arXiv Detail & Related papers (2022-10-24T23:13:57Z)
- Multi-label Classification with High-rank and High-order Label Correlations [62.39748565407201]
Previous methods capture the high-order label correlations mainly by transforming the label matrix to a latent label space with low-rank matrix factorization.
We propose a simple yet effective method to depict the high-order label correlations explicitly, and at the same time maintain the high-rank of the label matrix.
Comparative studies over twelve benchmark data sets validate the effectiveness of the proposed algorithm in multi-label classification.
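For contrast with the high-rank approach described above, the low-rank latent label space used by the earlier factorization methods can be sketched with a truncated SVD. The label matrix `Y` and rank `k` here are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical binary label matrix: rows are instances, columns are labels.
Y = np.array([[1, 1, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

# Rank-k truncated SVD: Y ~ U_k diag(s_k) Vt_k, i.e. the latent label
# space that low-rank factorization methods project onto.
k = 2
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Y_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The reconstruction error measures the label-correlation structure
# that a rank-2 latent space discards for this matrix.
print(np.linalg.norm(Y - Y_k))
```

A nonzero reconstruction error on a full-rank label matrix is exactly the information loss that motivates keeping the label matrix high-rank.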
arXiv Detail & Related papers (2022-07-09T05:15:31Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
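The entropy-maximization idea described above can be illustrated with a small binary-entropy sketch. This is a minimal interpretation of the idea, not the paper's exact EM loss; the probability clipping and sign convention are assumptions:

```python
import math

def em_loss(probs, annotated):
    """Entropy-maximization term for unannotated labels.

    probs:     predicted probability per label
    annotated: True where the label has an annotation
    Returns the negative mean binary entropy over unannotated labels,
    so minimizing this term pushes their probabilities toward 0.5
    (maximum entropy) instead of toward 0 as assumed-negative labels.
    """
    ent, n = 0.0, 0
    for p, known in zip(probs, annotated):
        if known:
            continue
        p = min(max(p, 1e-7), 1 - 1e-7)  # numerical safety near 0 and 1
        ent += -(p * math.log(p) + (1 - p) * math.log(1 - p))
        n += 1
    return -ent / max(n, 1)

# p = 0.5 has maximal binary entropy log(2), so the loss is minimized there.
print(em_loss([0.9, 0.5], [True, False]))  # -log(2), about -0.6931
```

Annotated labels are skipped, so the term acts only on the unannotated ones whose sign is genuinely unknown.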
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
- Multi-label Ranking: Mining Multi-label and Label Ranking Data [1.8275108630751844]
We highlight the unique challenges, and re-categorize the methods, as they no longer fit into the traditional categories of transformation and adaptation.
We survey developments in the last demi-decade, with a special focus on state-of-the-art methods in deep learning multi-label mining, extreme multi-label classification and label ranking.
arXiv Detail & Related papers (2021-01-03T08:36:45Z)
- SPL-MLL: Selecting Predictable Landmarks for Multi-Label Learning [87.27700889147144]
We propose to select a small subset of labels as landmarks that are easy to predict from the input (predictable) and can well recover the other possible labels (representative).
We employ the Alternating Direction Method (ADM) to solve our problem. Empirical studies on real-world datasets show that our method achieves superior classification performance over other state-of-the-art methods.
arXiv Detail & Related papers (2020-08-16T11:07:44Z)
- Interaction Matching for Long-Tail Multi-Label Classification [57.262792333593644]
We present an elegant and effective approach for addressing limitations in existing multi-label classification models.
By performing soft n-gram interaction matching, we match labels with natural language descriptions.
arXiv Detail & Related papers (2020-05-18T15:27:55Z)
- Unsupervised Person Re-identification via Multi-label Classification [55.65870468861157]
This paper formulates unsupervised person ReID as a multi-label classification task to progressively seek true labels.
Our method starts by assigning each person image with a single-class label, then evolves to multi-label classification by leveraging the updated ReID model for label prediction.
To boost the ReID model training efficiency in multi-label classification, we propose the memory-based multi-label classification loss (MMCL).
arXiv Detail & Related papers (2020-04-20T12:13:43Z)
- Improving Label Ranking Ensembles using Boosting Techniques [13.782477759025348]
Boosting is a well-known and reliable ensemble technique that was shown to often outperform other learning algorithms.
In this paper, we propose a boosting algorithm which was specifically designed for label ranking tasks.
Extensive evaluation of the proposed algorithm on 24 semi-synthetic and real-world label ranking datasets shows that it significantly outperforms existing state-of-the-art label ranking algorithms.
arXiv Detail & Related papers (2020-01-21T19:16:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.