A Flexible Class of Dependence-aware Multi-Label Loss Functions
- URL: http://arxiv.org/abs/2011.00792v1
- Date: Mon, 2 Nov 2020 07:42:15 GMT
- Title: A Flexible Class of Dependence-aware Multi-Label Loss Functions
- Authors: Eyke Hüllermeier, Marcel Wever, Eneldo Loza Mencía, Johannes Fürnkranz, Michael Rapp
- Abstract summary: This paper introduces a new class of loss functions for multi-label classification.
It overcomes disadvantages of commonly used losses such as Hamming and subset 0/1.
The assessment of multi-label classifiers in terms of these losses is illustrated in an empirical study.
- Score: 4.265467042008983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-label classification is the task of assigning a subset of labels to a
given query instance. For evaluating such predictions, the set of predicted
labels needs to be compared to the ground-truth label set associated with that
instance, and various loss functions have been proposed for this purpose. In
addition to assessing predictive accuracy, a key concern in this regard is to
foster and to analyze a learner's ability to capture label dependencies. In
this paper, we introduce a new class of loss functions for multi-label
classification, which overcome disadvantages of commonly used losses such as
Hamming and subset 0/1. To this end, we leverage the mathematical framework of
non-additive measures and integrals. Roughly speaking, a non-additive measure
allows for modeling the importance of correct predictions of label subsets
(instead of single labels), and thereby their impact on the overall evaluation,
in a flexible way. By giving full importance to single labels and the entire
label set, respectively, Hamming and subset 0/1 are rather extreme in this
regard. We present concrete instantiations of this class, which comprise
Hamming and subset 0/1 as special cases, and which appear to be especially
appealing from a modeling perspective. The assessment of multi-label
classifiers in terms of these losses is illustrated in an empirical study.
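To make the contrast concrete, the following is a minimal sketch (not the paper's exact formulation, which uses non-additive measures and integrals) of how Hamming loss, subset 0/1 loss, and a toy "subset-aware" loss relate. The `measure` argument stands in for a hypothetical set function assigning importance to subsets of correctly predicted labels; an additive (uniform) measure recovers Hamming loss, and an all-or-nothing measure recovers subset 0/1.

```python
def hamming_loss(y_true, y_pred):
    """Fraction of individual label positions predicted incorrectly."""
    assert len(y_true) == len(y_pred)
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def subset_zero_one_loss(y_true, y_pred):
    """1 if the predicted label vector differs from the ground truth at all."""
    return 0.0 if y_true == y_pred else 1.0

def subset_aware_loss(y_true, y_pred, measure):
    """Toy dependence-aware loss: 1 minus the (hypothetical non-additive)
    measure of the set of correctly predicted label positions."""
    correct = frozenset(
        i for i, (t, p) in enumerate(zip(y_true, y_pred)) if t == p
    )
    return 1.0 - measure(correct)

n = 4
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]  # one of the four labels is wrong

# Additive measure: each label contributes 1/n -> recovers Hamming loss.
additive = lambda s: len(s) / n
# All-or-nothing measure: only the full label set counts -> subset 0/1 loss.
all_or_nothing = lambda s: 1.0 if len(s) == n else 0.0

print(hamming_loss(y_true, y_pred))                       # 0.25
print(subset_zero_one_loss(y_true, y_pred))               # 1.0
print(subset_aware_loss(y_true, y_pred, additive))        # 0.25
print(subset_aware_loss(y_true, y_pred, all_or_nothing))  # 1.0
```

Measures between these two extremes would reward partially correct subsets at intermediate rates, which is the flexibility the paper's framework is after.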
Related papers
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- An Effective Approach for Multi-label Classification with Missing Labels [8.470008570115146]
We propose a pseudo-label based approach to reduce the cost of annotation without bringing additional complexity to the classification networks.
By designing a novel loss function, we are able to relax the requirement that each instance must contain at least one positive label.
We show that our method can handle the imbalance between positive labels and negative labels, while still outperforming existing missing-label learning approaches.
arXiv Detail & Related papers (2022-10-24T23:13:57Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z)
- Unbiased Loss Functions for Multilabel Classification with Missing Labels [2.1549398927094874]
Missing labels are a ubiquitous phenomenon in extreme multi-label classification (XMC) tasks.
This paper derives the unique unbiased estimators for the different multilabel reductions.
arXiv Detail & Related papers (2021-09-23T10:39:02Z)
- Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces [64.23172847182109]
We show that different negative sampling schemes implicitly trade-off performance on dominant versus rare labels.
We provide a unified means to explicitly tackle both sampling bias, arising from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance.
arXiv Detail & Related papers (2021-05-12T15:40:13Z)
- Learning Gradient Boosted Multi-label Classification Rules [4.842945656927122]
We propose an algorithm for learning multi-label classification rules that is able to minimize decomposable as well as non-decomposable loss functions.
We analyze the abilities and limitations of our approach on synthetic data and evaluate its predictive performance on multi-label benchmarks.
arXiv Detail & Related papers (2020-06-23T21:39:23Z)
- Structured Prediction with Partial Labelling through the Infimum Loss [85.4940853372503]
The goal of weak supervision is to enable models to learn using only forms of labelling which are cheaper to collect.
This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the real one.
This paper provides a unified framework based on structured prediction and on the concept of infimum loss to deal with partial labelling.
arXiv Detail & Related papers (2020-03-02T13:59:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.