Rethinking Consistent Multi-Label Classification under Inexact Supervision
- URL: http://arxiv.org/abs/2510.04091v1
- Date: Sun, 05 Oct 2025 08:30:32 GMT
- Title: Rethinking Consistent Multi-Label Classification under Inexact Supervision
- Authors: Wei Wang, Tianhao Ma, Ming-Kun Xie, Gang Niu, Masashi Sugiyama
- Abstract summary: In partial multi-label learning, each instance is annotated with a candidate label set, among which only some labels are relevant. In complementary multi-label learning, each instance is annotated with complementary labels indicating the classes to which the instance does not belong.
- Score: 60.79309683889278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial multi-label learning and complementary multi-label learning are two popular weakly supervised multi-label classification paradigms that aim to alleviate the high annotation costs of collecting precisely annotated multi-label data. In partial multi-label learning, each instance is annotated with a candidate label set, among which only some labels are relevant; in complementary multi-label learning, each instance is annotated with complementary labels indicating the classes to which the instance does not belong. Existing consistent approaches for the two paradigms either require accurate estimation of the generation process of candidate or complementary labels or assume a uniform distribution to eliminate the estimation problem. However, both conditions are usually difficult to satisfy in real-world scenarios. In this paper, we propose consistent approaches that do not rely on the aforementioned conditions to handle both problems in a unified way. Specifically, we propose two unbiased risk estimators based on first- and second-order strategies. Theoretically, we prove consistency w.r.t. two widely used multi-label classification evaluation metrics and derive convergence rates for the estimation errors of the proposed risk estimators. Empirically, extensive experimental results validate the effectiveness of our proposed approaches against state-of-the-art methods.
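To make the two supervision forms concrete, a naive (and biased) baseline for partial multi-label learning simply treats every candidate label as positive and every non-candidate label as negative; the bias it incurs on false candidates inside the candidate set is precisely what consistent estimators like those in this paper aim to remove. A minimal NumPy sketch of that naive baseline (illustrative only, not the paper's estimator):

```python
import numpy as np

def naive_pml_bce(logits, candidate_mask):
    """Biased partial multi-label baseline: binary cross-entropy that
    treats each candidate label as positive and each non-candidate
    label as negative. False positives inside the candidate set make
    this estimator biased, which consistent approaches try to correct.

    logits: (n_samples, n_labels) real-valued classifier scores.
    candidate_mask: (n_samples, n_labels) 0/1 candidate-set indicator.
    """
    z = logits
    y = candidate_mask.astype(float)
    # Numerically stable BCE with logits:
    # max(z, 0) - z*y + log(1 + exp(-|z|))
    return np.mean(np.maximum(z, 0.0) - z * y + np.log1p(np.exp(-np.abs(z))))
```

When the candidate set contains labels that are not truly relevant, minimizing this loss pushes the classifier toward those false positives; consistent risk estimators correct for that without needing the label-generation process.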
Related papers
- Learning from Similarity-Confidence and Confidence-Difference [0.07646713951724009]
We propose a novel Weakly Supervised Learning (WSL) framework that leverages complementary weak supervision signals from multiple perspectives. Specifically, we introduce SconfConfDiff Classification, a method that integrates two distinct forms of weak labels. We prove that both estimators achieve optimal convergence rates with respect to estimation error bounds.
arXiv Detail & Related papers (2025-08-07T07:42:59Z) - Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the high cost of collecting precise multi-label annotations. Unlike in semi-supervised learning, one cannot select the most probable label as the pseudo-label in SSMLL due to the multiple semantics contained in an instance. We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z) - Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
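The negative-unlabeled view can be illustrated generically: a complementary label for class k marks an instance as a known negative for the k-th binary problem, while all other instances are unlabeled for that problem. Under the standard mixture assumption (unlabeled data drawn from pi * P_+ + (1 - pi) * P_-), a textbook unbiased negative-unlabeled risk can be written as below; this is a generic sketch of the idea, not that paper's exact estimator.

```python
import numpy as np

def sigmoid_loss(margin):
    # Smooth surrogate loss l(m) = 1 / (1 + exp(m)); small for large margins.
    return 1.0 / (1.0 + np.exp(margin))

def nu_risk(g_unlabeled, g_negative, prior_neg, loss=sigmoid_loss):
    """Unbiased binary risk estimated from negative and unlabeled scores only.

    Since unlabeled data is the mixture pi * P_+ + (1 - pi) * P_-, the
    positive part of the risk can be rewritten as
        pi * E_+[l(+g)] = E_U[l(+g)] - (1 - pi) * E_N[l(+g)],
    which yields the estimator
        R(g) = E_U[l(+g)] - (1 - pi) * E_N[l(+g)] + (1 - pi) * E_N[l(-g)].
    """
    return (loss(+g_unlabeled).mean()
            - prior_neg * loss(+g_negative).mean()
            + prior_neg * loss(-g_negative).mean())
```

Summing one such binary risk per class recovers a multi-class objective from complementary labels alone, at the cost of needing the class priors (the estimation burden that uniform-distribution assumptions try to sidestep).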
arXiv Detail & Related papers (2023-11-27T02:59:17Z) - Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z) - One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z) - Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z) - Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models [6.177038245239757]
We propose a novel multi-complementary and unlabeled learning framework.
We first give an unbiased estimator of the classification risk from samples with multiple complementary labels.
We then further improve the estimator by incorporating unlabeled samples into the risk formulation.
arXiv Detail & Related papers (2020-01-13T13:52:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.