Conformal Credal Self-Supervised Learning
- URL: http://arxiv.org/abs/2205.15239v2
- Date: Fri, 9 Jun 2023 13:30:44 GMT
- Title: Conformal Credal Self-Supervised Learning
- Authors: Julian Lienen, Caglar Demir, Eyke Hüllermeier
- Abstract summary: In semi-supervised learning, the paradigm of self-training refers to the idea of learning from pseudo-labels suggested by the learner itself.
One such method, so-called credal self-supervised learning, maintains pseudo-supervision in the form of sets of (instead of single) probability distributions over labels.
- Score: 7.170735702082675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In semi-supervised learning, the paradigm of self-training refers to the idea
of learning from pseudo-labels suggested by the learner itself. Across various
domains, corresponding methods have proven effective and achieve
state-of-the-art performance. However, pseudo-labels typically stem from ad-hoc
heuristics, relying on the quality of the predictions though without
guaranteeing their validity. One such method, so-called credal self-supervised
learning, maintains pseudo-supervision in the form of sets of (instead of
single) probability distributions over labels, thereby allowing for a flexible
yet uncertainty-aware labeling. Again, however, there is no justification
beyond empirical effectiveness. To address this deficiency, we make use of
conformal prediction, an approach that comes with guarantees on the validity of
set-valued predictions. As a result, the construction of credal sets of labels
is supported by a rigorous theoretical foundation, leading to better calibrated
and less error-prone supervision for unlabeled data. Along with this, we
present effective algorithms for learning from credal self-supervision. An
empirical study demonstrates excellent calibration properties of the
pseudo-supervision, as well as the competitiveness of our method on several
benchmark datasets.
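To make the mechanism concrete, below is a minimal sketch of split conformal prediction used to turn softmax outputs into set-valued pseudo-labels for unlabeled instances. It is only an illustration of the general technique under stated assumptions (a held-out calibration set and the standard one-minus-true-class-probability nonconformity score); it is not the authors' exact algorithm, and all names (`conformal_quantile`, `set_valued_pseudo_labels`, the miscoverage level `alpha`) are hypothetical.

```python
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration step: the nonconformity score of a
    calibration example is 1 minus the probability predicted for its
    true class; return the finite-sample-corrected (1 - alpha) quantile."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def set_valued_pseudo_labels(unlabeled_probs, q_hat):
    """For each unlabeled instance, keep every class whose nonconformity
    score does not exceed the calibrated threshold. Under exchangeability,
    the resulting set covers the true label with probability >= 1 - alpha."""
    return [np.flatnonzero(1.0 - p <= q_hat) for p in unlabeled_probs]

# Toy usage with random stand-in softmax outputs (10 classes).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10), size=500)      # calibration predictions
cal_labels = rng.integers(0, 10, size=500)            # calibration ground truth
unlabeled_probs = rng.dirichlet(np.ones(10), size=5)  # predictions to pseudo-label

q_hat = conformal_quantile(cal_probs, cal_labels, alpha=0.1)
pseudo_label_sets = set_valued_pseudo_labels(unlabeled_probs, q_hat)
```

In the paper's setting, such prediction sets would additionally be lifted to credal sets, i.e., sets of probability distributions over labels, against which the learner is trained; that step is omitted in this sketch.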
Related papers
- Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
The setting in which these candidate labels may themselves be unreliable is known as Unreliable Partial Label Learning (UPLL); it introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to make the model robust to unreliable partial labels.
arXiv Detail & Related papers (2023-08-31T13:37:28Z) - Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label
Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z) - Debiased Pseudo Labeling in Self-Training [77.83549261035277]
Deep neural networks achieve remarkable performance on a wide range of tasks with the aid of large-scale labeled datasets.
To reduce the need for labeled data, self-training is widely used in both academia and industry, generating pseudo-labels on readily available unlabeled data.
We propose Debiased, in which the generation and utilization of pseudo labels are decoupled by two independent heads.
arXiv Detail & Related papers (2022-02-15T02:14:33Z) - Predictive Inference with Weak Supervision [3.1925030748447747]
We bridge the gap between partial supervision and validation by developing a conformal prediction framework.
We introduce a new notion of coverage and predictive validity, then develop several application scenarios.
We corroborate the hypothesis that the new coverage definition allows for tighter and more informative (but valid) confidence sets.
arXiv Detail & Related papers (2022-01-20T17:26:52Z) - Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z) - Uncertainty-aware Mean Teacher for Source-free Unsupervised Domain
Adaptive 3D Object Detection [6.345037597566315]
Pseudo-label-based self-training approaches are popular for source-free unsupervised domain adaptation.
We propose an uncertainty-aware mean teacher framework which implicitly filters incorrect pseudo-labels during training (a generic sketch of this filtering idea appears after this list).
arXiv Detail & Related papers (2021-09-29T18:17:09Z) - Credal Self-Supervised Learning [0.0]
We show how to let the learner generate "pseudo-supervision" for unlabeled instances.
In combination with consistency regularization, pseudo-labeling has shown promising performance in various domains.
We compare our methodology to state-of-the-art self-supervision approaches.
arXiv Detail & Related papers (2021-06-22T15:19:04Z) - Exploiting Sample Uncertainty for Domain Adaptive Person
Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
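The uncertainty-aware mean-teacher entry above rests on two generic ingredients that are easy to state in code: an exponential-moving-average (EMA) teacher and confidence-based filtering of its pseudo-labels. The sketch below is a hypothetical, framework-free illustration of those ingredients only; the decay and threshold values are assumptions, and it does not reproduce any listed paper's exact method.

```python
import numpy as np

def ema_update(teacher_params, student_params, decay=0.99):
    """Mean-teacher update: the teacher's parameters track an exponential
    moving average of the student's parameters after each training step."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]

def filter_pseudo_labels(teacher_probs, threshold=0.9):
    """Keep only pseudo-labels whose teacher confidence (max class
    probability) reaches the threshold; uncertain predictions are dropped
    so they do not contribute noisy supervision."""
    confidence = teacher_probs.max(axis=1)
    keep = confidence >= threshold
    return teacher_probs.argmax(axis=1)[keep], keep
```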
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences arising from its use.