Credal Self-Supervised Learning
- URL: http://arxiv.org/abs/2106.11853v1
- Date: Tue, 22 Jun 2021 15:19:04 GMT
- Title: Credal Self-Supervised Learning
- Authors: Julian Lienen, Eyke Hüllermeier
- Abstract summary: We show how to let the learner generate "pseudo-supervision" for unlabeled instances.
In combination with consistency regularization, pseudo-labeling has shown promising performance in various domains.
We compare our methodology to state-of-the-art self-supervision approaches.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-training is an effective approach to semi-supervised learning. The key
idea is to let the learner itself iteratively generate "pseudo-supervision" for
unlabeled instances based on its current hypothesis. In combination with
consistency regularization, pseudo-labeling has shown promising performance in
various domains, for example in computer vision. To account for the
hypothetical nature of the pseudo-labels, these are commonly provided in the
form of probability distributions. Still, one may argue that even a probability
distribution represents an excessive level of informedness, as it suggests that
the learner precisely knows the ground-truth conditional probabilities. In our
approach, we therefore allow the learner to label instances in the form of
credal sets, that is, sets of (candidate) probability distributions. Thanks to
this increased expressiveness, the learner is able to represent uncertainty and
a lack of knowledge in a more flexible and more faithful manner. To learn from
weakly labeled data of that kind, we leverage methods that have recently been
proposed in the realm of so-called superset learning. In an exhaustive
empirical evaluation, we compare our methodology to state-of-the-art
self-supervision approaches, showing competitive to superior performance
especially in low-label scenarios incorporating a high degree of uncertainty.
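To make the idea concrete, here is a minimal sketch (hypothetical code, not the authors' implementation): the teacher's prediction on an unlabeled instance is relaxed into an interval-valued credal set, and the superset-learning step scores the learner optimistically, i.e. with the minimum cross-entropy over all distributions in the set. The relaxation knob `alpha` and all names are illustrative assumptions; the paper's exact credal-set construction and loss are not reproduced here.

```python
import numpy as np

def min_ce_over_credal_set(lower, upper, q, eps=1e-12):
    # Cross-entropy -sum_k p_k * log q_k is linear in p, so its minimum over
    # the credal set {p : lower <= p <= upper, sum(p) = 1} can be found
    # greedily: start from the lower bounds and assign the remaining mass
    # to the classes the current model already favors most.
    log_q = np.log(q + eps)
    p = lower.astype(float).copy()
    budget = 1.0 - p.sum()                 # probability mass still to assign
    for k in np.argsort(-log_q):           # most-favored classes first
        add = min(upper[k] - p[k], budget)
        p[k] += add
        budget -= add
        if budget <= 0.0:
            break
    return -(p * log_q).sum(), p

# Relax a teacher prediction into an interval-valued credal pseudo-label.
p_hat = np.array([0.7, 0.2, 0.1])          # prediction on an unlabeled point
alpha = 0.15                               # imprecision level (hypothetical)
lower = np.clip(p_hat - alpha, 0.0, 1.0)
upper = np.clip(p_hat + alpha, 0.0, 1.0)

q = np.array([0.6, 0.3, 0.1])              # current learner prediction
loss, p_star = min_ce_over_credal_set(lower, upper, q)
```

The larger `alpha` is, the wider the credal set and the weaker (more cautious) the pseudo-supervision, which is exactly the flexibility the abstract argues for in low-label, high-uncertainty regimes.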
Related papers
- Self-Knowledge Distillation for Learning Ambiguity [11.755814660833549]
Recent language models often overconfidently predict a single label without considering whether it is correct.
We propose a novel self-knowledge distillation method that enables models to learn label distributions more accurately.
We validate our method on diverse NLU benchmark datasets and the experimental results demonstrate its effectiveness in producing better label distributions.
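A generic self-distillation term along these lines might look as follows (a sketch under assumptions, not the paper's exact objective; `past_probs` and `alpha` are hypothetical names):

```python
import numpy as np

def self_distillation_loss(probs, past_probs, hard_label, alpha=0.5, eps=1e-12):
    # Blend the usual cross-entropy with the hard label and a KL term that
    # pulls the current prediction toward a label distribution distilled
    # from the model's own earlier predictions (e.g. a moving average).
    ce = -np.log(probs[hard_label] + eps)
    kl = np.sum(past_probs * (np.log(past_probs + eps) - np.log(probs + eps)))
    return (1.0 - alpha) * ce + alpha * kl  # alpha: hypothetical mixing weight
```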
arXiv Detail & Related papers (2024-06-14T05:11:32Z)
- Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
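That reduction can be made concrete with a small data-construction sketch (illustrative only; the paper's unbiased risk estimator is not reproduced here):

```python
def to_negative_unlabeled(xs, comp_labels, num_classes):
    # An instance with complementary label k is a certain negative for
    # class k; for every other class it carries no information, i.e. it is
    # unlabeled (None). This yields one binary problem per class.
    problems = []
    for k in range(num_classes):
        labels = [0 if cl == k else None for cl in comp_labels]
        problems.append(list(zip(xs, labels)))
    return problems
```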
arXiv Detail & Related papers (2023-11-27T02:59:17Z)
- Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk-correction approach to mitigate overfitting, and prove its consistency and convergence rate.
arXiv Detail & Related papers (2023-10-09T11:44:50Z)
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
When the candidate sets themselves may be unreliable, the problem becomes Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to make the model robust to unreliable partial labels.
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
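One plausible instantiation of class-aware thresholding, assuming per-class positive rates are aligned with estimated class proportions (names and procedure are illustrative, not necessarily CAP's exact rule):

```python
import numpy as np

def class_aware_pseudo_labels(scores, class_priors):
    # scores: (n, K) predicted probabilities on unlabeled multi-label data.
    # Instead of one global confidence cut-off, each class k gets its own
    # threshold, chosen so the positive rate matches class_priors[k].
    pseudo = np.zeros_like(scores, dtype=int)
    for k in range(scores.shape[1]):
        thresh = np.quantile(scores[:, k], 1.0 - class_priors[k])
        pseudo[:, k] = (scores[:, k] >= thresh).astype(int)
    return pseudo
```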
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Uncertainty-aware Label Distribution Learning for Facial Expression Recognition [13.321770808076398]
We propose a new uncertainty-aware label distribution learning method to improve the robustness of deep models against uncertainty and ambiguity.
Our method can be easily integrated into a deep network to obtain more training supervision and improve recognition accuracy.
arXiv Detail & Related papers (2022-09-21T15:48:41Z)
- Going Beyond One-Hot Encoding in Classification: Can Human Uncertainty Improve Model Performance? [14.610038284393166]
We embed label uncertainty explicitly into the training process via distributional labels.
The incorporation of label uncertainty helps the model to generalize better to unseen data and increases model performance.
Similar to existing calibration methods, the distributional labels lead to better-calibrated probabilities, which in turn yield more certain and trustworthy predictions.
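Mechanically, this amounts to replacing one-hot targets with full target distributions in the cross-entropy; a minimal sketch:

```python
import numpy as np

def soft_label_cross_entropy(pred_probs, label_dist, eps=1e-12):
    # Standard cross-entropy, but against a full target distribution
    # instead of a one-hot vector: H(t, q) = -sum_k t_k * log q_k.
    return -np.sum(label_dist * np.log(pred_probs + eps))

# E.g. 7 of 10 annotators chose class 0 and 3 chose class 1:
target = np.array([0.7, 0.3, 0.0])
pred = np.array([0.6, 0.3, 0.1])
loss = soft_label_cross_entropy(pred, target)
```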
arXiv Detail & Related papers (2022-05-30T17:19:11Z)
- Conformal Credal Self-Supervised Learning [7.170735702082675]
In semi-supervised learning, the paradigm of self-training refers to the idea of learning from pseudo-labels suggested by the learner itself.
One such method, so-called credal self-supervised learning, maintains pseudo-supervision in the form of sets of (instead of single) probability distributions over labels.
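The conformal ingredient can be illustrated with the standard split-conformal recipe (a generic sketch, not necessarily the paper's exact construction):

```python
import numpy as np

def conformal_label_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    # Split conformal prediction: score each calibration point by
    # 1 - p(true label), take a finite-sample-corrected (1 - alpha)
    # quantile, and include every label whose predicted probability
    # clears the resulting threshold. Coverage holds marginally at 1 - alpha.
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = min(np.ceil((n + 1) * (1.0 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level)
    return [np.flatnonzero(p >= 1.0 - qhat) for p in test_probs]
```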
arXiv Detail & Related papers (2022-05-30T16:53:16Z)
- Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
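The core ingredient is a supervised contrastive objective that treats same-class instances as positives; a compact sketch, assuming L2-normalized representations (not the paper's full training pipeline):

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    # z: (n, d) L2-normalized representations, labels: (n,) integer array.
    # Each instance is pulled toward all other instances sharing its label
    # (positives) and pushed away from the rest, via a softmax over
    # pairwise similarities scaled by temperature tau.
    n = z.shape[0]
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)            # exclude self-comparisons
    log_denom = np.log(np.exp(sim).sum(axis=1))
    total, count = 0.0, 0
    for i in range(n):
        pos = (labels == labels[i])
        pos[i] = False
        if pos.any():                          # mean negative log-likelihood
            total += np.mean(log_denom[i] - sim[i, pos])
            count += 1
    return total / max(count, 1)
```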
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- An Effective Baseline for Robustness to Distributional Shift [5.627346969563955]
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems.
We present a simple, but highly effective approach to deal with out-of-distribution detection that uses the principle of abstention.
arXiv Detail & Related papers (2021-05-15T00:46:11Z)
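The abstention principle can be instantiated minimally with a confidence threshold (a sketch; the paper's actual mechanism is more involved, and the threshold value here is a hypothetical choice):

```python
import numpy as np

ABSTAIN = -1  # sentinel meaning "no prediction"

def predict_with_abstention(probs, threshold=0.9):
    # Commit to a class only when the maximum softmax confidence clears
    # the threshold; otherwise abstain, so the input can be flagged as
    # potentially out-of-distribution instead of forcing a prediction.
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, ABSTAIN)
```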
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.