Adaptive Positive-Unlabelled Learning via Markov Diffusion
- URL: http://arxiv.org/abs/2108.06158v1
- Date: Fri, 13 Aug 2021 10:25:47 GMT
- Title: Adaptive Positive-Unlabelled Learning via Markov Diffusion
- Authors: Paola Stolfi, Andrea Mastropietro, Giuseppe Pasculli, Paolo Tieri,
Davide Vergni
- Abstract summary: Positive-Unlabelled (PU) learning is the machine learning setting in which only a set of positive instances is labelled.
The principal aim of the algorithm is to identify a set of instances which is likely to contain positive instances that were originally unlabelled.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Positive-Unlabelled (PU) learning is the machine learning setting in which
only a set of positive instances is labelled, while the rest of the data set
is unlabelled. The unlabelled instances may be either unspecified positive
samples or true negative samples. Over the years, many solutions have been
proposed to deal with PU learning. Some techniques consider the unlabelled
samples as negative ones, reducing the problem to a binary classification with
a noisy negative set, while others aim to detect sets of possible negative
examples to later apply a supervised machine learning strategy (two-step
techniques). The approach proposed in this work falls in the latter category
and works in a semi-supervised fashion: motivated and inspired by previous
works, a Markov diffusion process with restart is used to assign pseudo-labels
to unlabelled instances. Afterward, a machine learning model, exploiting the
newly assigned classes, is trained. The principal aim of the algorithm is to
identify a set of instances which is likely to contain positive instances that
were originally unlabelled.
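The two-step idea in the abstract, diffusing label information from the labelled positives over a similarity graph via a Markov process with restart, then ranking unlabelled instances by the resulting scores, can be sketched as follows. This is a minimal illustration of a generic random walk with restart, not the authors' exact formulation; the similarity matrix, restart parameter, and convergence criterion are all illustrative assumptions.

```python
import numpy as np

def diffusion_scores(W, positive_idx, restart=0.15, tol=1e-8, max_iter=1000):
    """Random walk with restart over a similarity graph.

    W            : (n, n) symmetric, non-negative similarity matrix.
    positive_idx : indices of the labelled positive instances.
    Returns one score per node; unlabelled nodes with high scores are
    candidates for being originally unlabelled positives.
    """
    n = W.shape[0]
    # Column-normalise so each column is a transition distribution.
    P = W / np.maximum(W.sum(axis=0, keepdims=True), 1e-12)
    # Restart distribution concentrated on the labelled positives.
    r = np.zeros(n)
    r[positive_idx] = 1.0 / len(positive_idx)
    # Iterate p <- (1 - restart) * P p + restart * r to convergence.
    p = r.copy()
    for _ in range(max_iter):
        p_next = (1.0 - restart) * P @ p + restart * r
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy example: two clusters of three nodes; nodes 0 and 1 are labelled positive.
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
scores = diffusion_scores(W, positive_idx=[0, 1])
```

Here node 2 is unlabelled but sits in the same cluster as the labelled positives, so it receives a higher diffusion score than nodes 3-5; thresholding or ranking these scores yields the pseudo-labels on which a supervised model is then trained in the second step.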
Related papers
- Contrastive Approach to Prior Free Positive Unlabeled Learning [15.269090018352875]
We propose a novel PU learning framework, that starts by learning a feature space through pretext-invariant representation learning.
Our proposed approach handily outperforms state-of-the-art PU learning methods across several standard PU benchmark datasets.
arXiv Detail & Related papers (2024-02-08T20:20:54Z)
- Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
arXiv Detail & Related papers (2023-11-27T02:59:17Z)
- Robust Positive-Unlabeled Learning via Noise Negative Sample Self-correction [48.929877651182885]
Learning from positive and unlabeled data is known as positive-unlabeled (PU) learning in the literature.
We propose a new robust PU learning method with a training strategy motivated by the nature of human learning.
arXiv Detail & Related papers (2023-08-01T04:34:52Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this, we propose to pursue the label distribution consistency between predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- Positive Unlabeled Contrastive Learning [14.975173394072053]
We extend the self-supervised pretraining paradigm to the classical positive unlabeled (PU) setting.
We develop a simple methodology to pseudo-label the unlabeled samples using a new PU-specific clustering scheme.
Our method handily outperforms state-of-the-art PU methods over several standard PU benchmark datasets.
arXiv Detail & Related papers (2022-06-01T20:16:32Z)
- Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.