Joint empirical risk minimization for instance-dependent
positive-unlabeled data
- URL: http://arxiv.org/abs/2312.16557v1
- Date: Wed, 27 Dec 2023 12:45:12 GMT
- Title: Joint empirical risk minimization for instance-dependent
positive-unlabeled data
- Authors: Wojciech Rejchel, Pawe{\l} Teisseyre, Jan Mielniczuk
- Abstract summary: Learning from positive and unlabeled data (PU learning) is an actively researched machine learning task.
The goal is to train a binary classification model from a dataset in which only some of the positive instances are labeled, alongside unlabeled instances.
The unlabeled set includes the remaining positives and all negative observations.
- Score: 4.112909937203119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning from positive and unlabeled data (PU learning) is an
actively researched machine learning task. The goal is to train a binary
classification model from a training dataset in which only a subset of the
positive instances is labeled; the unlabeled set contains the remaining
positives together with all negative observations. An important element of PU
learning is modeling the labeling mechanism, i.e., the assignment of labels to
positive observations. Unlike many prior works, we consider the realistic
setting in which the probability of label assignment, i.e., the propensity
score, is instance-dependent. In our approach, we investigate the minimizer of
an empirical counterpart of a joint risk that depends on both the posterior
probability of inclusion in the positive class and the propensity score. The
non-convex empirical risk is optimised alternately with respect to the
parameters of the two functions. In the theoretical analysis, we establish
risk consistency of the minimisers using recently derived methods from the
theory of empirical processes. An important further contribution is a novel
implementation of the optimisation algorithm, in which a sequential
approximation of the set of positive observations among the unlabeled ones is
crucial. This relies on a modified 'spies' technique as well as on a
thresholding rule based on conditional probabilities. Experiments conducted on
20 data sets under various labeling scenarios show that the proposed method
performs on par with or better than state-of-the-art methods based on
propensity function estimation.
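To illustrate the alternating-minimization idea described in the abstract, here is a minimal sketch, not the authors' implementation. It assumes logistic models for both the posterior eta(x) = P(Y=1|X=x) and the propensity e(x) = P(S=1|Y=1, X=x), and that under the selected-at-random labeling mechanism P(S=1|x) = e(x) * eta(x); the empirical risk is the negative log-likelihood of the label indicator S, minimized alternately over the two parameter vectors. All names and the toy data-generating process are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def sigmoid(z):
    # Numerically stable logistic function.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def neg_log_lik(a, b, X, s):
    """Empirical joint risk: negative log-likelihood of the label
    indicator s, where P(S=1|x) = sigmoid(x @ a) * sigmoid(x @ b)."""
    p = np.clip(sigmoid(X @ a) * sigmoid(X @ b), 1e-12, 1 - 1e-12)
    return -np.mean(s * np.log(p) + (1 - s) * np.log(1 - p))

def alternating_erm(X, s, n_rounds=20):
    d = X.shape[1]
    a = np.zeros(d)   # parameters of the posterior model eta
    b = np.zeros(d)   # parameters of the propensity model e
    for _ in range(n_rounds):
        # Alternate: optimize posterior parameters with propensity fixed,
        # then propensity parameters with the posterior fixed.
        a = minimize(lambda a_: neg_log_lik(a_, b, X, s), a,
                     method="L-BFGS-B").x
        b = minimize(lambda b_: neg_log_lik(a, b_, X, s), b,
                     method="L-BFGS-B").x
    return a, b

# Toy PU data: 2-d Gaussian classes, instance-dependent labeling of positives.
n = 2000
y = rng.integers(0, 2, n)
X = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(n, 2))
X = np.hstack([X, np.ones((n, 1))])          # intercept column
prop = sigmoid(X[:, 0] - 0.5)                # true propensity depends on x
s = ((y == 1) & (rng.random(n) < prop)).astype(float)

a_hat, b_hat = alternating_erm(X, s)
eta_hat = sigmoid(X @ a_hat)                 # estimated posterior P(Y=1|x)
```

Note that the factorization e(x) * eta(x) is not identifiable from this likelihood alone; the paper's sequential approximation of the positive set (the modified 'spies' technique and the conditional-probability thresholding rule) is precisely what the sketch omits.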
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z) - An Unbiased Risk Estimator for Partial Label Learning with Augmented Classes [46.663081214928226]
We propose an unbiased risk estimator with theoretical guarantees for PLLAC.
We provide a theoretical analysis of the estimation error bound of PLLAC.
Experiments on benchmark, UCI and real-world datasets demonstrate the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-09-29T07:36:16Z) - Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
arXiv Detail & Related papers (2023-11-27T02:59:17Z) - Mixture Proportion Estimation and PU Learning: A Modern Approach [47.34499672878859]
Given only positive examples and unlabeled examples, we might hope to estimate an accurate positive-versus-negative classifier.
Classical methods for both problems break down in high-dimensional settings.
We propose two simple techniques: Best Bin Estimation (BBE) and Conditional Value Ignoring Risk (CVIR).
arXiv Detail & Related papers (2021-11-01T14:42:23Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z) - Learning from Positive and Unlabeled Data with Arbitrary Positive Shift [11.663072799764542]
This paper shows that PU learning is possible even with arbitrarily non-representative positive data given unlabeled data.
We integrate this into two statistically consistent methods to address arbitrary positive bias.
Experimental results demonstrate our methods' effectiveness across numerous real-world datasets.
arXiv Detail & Related papers (2020-02-24T13:53:22Z) - Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel classifier-learning framework with flexibility in the choice of model and optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.