Collective Loss Function for Positive and Unlabeled Learning
- URL: http://arxiv.org/abs/2005.03228v1
- Date: Wed, 6 May 2020 03:30:22 GMT
- Title: Collective Loss Function for Positive and Unlabeled Learning
- Authors: Chenhao Xie, Qiao Cheng, Jiaqing Liang, Lihan Chen, Yanghua Xiao
- Abstract summary: We propose a collective loss function (cPU) for learning from only positive and unlabeled data.
Results show that cPU consistently outperforms current state-of-the-art PU learning methods.
- Score: 19.058269616452545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: People learn to discriminate between classes without explicit exposure to
negative examples. In contrast, traditional machine learning algorithms often rely on
negative examples; without them, a model is prone to collapse into always-true
predictions. It is therefore crucial to design a learning objective that leads the
model to converge and to make unbiased predictions without explicit negative signals.
In this paper, we propose a collective loss function to learn from only positive and
unlabeled data (cPU). We theoretically derive the loss function from the setting of
PU learning. We perform extensive experiments on benchmark and real-world datasets.
The results show that cPU consistently outperforms the current state-of-the-art PU
learning methods.
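For concreteness, the PU setting the abstract refers to can be illustrated with the classic non-negative PU risk estimator (Kiryo et al., 2017), which also guards against the always-true collapse mentioned above. The following is a minimal PyTorch sketch of that standard baseline, not the paper's cPU loss, whose exact form the abstract does not give; the function name, the choice of logistic loss, and the class prior `prior` are assumptions.

```python
import torch
import torch.nn.functional as F

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk (a standard PU baseline, not the cPU loss).

    scores_pos: logits g(x) on labeled-positive examples
    scores_unl: logits g(x) on unlabeled examples
    prior:      assumed class prior pi = P(y = +1)
    """
    # Logistic loss: softplus(-z) is the loss for label +1,
    # softplus(z) the loss for label -1.
    risk_pos = prior * F.softplus(-scores_pos).mean()
    risk_neg = F.softplus(scores_unl).mean() - prior * F.softplus(scores_pos).mean()
    # Clamping the estimated negative-class risk at zero is what prevents
    # flexible models from collapsing into always-true predictions.
    return risk_pos + torch.clamp(risk_neg, min=0.0)
```

In practice `prior` must be known or estimated from data; minimizing this risk with SGD then behaves like ordinary supervised logistic-loss training when the estimator's assumptions hold.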
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the effect of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL)
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct bias in contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
arXiv Detail & Related papers (2024-01-13T11:18:18Z) - Beyond Myopia: Learning from Positive and Unlabeled Data through
Holistic Predictive Trends [26.79150786180822]
We unveil an intriguing yet long-overlooked observation in PUL: predictive trends for the positive and negative classes display distinctly different patterns.
We propose a novel TPP-inspired measure for trend detection and prove its unbiasedness in predicting changes.
arXiv Detail & Related papers (2023-10-06T08:06:15Z) - Robust Positive-Unlabeled Learning via Noise Negative Sample
Self-correction [48.929877651182885]
Learning from positive and unlabeled data is known in the literature as positive-unlabeled (PU) learning.
We propose a new robust PU learning method with a training strategy motivated by the nature of human learning.
arXiv Detail & Related papers (2023-08-01T04:34:52Z) - Learning from Positive and Unlabeled Data with Augmented Classes [17.97372291914351]
We propose an unbiased risk estimator for PU learning with Augmented Classes (PUAC)
We derive the estimation error bound for the proposed estimator, which provides a theoretical guarantee for its convergence to the optimal solution.
arXiv Detail & Related papers (2022-07-27T03:40:50Z) - Equivariance and Invariance Inductive Bias for Learning from
Insufficient Data [65.42329520528223]
We show why insufficient data renders the model more easily biased toward the limited training environments, which usually differ from the testing environment.
We propose a class-wise invariant risk minimization (IRM) that efficiently tackles the challenge of missing environmental annotation in conventional IRM.
arXiv Detail & Related papers (2022-07-25T15:26:19Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this approach offsets the influence of user/item propensity on learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - Agree to Disagree: Diversity through Disagreement for Better
Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data while promoting disagreement on out-of-distribution data.
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
arXiv Detail & Related papers (2022-02-09T12:03:02Z) - Dense Out-of-Distribution Detection by Robust Learning on Synthetic
Negative Data [1.7474352892977458]
We show how to detect out-of-distribution anomalies in road-driving scenes and remote sensing imagery.
We leverage a jointly trained normalizing flow, owing to its coverage-oriented learning objective and its capability to generate samples at different resolutions.
The resulting models set the new state of the art on benchmarks for out-of-distribution detection in road-driving scenes and remote sensing imagery.
arXiv Detail & Related papers (2021-12-23T20:35:10Z) - MixPUL: Consistency-based Augmentation for Positive and Unlabeled
Learning [8.7382177147041]
We propose a simple yet effective data augmentation method, coined MixPUL, based on consistency regularization (a minimal mixup-style sketch appears after this list).
MixPUL incorporates supervised and unsupervised consistency training to generate augmented data.
We show that MixPUL reduces the average classification error from 16.49 to 13.09 on the CIFAR-10 dataset across different amounts of positive data.
arXiv Detail & Related papers (2020-04-20T15:43:33Z) - Learning from Positive and Unlabeled Data with Arbitrary Positive Shift [11.663072799764542]
This paper shows that PU learning is possible even with arbitrarily non-representative positive data given unlabeled data.
We integrate this into two statistically consistent methods to address arbitrary positive bias.
Experimental results demonstrate our methods' effectiveness across numerous real-world datasets.
arXiv Detail & Related papers (2020-02-24T13:53:22Z)
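The MixPUL entry above references the following sketch. It shows only the core mixup-style interpolation that consistency-based augmentation builds on; MixPUL's full procedure (supervised plus unsupervised consistency training) is more involved, and the function name, `alpha`, and the soft-label choice for unlabeled points are assumptions.

```python
import numpy as np

def mixup_pair(x1, y1, x2, y2, alpha=0.5, rng=None):
    """Convex interpolation of two examples and their (soft) targets.

    For PU data, y2 for an unlabeled example would typically be the
    model's current soft prediction rather than a ground-truth label.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient in (0, 1)
    x_mix = lam * x1 + (1.0 - lam) * x2     # interpolate inputs
    y_mix = lam * y1 + (1.0 - lam) * y2     # interpolate targets
    return x_mix, y_mix
```

Training the classifier to be consistent on such interpolated points is the regularization idea the entry describes.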