Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training
- URL: http://arxiv.org/abs/2006.11280v1
- Date: Mon, 22 Jun 2020 17:53:59 GMT
- Title: Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training
- Authors: Xuxi Chen, Wuyang Chen, Tianlong Chen, Ye Yuan, Chen Gong, Kewei Chen,
Zhangyang Wang
- Abstract summary: We propose a novel Self-PU learning framework, which seamlessly integrates PU learning and self-training.
Self-PU highlights three "self"-oriented building blocks: a self-paced training algorithm that adaptively discovers and augments confident examples as training proceeds, a self-calibrated instance-aware loss, and a self-distillation scheme for teacher-student regularization.
We study a real-world application of PU learning, i.e., classifying brain images of Alzheimer's Disease.
- Score: 118.10946662410639
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world applications have to tackle the Positive-Unlabeled (PU)
learning problem, i.e., learning binary classifiers from a large amount of
unlabeled data and a few labeled positive examples. While current
state-of-the-art methods employ importance reweighting to design various risk
estimators, they ignore the learning capability of the model itself, which
could provide reliable supervision. This motivates us to propose a novel
Self-PU learning framework, which seamlessly integrates PU learning and
self-training. Self-PU highlights three "self"-oriented building blocks: a
self-paced training algorithm that adaptively discovers and augments confident
positive/negative examples as the training proceeds; a self-calibrated
instance-aware loss; and a self-distillation scheme that introduces
teacher-student learning as an effective regularization for PU learning. We
demonstrate the state-of-the-art performance of Self-PU on common PU learning
benchmarks (MNIST and CIFAR-10), where it compares favorably against the latest
competitors. Moreover, we study a real-world application of PU learning, i.e.,
classifying brain images of Alzheimer's Disease. Self-PU obtains significantly
improved results on the renowned Alzheimer's Disease Neuroimaging Initiative
(ADNI) database over existing methods. The code is publicly available at:
https://github.com/TAMU-VITA/Self-PU.
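
For context, the "importance reweighting" risk estimators mentioned above are typified by the non-negative PU (nnPU) estimator of Kiryo et al. (2017), the usual baseline that frameworks like Self-PU build on. The sketch below is a minimal PyTorch illustration of that estimator together with caricatures of the self-paced mining and self-distillation ideas; it is not the authors' implementation (see the linked repository), and the top-k mining rule, the squared-error consistency term, and all function names are illustrative assumptions.

```python
import torch

def nnpu_risk(logits_p: torch.Tensor, logits_u: torch.Tensor,
              prior: float) -> torch.Tensor:
    """Non-negative PU risk (Kiryo et al., 2017) with the sigmoid loss
    l(z, y) = sigmoid(-y * z); positive examples are reweighted by the
    class prior so that unlabeled data can stand in for negatives."""
    risk_p_pos = torch.sigmoid(-logits_p).mean()  # positives scored as +1
    risk_p_neg = torch.sigmoid(logits_p).mean()   # positives scored as -1
    risk_u_neg = torch.sigmoid(logits_u).mean()   # unlabeled scored as -1
    # Clamp the estimated negative-class risk at zero; the original
    # algorithm uses a slightly more involved gradient-flip correction.
    return prior * risk_p_pos + torch.clamp(risk_u_neg - prior * risk_p_neg, min=0.0)

def mine_confident(logits_u: torch.Tensor, k: int):
    """Self-paced flavor: promote the k most confidently positive and
    k most confidently negative unlabeled examples to pseudo-labels
    (a simplification of the paper's adaptive selection schedule)."""
    probs = torch.sigmoid(logits_u)
    pos_idx = probs.topk(k).indices      # most positive-looking
    neg_idx = (-probs).topk(k).indices   # most negative-looking
    return pos_idx, neg_idx

def distill_consistency(student_logits: torch.Tensor,
                        teacher_logits: torch.Tensor) -> torch.Tensor:
    """Teacher-student regularizer in the spirit of self-distillation:
    penalize disagreement with a teacher whose outputs are held fixed."""
    teacher_probs = torch.sigmoid(teacher_logits).detach()
    return torch.mean((torch.sigmoid(student_logits) - teacher_probs) ** 2)
```

Per the abstract, the full method goes further: the mined examples feed a self-calibrated instance-aware loss, and the teacher signal comes from a dedicated self-distillation scheme, so these pieces interact rather than simply summing.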
Related papers
- Training Language Models to Self-Correct via Reinforcement Learning [98.35197671595343]
Self-correction has been found to be largely ineffective in modern large language models (LLMs).
We develop a multi-turn online reinforcement learning approach, SCoRe, that significantly improves an LLM's self-correction ability using entirely self-generated data.
We find that SCoRe achieves state-of-the-art self-correction performance, improving the base models' self-correction by 15.6% and 9.1% respectively on MATH and HumanEval.
arXiv Detail & Related papers (2024-09-19T17:16:21Z)
- Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach unlearning in large language models (LLMs) via gradient ascent (GA).
Despite their simplicity and efficiency, we suggest that GA-based methods are prone to excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
- LLMs Could Autonomously Learn Without External Supervision [36.36147944680502]
Large Language Models (LLMs) have traditionally been tethered to human-annotated datasets and predefined training objectives.
This paper presents a transformative approach: Autonomous Learning for LLMs.
This method endows LLMs with the ability to self-educate through direct interaction with text, akin to a human reading and comprehending literature.
arXiv Detail & Related papers (2024-06-02T03:36:37Z)
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
Incremental self-training (IST) is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbones, effectively improving recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- SELFI: Autonomous Self-Improvement with Reinforcement Learning for Social Navigation [54.97931304488993]
Self-improving robots that interact and improve with experience are key to the real-world deployment of robotic systems.
We propose an online learning method, SELFI, that leverages online robot experience to rapidly fine-tune pre-trained control policies.
We report improvements in terms of collision avoidance, as well as more socially compliant behavior, measured by a human user study.
arXiv Detail & Related papers (2024-03-01T21:27:03Z)
- Contrastive Approach to Prior Free Positive Unlabeled Learning [15.269090018352875]
We propose a novel PU learning framework that starts by learning a feature space through pretext-invariant representation learning.
Our proposed approach handily outperforms state-of-the-art PU learning methods across several standard PU benchmark datasets.
arXiv Detail & Related papers (2024-02-08T20:20:54Z)
- Automated Machine Learning for Positive-Unlabelled Learning [1.450405446885067]
Positive-Unlabelled (PU) learning is a growing field of machine learning.
We propose two new Auto-ML systems for PU learning: BO-Auto-PU, based on a Bayesian optimisation approach, and EBO-Auto-PU, based on a novel evolutionary/Bayesian optimisation approach.
We also present an extensive evaluation of the three Auto-ML systems, comparing them to each other and to well-established PU learning methods across 60 datasets.
arXiv Detail & Related papers (2024-01-12T08:54:34Z)
- Self-Supervised Learning for Audio-Based Emotion Recognition [1.7598252755538808]
Self-supervised learning is a family of methods which can learn despite a scarcity of supervised labels.
We have applied self-supervised pre-training to the classification of emotions from the acoustic modality of CMU-MOSEI.
We find that self-supervised learning consistently improves the performance of the model across all metrics.
arXiv Detail & Related papers (2023-07-23T14:40:50Z)
- Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration [83.29856873525674]
We introduce a lightweight latent variable model UOTA, targeting the view sampling issue for self-supervised learning.
Our method directly generalizes to many mainstream self-supervised learning approaches.
arXiv Detail & Related papers (2021-12-15T14:05:23Z)
- Episodic Self-Imitation Learning with Hindsight [7.743320290728377]
Episodic self-imitation learning is a novel self-imitation algorithm with a trajectory selection module and an adaptive loss function.
A selection module is introduced to filter uninformative samples from each episode during the update.
Episodic self-imitation learning has the potential to be applied to real-world problems that have continuous action spaces.
arXiv Detail & Related papers (2020-11-26T20:36:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.