Meta-learning for Positive-unlabeled Classification
- URL: http://arxiv.org/abs/2406.03680v1
- Date: Thu, 6 Jun 2024 01:50:01 GMT
- Title: Meta-learning for Positive-unlabeled Classification
- Authors: Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
- Abstract summary: The proposed method minimizes the test classification risk after the model is adapted to PU data.
The method embeds each instance into a task-specific space using neural networks.
We empirically show that the proposed method outperforms existing methods with one synthetic and three real-world datasets.
- Score: 40.11462237689747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a meta-learning method for positive and unlabeled (PU) classification, which improves the performance of binary classifiers obtained from only PU data in unseen target tasks. PU learning is an important problem since PU data naturally arise in real-world applications such as outlier detection and information retrieval. Existing PU learning methods require many PU data, but sufficient data are often unavailable in practice. The proposed method minimizes the test classification risk after the model is adapted to PU data by using related tasks that consist of positive, negative, and unlabeled data. We formulate the adaptation as an estimation problem of the Bayes optimal classifier, which is an optimal classifier to minimize the classification risk. The proposed method embeds each instance into a task-specific space using neural networks. With the embedded PU data, the Bayes optimal classifier is estimated through density-ratio estimation of PU densities, whose solution is obtained as a closed-form solution. The closed-form solution enables us to efficiently and effectively minimize the test classification risk. We empirically show that the proposed method outperforms existing methods with one synthetic and three real-world datasets.
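The Bayes-optimal decision rule described in the abstract reduces to estimating the ratio between the positive and unlabeled (marginal) densities in the embedded space. The sketch below is a generic, uLSIF-style least-squares density-ratio estimator with a closed-form solution, not the authors' exact formulation; the Gaussian-kernel basis, the synthetic embeddings, and the class prior `pi` are illustrative assumptions.
```python
# Generic sketch (not the paper's exact method) of closed-form least-squares
# density-ratio estimation r(x) = p_positive(x) / p_unlabeled(x), in the spirit
# of uLSIF. In the paper this would operate on meta-learned task-specific
# embeddings; here the "embeddings" are synthetic and purely illustrative.
import numpy as np

def gaussian_kernel(X, C, sigma):
    """Pairwise Gaussian kernel between rows of X and centers C."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_density_ratio(embed_p, embed_u, sigma=1.0, lam=1e-3):
    """Closed-form estimate of r(x) = p_p(x) / p_u(x) with a kernel basis.

    The ridge-like system (H + lam * I) alpha = h is solved analytically."""
    C = embed_p                                  # kernel centers
    Phi_u = gaussian_kernel(embed_u, C, sigma)   # basis on unlabeled data
    Phi_p = gaussian_kernel(embed_p, C, sigma)   # basis on positive data
    H = Phi_u.T @ Phi_u / len(embed_u)           # second moment under p_u
    h = Phi_p.mean(axis=0)                       # first moment under p_p
    alpha = np.linalg.solve(H + lam * np.eye(len(C)), h)
    return lambda X: gaussian_kernel(X, C, sigma) @ alpha

# Usage: score unlabeled instances; with a class-prior estimate pi, a common
# plug-in rule thresholds pi * r(x) at 1/2 to recover a binary decision.
rng = np.random.default_rng(0)
embed_p = rng.normal(1.0, 1.0, size=(50, 2))    # "positive" embeddings (toy)
embed_u = rng.normal(0.0, 1.5, size=(200, 2))   # "unlabeled" embeddings (toy)
ratio = fit_density_ratio(embed_p, embed_u)
pi = 0.3                                         # assumed class prior (illustrative)
pred_positive = (pi * ratio(embed_u)) > 0.5
```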
Related papers
- PUAL: A Classifier on Trifurcate Positive-Unlabeled Data [29.617810881312867]
We propose a PU classifier with asymmetric loss (PUAL)
We develop a kernel-based algorithm that enables PUAL to obtain a non-linear decision boundary.
Through experiments on both simulated and real-world datasets, we show that PUAL achieves satisfactory classification on trifurcate data.
arXiv Detail & Related papers (2024-05-31T16:18:06Z) - Informed Decision-Making through Advancements in Open Set Recognition and Unknown Sample Detection [0.0]
Open set recognition (OSR) aims to bring classification tasks closer to real-world conditions, where samples from unknown classes may appear at test time.
This study provides an algorithm exploring a new representation of feature space to improve classification in OSR tasks.
arXiv Detail & Related papers (2024-05-09T15:15:34Z) - Anomaly Detection Under Uncertainty Using Distributionally Robust Optimization Approach [0.9217021281095907]
Anomaly detection is defined as the problem of finding data points that do not follow the patterns of the majority.
The one-class Support Vector Machines (SVM) method aims to find a decision boundary to distinguish between normal data points and anomalies.
A distributionally robust chance-constrained model is proposed in which the probability of misclassification is low.
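For reference, the one-class SVM baseline mentioned above can be run directly with scikit-learn; the sketch below illustrates only that baseline, not the proposed distributionally robust chance-constrained model, and the data are synthetic.
```python
# Sketch of the standard one-class SVM baseline for anomaly detection.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 2))             # mostly normal data
X_test = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
                    rng.normal(5.0, 1.0, size=(5, 2))])   # a few anomalies

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(X_train)
labels = ocsvm.predict(X_test)   # +1 = normal, -1 = anomaly
print("flagged anomalies:", int((labels == -1).sum()))
```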
arXiv Detail & Related papers (2023-12-03T06:13:22Z) - Open World Classification with Adaptive Negative Samples [89.2422451410507]
Open world classification is a task in natural language processing with key practical relevance and impact.
We propose an approach based on adaptive negative samples (ANS), designed to generate effective synthetic open category samples in the training stage.
ANS achieves significant improvements over state-of-the-art methods.
arXiv Detail & Related papers (2023-03-09T21:12:46Z) - Prompt-driven efficient Open-set Semi-supervised Learning [52.30303262499391]
Open-set semi-supervised learning (OSSL) has attracted growing interest, which investigates a more practical scenario where out-of-distribution (OOD) samples are only contained in unlabeled data.
We propose a prompt-driven efficient OSSL framework, called OpenPrompt, which can propagate class information from labeled to unlabeled data with only a small number of trainable parameters.
arXiv Detail & Related papers (2022-09-28T16:25:08Z) - Learning from Positive and Unlabeled Data with Augmented Classes [17.97372291914351]
We propose an unbiased risk estimator for PU learning with Augmented Classes (PUAC)
We derive the estimation error bound for the proposed estimator, which provides a theoretical guarantee for its convergence to the optimal solution.
arXiv Detail & Related papers (2022-07-27T03:40:50Z) - Continual Learning For On-Device Environmental Sound Classification [63.81276321857279]
We propose a simple and efficient continual learning method for on-device environmental sound classification.
Our method selects the historical data for the training by measuring the per-sample classification uncertainty.
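A generic way to realize "select historical data by per-sample classification uncertainty" is to rank candidates by predictive entropy; the sketch below is only an assumption-level illustration, since the paper's exact uncertainty measure and selection rule are not specified in this summary.
```python
# Generic sketch: pick replay samples by predictive entropy (illustrative only).
import numpy as np

def select_by_uncertainty(probs, budget):
    """probs: (N, C) predicted class probabilities for historical samples.
    Returns indices of the `budget` samples with the highest predictive entropy."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[::-1][:budget]

# Usage: keep the 100 most uncertain past clips for rehearsal.
# replay_idx = select_by_uncertainty(model_probs_on_history, budget=100)
```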
arXiv Detail & Related papers (2022-07-15T12:13:04Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
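The ATC recipe summarized above fits in a few lines; the sketch below uses maximum softmax probability as the confidence score (one of several possible scores), and all variable names are illustrative.
```python
# Minimal sketch of the ATC idea: choose a threshold t so that the fraction of
# source confidences above t matches source accuracy, then predict target
# accuracy as the fraction of target confidences above t.
import numpy as np

def atc_predict_accuracy(src_conf, src_correct, tgt_conf):
    src_acc = src_correct.mean()
    t = np.quantile(src_conf, 1.0 - src_acc)   # threshold calibrated on source data
    return (tgt_conf > t).mean()               # predicted target accuracy

# Usage with softmax outputs from any probabilistic classifier:
# src_conf = probs_src.max(axis=1); src_correct = (preds_src == y_src)
# tgt_conf = probs_tgt.max(axis=1)
```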
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Active Weighted Aging Ensemble for Drifted Data Stream Classification [2.277447144331876]
Concept drift destabilizes the performance of the classification model and seriously degrades its quality.
The proposed method has been evaluated through computer experiments using both real and generated data streams.
The results confirm the high quality of the proposed algorithm over state-of-the-art methods.
arXiv Detail & Related papers (2021-12-19T13:52:53Z) - Positive-Unlabeled Classification under Class-Prior Shift: A Prior-invariant Approach Based on Density Ratio Estimation [85.75352990739154]
We propose a novel PU classification method based on density ratio estimation.
A notable advantage of our proposed method is that it does not require the class priors in the training phase.
arXiv Detail & Related papers (2021-07-11T13:36:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.