Positive unlabeled learning with tensor networks
- URL: http://arxiv.org/abs/2211.14085v3
- Date: Thu, 20 Jul 2023 06:42:56 GMT
- Title: Positive unlabeled learning with tensor networks
- Authors: Bojan Žunkovič
- Abstract summary: Positive unlabeled learning is a binary classification problem with positive and unlabeled data.
This work introduces a feature-space distance-based tensor network approach to the positive unlabeled learning problem.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Positive unlabeled learning is a binary classification problem with positive
and unlabeled data. It is common in domains where negative labels are costly or
impossible to obtain, e.g., medicine and personalized advertising. Most
approaches to positive unlabeled learning apply to specific data types (e.g.,
images, categorical data) and cannot generate new positive and negative
samples. This work introduces a feature-space distance-based tensor network
approach to the positive unlabeled learning problem. The presented method is
not domain specific and significantly improves the state-of-the-art results on
the MNIST image and 15 categorical/mixed datasets. The trained tensor network
model is also a generative model and enables the generation of new positive and
negative instances.
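Since the abstract only outlines the approach, the following is a minimal, illustrative sketch of how a tensor-network classifier in the matrix-product-state (MPS) style of Stoudenmire and Schwab can score a feature vector; it is an assumption-laden toy, not the paper's feature-space distance-based method. The feature map, bond dimension, placement of the label leg, and the usage note below are all illustrative choices.

```python
# Illustrative sketch only (assumptions throughout): a toy MPS classifier in the
# Stoudenmire-Schwab style, not the paper's feature-space distance-based method.
import numpy as np

def feature_map(x):
    """Embed each scalar feature x_i in [0, 1] into a 2-dimensional local vector."""
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)  # (n_sites, 2)

def init_mps(n_sites, phys_dim=2, bond_dim=8, n_classes=2, seed=0):
    """Random MPS; the central tensor carries an extra class ('label') leg."""
    rng = np.random.default_rng(seed)
    cores = []
    for i in range(n_sites):
        left = 1 if i == 0 else bond_dim
        right = 1 if i == n_sites - 1 else bond_dim
        if i == n_sites // 2:
            cores.append(rng.normal(scale=0.1, size=(left, phys_dim, n_classes, right)))
        else:
            cores.append(rng.normal(scale=0.1, size=(left, phys_dim, right)))
    return cores

def mps_scores(cores, x):
    """Contract the MPS with the embedded sample x; returns one score per class."""
    phi = feature_map(x)                 # (n_sites, 2)
    env = np.ones(1)                     # left boundary vector, shape (l,)
    for i, core in enumerate(cores):
        if core.ndim == 4:               # central tensor (l, p, c, r): class leg appears
            env = np.einsum('l,lpcr,p->cr', env, core, phi[i])
        elif env.ndim == 2:              # class leg already attached: env is (c, l)
            env = np.einsum('cl,lpr,p->cr', env, core, phi[i])
        else:                            # before the class leg: env is (l,)
            env = np.einsum('l,lpr,p->r', env, core, phi[i])
    return env[:, 0]                     # right boundary dim is 1 -> (n_classes,)

# Hypothetical usage: score a 16-feature sample with an (untrained) random MPS.
# Training, e.g. by gradient descent on a cross-entropy loss, is omitted here.
x_new = np.random.rand(16)
scores = mps_scores(init_mps(n_sites=16), x_new)
p_positive = np.exp(scores)[1] / np.exp(scores).sum()   # softmax over the two classes
```

In a PU setting, one naive baseline (again an assumption, not the paper's recipe) is to train such a model on the labeled positives against the unlabeled pool treated as provisional negatives, and then recalibrate the decision threshold with an estimate of the class prior.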
Related papers
- Robust Positive-Unlabeled Learning via Noise Negative Sample Self-correction [48.929877651182885]
Learning from positive and unlabeled data is known as positive-unlabeled (PU) learning in the literature.
We propose a new robust PU learning method with a training strategy motivated by the nature of human learning.
arXiv Detail & Related papers (2023-08-01T04:34:52Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this, we propose to pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- Learning From Positive and Unlabeled Data Using Observer-GAN [0.0]
The problem of learning from positive and unlabeled data (a.k.a. PU learning) has been studied in a binary (i.e., positive versus negative) classification setting.
Generative adversarial networks (GANs) have been used to reduce the problem to the supervised setting, with the advantage that supervised learning achieves state-of-the-art accuracy in classification tasks.
arXiv Detail & Related papers (2022-08-26T07:35:28Z)
- Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it attractive to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we propose incorporating semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z)
- Adaptive Positive-Unlabelled Learning via Markov Diffusion [0.0]
Positive-unlabelled (PU) learning is the machine learning setting in which only a set of positive instances is labelled.
The principal aim of the algorithm is to identify a set of instances that are likely to contain positive instances that were originally unlabelled.
arXiv Detail & Related papers (2021-08-13T10:25:47Z)
- A Novel Perspective for Positive-Unlabeled Learning via Noisy Labels [49.990938653249415]
This research presents a methodology that assigns initial pseudo-labels to unlabeled data, treats them as noisy labels, and trains a deep neural network on the resulting noisy-labeled data.
Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-03-08T11:46:02Z)
- Learning from Positive and Unlabeled Data with Arbitrary Positive Shift [11.663072799764542]
This paper shows that PU learning is possible even with arbitrarily non-representative positive data, given unlabeled data.
We integrate this into two statistically consistent methods to address arbitrary positive bias (a background sketch of consistent PU risk estimation follows this list).
Experimental results demonstrate our methods' effectiveness across numerous real-world datasets.
arXiv Detail & Related papers (2020-02-24T13:53:22Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain out-of-distribution (OOD) instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)
- On Positive-Unlabeled Classification in GAN [130.43248168149432]
This paper defines a positive and unlabeled classification problem for standard GANs, which leads to a novel technique to stabilize the training of the discriminator.
arXiv Detail & Related papers (2020-02-04T05:59:37Z)
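As background for the "statistically consistent" PU methods mentioned in the list above, here is a hedged sketch of the non-negative PU risk estimator of Kiryo et al. (2017); the logistic loss and the assumed class prior are illustrative choices, and none of the listed papers is claimed to use exactly this form.

```python
# Background sketch (an assumption, not any listed paper's exact estimator):
# the non-negative PU risk of Kiryo et al. (2017) with a logistic loss.
import numpy as np

def nn_pu_risk(scores_pos, scores_unl, prior):
    """scores_*: real-valued classifier outputs g(x) on positive / unlabeled samples.
    prior: assumed class prior pi_p = P(y = +1), treated as known or pre-estimated."""
    loss = lambda z: np.log1p(np.exp(-z))                   # logistic surrogate loss
    risk_pos = loss(scores_pos).mean()                      # positives classified as positive
    risk_pos_as_neg = loss(-scores_pos).mean()              # positives classified as negative
    risk_unl_as_neg = loss(-scores_unl).mean()              # unlabeled classified as negative
    neg_risk = risk_unl_as_neg - prior * risk_pos_as_neg    # unbiased negative-class risk
    return prior * risk_pos + max(neg_risk, 0.0)            # clamp keeps the estimate non-negative
```

Removing the clamp recovers the unbiased PU estimator; the clamp is what prevents flexible models from overfitting by driving the estimated negative-class risk below zero.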
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.