Pseudo-Representation Labeling Semi-Supervised Learning
- URL: http://arxiv.org/abs/2006.00429v1
- Date: Sun, 31 May 2020 03:55:41 GMT
- Title: Pseudo-Representation Labeling Semi-Supervised Learning
- Authors: Song-Bo Yang, Tian-li Yu
- Abstract summary: In recent years, semi-supervised learning has shown tremendous success in leveraging unlabeled data to improve the performance of deep learning models.
This work proposes pseudo-representation labeling, a simple and flexible framework that uses pseudo-labeling techniques to iteratively label small batches of unlabeled data and add them to the training set.
Compared with existing approaches, pseudo-representation labeling is more intuitive and better suited to practical, real-world problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, semi-supervised learning (SSL) has shown tremendous success
in leveraging unlabeled data to improve the performance of deep learning
models, which significantly reduces the demand for large amounts of labeled
data. Many SSL techniques have been proposed and have shown promising
performance on well-known datasets such as ImageNet and CIFAR-10. However, some
existing techniques (especially those based on data augmentation) have empirically
proven unsuitable for industrial applications. Therefore, this work proposes
pseudo-representation labeling, a simple and flexible framework that utilizes
pseudo-labeling techniques to iteratively label a small amount of unlabeled
data and use them as training data. In addition, our framework is integrated
with self-supervised representation learning so that the classifier benefits
from representations learned on both labeled and unlabeled data. The
framework is not tied to any specific model structure; rather, it is a general
technique for improving existing models. Compared with
existing approaches, pseudo-representation labeling is more intuitive and better
suited to practical, real-world problems. Empirically, it outperforms current
state-of-the-art semi-supervised learning methods on industrial classification
problems such as the WM-811K wafer map dataset and the MIT-BIH Arrhythmia
dataset.
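To make the core loop concrete, here is a minimal Python sketch of iterative pseudo-labeling as the abstract describes it. The classifier, confidence threshold, per-round budget, and number of rounds are illustrative assumptions, and the self-supervised representation learning step is abstracted away as precomputed feature matrices.

```python
# Minimal sketch of iterative pseudo-labeling, NOT the authors' exact
# implementation: the classifier, threshold, and budget are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_loop(X_lab, y_lab, X_unlab,
                      rounds=5, per_round=100, threshold=0.95):
    """X_* hold feature vectors; in the paper's framework these would be
    representations learned with self-supervision on all available data."""
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)       # soft predictions
        conf = proba.max(axis=1)                 # per-sample confidence
        ranked = np.argsort(-conf)[:per_round]   # small, most-confident batch
        chosen = ranked[conf[ranked] >= threshold]
        if len(chosen) == 0:
            break                                # nothing confident enough left
        pseudo = clf.classes_[proba[chosen].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[chosen]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = np.delete(X_unlab, chosen, axis=0)
    return clf
```

Labeling only a small, high-confidence batch per round keeps early pseudo-label noise from compounding, which is the intuition behind iterating rather than labeling all unlabeled data at once.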
Related papers
- Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data [54.934578742209716]
In real-world NLP applications, Large Language Models (LLMs) offer promising solutions due to their extensive training on vast datasets.
LLKD is an adaptive sample selection method that incorporates signals from both the teacher and student.
Our comprehensive experiments show that LLKD achieves superior performance across various datasets with higher data efficiency.
arXiv Detail & Related papers (2024-11-12T18:57:59Z)
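The LLKD summary above mentions selection signals from both teacher and student; the sketch below shows one plausible scoring rule (confident teacher, uncertain student), offered as an assumption rather than LLKD's actual criterion.

```python
# Hypothetical teacher/student selection score; the product rule and
# keep fraction are illustrative assumptions, not LLKD's method.
import numpy as np

def select_for_distillation(teacher_probs, student_probs, keep_frac=0.5):
    """Prefer samples the teacher labels confidently (likely correct)
    and the student finds uncertain (most left to learn)."""
    teacher_conf = teacher_probs.max(axis=1)
    student_entropy = -(student_probs * np.log(student_probs + 1e-12)).sum(axis=1)
    score = teacher_conf * student_entropy
    k = max(1, int(keep_frac * len(score)))
    return np.argsort(-score)[:k]   # indices of the k highest-scoring samples
```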
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition [50.61991746981703]
Current state-of-the-art LTSSL approaches rely on high-quality pseudo-labels for large-scale unlabeled data.
This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels.
arXiv Detail & Related papers (2024-10-08T15:06:10Z)
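As a rough illustration of the "smoothed pseudo-labels" mentioned in the CCL entry above, the sketch below blends model predictions with an estimated class prior; the blending rule and its weight are assumptions for exposition, not CCL's formulation.

```python
# Illustrative pseudo-label smoothing; alpha and the prior blend are
# assumptions, not the CCL paper's exact scheme.
import numpy as np

def smoothed_pseudo_labels(probs, class_prior, alpha=0.7):
    """Blend predictions with a class prior to damp over-confident
    head-class predictions under long-tailed data."""
    blended = alpha * probs + (1.0 - alpha) * class_prior
    return blended / blended.sum(axis=1, keepdims=True)
```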
- Persistent Laplacian-enhanced Algorithm for Scarcely Labeled Data Classification [2.8360662552057323]
We propose a semi-supervised method called persistent Laplacian-enhanced graph MBO (PL-MBO).
PL-MBO integrates persistent spectral graph theory with the classical Merriman-Bence-Osher scheme.
We evaluate the performance of the proposed method on data classification.
arXiv Detail & Related papers (2023-05-25T16:49:40Z)
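For context on the classical scheme named in the PL-MBO entry above, here is a minimal sketch of graph MBO (diffuse along the graph, then threshold); the persistent-spectral enhancement that defines PL-MBO is omitted, and the step size and iteration count are illustrative.

```python
# Bare-bones graph MBO: alternate graph-Laplacian diffusion with
# thresholding. The persistent-spectral parts of PL-MBO are omitted.
import numpy as np

def graph_mbo(L, U0, labeled_idx, Y_lab, dt=0.1, iters=20):
    """L: (n, n) graph Laplacian; U0: (n, k) initial one-hot labels;
    labeled_idx / Y_lab: known nodes and their one-hot labels."""
    U = U0.astype(float)
    for _ in range(iters):
        U = U - dt * (L @ U)        # one explicit diffusion step
        U[labeled_idx] = Y_lab      # re-impose the known labels
        hard = np.zeros_like(U)     # threshold each row to its argmax class
        hard[np.arange(len(U)), U.argmax(axis=1)] = 1.0
        U = hard
    return U.argmax(axis=1)         # predicted class per node
```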
- A Benchmark Generative Probabilistic Model for Weak Supervised Learning [2.0257616108612373]
Weak Supervised Learning approaches have been developed to alleviate the annotation burden.
We show that probabilistic latent variable models (PLVMs) achieve state-of-the-art performance across four datasets.
arXiv Detail & Related papers (2023-03-31T07:06:24Z)
- Representation Learning for the Automatic Indexing of Sound Effects Libraries [79.68916470119743]
We show that a task-specific but dataset-independent representation can successfully address data issues such as class imbalance, inconsistent class labels, and insufficient dataset size.
Detailed experimental results show the impact of metric learning approaches and different cross-dataset training methods on representational effectiveness.
arXiv Detail & Related papers (2022-08-18T23:46:13Z)
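Since the sound-effects entry above credits metric learning for the representation's effectiveness, here is a generic triplet-loss sketch of the kind such approaches use; the margin and squared-distance choice are assumptions, not that paper's specific objective.

```python
# Generic triplet loss as an example of a metric-learning objective;
# margin and distance are illustrative, not the paper's exact setup.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-class embeddings together and push different-class
    embeddings at least `margin` farther apart."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```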
- L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method named Learning to Bootstrap (L2B).
It enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples through meta-learning.
arXiv Detail & Related papers (2022-02-09T05:57:08Z)
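The weighting the L2B entry above describes can be made concrete with a small sketch: a per-sample blend of the loss against observed labels and the loss against the model's own predictions. In L2B those weights are meta-learned; here they are plain inputs, an assumption made for illustration.

```python
# Sketch of a bootstrapping loss with per-sample weights; in L2B the
# weights alpha/beta are meta-learned, here they are given as inputs.
import numpy as np

def bootstrap_loss(log_probs, y_obs, alpha, beta):
    """log_probs: (n, k) log-predictions; y_obs: (n,) observed integer labels;
    alpha, beta: (n,) weights for observed vs. self-generated labels."""
    n = len(y_obs)
    y_self = log_probs.argmax(axis=1)             # model's own prediction
    ce_obs = -log_probs[np.arange(n), y_obs]      # loss vs. observed label
    ce_self = -log_probs[np.arange(n), y_self]    # loss vs. pseudo-label
    return float(np.mean(alpha * ce_obs + beta * ce_self))
```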
- Universalizing Weak Supervision [18.832796698152492]
We propose a universal technique that enables weak supervision over any label type.
We apply this technique to important problems previously not tackled by WS frameworks including learning to rank, regression, and learning in hyperbolic space.
arXiv Detail & Related papers (2021-12-07T17:59:10Z)
- Dash: Semi-Supervised Learning with Dynamic Thresholding [72.74339790209531]
We propose a semi-supervised learning (SSL) approach that uses unlabeled examples to train models.
Our proposed approach, Dash, is adaptive in how it selects unlabeled data for training.
arXiv Detail & Related papers (2021-09-01T23:52:29Z)
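One way to picture the dynamic thresholding named in the Dash entry above: keep only unlabeled examples whose pseudo-label loss falls under a threshold that shrinks as training proceeds. The decay schedule below is an assumed stand-in, not Dash's published rule.

```python
# Assumed decaying-threshold selection in the spirit of Dash; the
# schedule rho0 * gamma**(-step) is illustrative only.
import numpy as np

def dash_select(losses, step, rho0=1.0, gamma=1.1):
    """losses: per-example pseudo-label losses at the current step."""
    threshold = rho0 * gamma ** (-step)   # stricter as training proceeds
    return np.where(losses < threshold)[0]
```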
- Adversarial Knowledge Transfer from Unlabeled Data [62.97253639100014]
We present a novel Adversarial Knowledge Transfer framework for transferring knowledge from internet-scale unlabeled data to improve the performance of a classifier.
An important novel aspect of our method is that the unlabeled source data can be of different classes from those of the labeled target data, and there is no need to define a separate pretext task.
arXiv Detail & Related papers (2020-08-13T08:04:27Z)
- DEAL: Deep Evidential Active Learning for Image Classification [0.0]
Active Learning (AL) is one approach to mitigate the problem of limited labeled data.
Recent AL methods for CNNs propose different solutions for the selection of instances to be labeled.
We propose a novel AL algorithm that efficiently learns from unlabeled data by capturing high prediction uncertainty.
arXiv Detail & Related papers (2020-07-22T11:14:23Z)
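As an illustration of the "high prediction uncertainty" signal in the DEAL entry above: evidential classifiers treat non-negative per-class evidence as Dirichlet parameters, so total uncertainty has a closed form. The sketch below shows that quantity and a simple top-k acquisition step; both are generic evidential-learning constructions, not necessarily DEAL's exact algorithm.

```python
# Generic Dirichlet-evidence uncertainty plus top-k acquisition; a
# sketch of the evidential idea, not necessarily DEAL's exact rule.
import numpy as np

def dirichlet_uncertainty(evidence):
    """evidence: (n, k) non-negative per-class evidence from the model."""
    alpha = evidence + 1.0                 # Dirichlet parameters
    k = alpha.shape[1]
    return k / alpha.sum(axis=1)           # in (0, 1]; 1 = total uncertainty

def select_most_uncertain(evidence, budget):
    """Pick the `budget` samples to label next by highest uncertainty."""
    return np.argsort(-dirichlet_uncertainty(evidence))[:budget]
```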
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.