Moderately Supervised Learning: Definition, Framework and Generality
- URL: http://arxiv.org/abs/2008.11945v6
- Date: Wed, 31 Jan 2024 04:45:32 GMT
- Authors: Yongquan Yang
- Abstract summary: This article expands the categorization of supervised learning (SL) and investigates the sub-type moderately supervised learning (MSL).
MSL concerns the situation where the given labels are ideal but, owing to the simplicity of their annotation, careful designs are required to transform them into easy-to-learn targets.
This paper also serves as a tutorial for AI application engineers on viewing a problem to be solved from the mathematicians' perspective.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning with supervision has achieved remarkable success in numerous
artificial intelligence (AI) applications. In the current literature, by
referring to the properties of the labels prepared for the training dataset,
learning with supervision is categorized into supervised learning (SL) and
weakly supervised learning (WSL). SL concerns the situation where the training
dataset is assigned ideal (complete, exact and accurate) labels, while WSL
concerns the situation where the training dataset is assigned non-ideal
(incomplete, inexact or inaccurate) labels. However, various solutions for SL
tasks have shown that the given labels are not always easy to learn, and the
transformation from the given labels to easy-to-learn targets can significantly
affect the performance of the final SL solutions. By ignoring the properties of
this transformation, the definition of SL conceals details that can be critical
to building appropriate solutions for specific SL tasks. Thus, for engineers in
the AI application field, it is desirable to reveal these details
systematically. This article attempts to achieve this goal by expanding the
categorization of SL and investigating the sub-type moderately supervised
learning (MSL), which concerns the situation where the given labels are ideal
but, owing to the simplicity of their annotation, careful designs are required
to transform them into easy-to-learn targets. From the perspectives of
definition, framework and generality, we conceptualize MSL to provide a
complete fundamental basis for systematically analysing MSL tasks. Meanwhile,
by revealing the relation between the conceptualization of MSL and the
mathematicians' vision, this paper also serves as a tutorial for AI application
engineers on viewing a problem to be solved from the mathematicians'
perspective.
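The abstract's categorization by label properties can be summarized as a small decision rule. The following is an illustrative sketch only, not code from the paper; the class fields and function name are hypothetical labels for the properties the abstract describes.

```python
from dataclasses import dataclass


@dataclass
class LabelProperties:
    """Properties of the labels given for a training dataset (hypothetical names)."""
    complete: bool       # every training instance is labeled
    exact: bool          # labels are at the desired granularity
    accurate: bool       # labels are free of noise
    easy_to_learn: bool  # labels can serve directly as learning targets


def categorize(props: LabelProperties) -> str:
    """Categorize learning with supervision by the given labels' properties."""
    if not (props.complete and props.exact and props.accurate):
        return "WSL"  # non-ideal labels: weakly supervised learning
    if props.easy_to_learn:
        return "SL"   # ideal, directly learnable labels: plain supervised learning
    # Ideal labels that still require careful transformation into
    # easy-to-learn targets: moderately supervised learning (a sub-type of SL).
    return "MSL"


print(categorize(LabelProperties(True, True, True, False)))  # MSL
```

The sketch makes the paper's point concrete: MSL sits inside SL (labels are ideal), and what distinguishes it is the extra transformation step hidden by the usual SL definition.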
Related papers
- Memorization in Self-Supervised Learning Improves Downstream Generalization [49.42010047574022]
Self-supervised learning (SSL) has recently received significant attention due to its ability to train high-performance encoders purely on unlabeled data.
We propose SSLMem, a framework for defining memorization within SSL.
arXiv Detail & Related papers (2024-01-19T11:32:47Z)
- FlexSSL: A Generic and Efficient Framework for Semi-Supervised Learning [19.774959310191623]
We develop a generic and efficient learning framework called FlexSSL.
We show that FlexSSL can consistently enhance the performance of semi-supervised learning algorithms.
arXiv Detail & Related papers (2023-12-28T08:31:56Z)
- Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning [62.839109775887025]
The Slot Induction (SI) task aims to induce slot boundaries without explicit knowledge of token-level slot annotations.
We propose leveraging Unsupervised Pre-trained Language Model (PLM) Probing and Contrastive Learning mechanism to exploit unsupervised semantic knowledge extracted from PLM.
Our approach is shown to be effective on the SI task and capable of bridging the gap with token-level supervised models on two NLU benchmark datasets.
arXiv Detail & Related papers (2023-08-09T05:08:57Z)
- Reverse Engineering Self-Supervised Learning [17.720366509919167]
Self-supervised learning (SSL) is a powerful tool in machine learning.
This paper presents an in-depth empirical analysis of SSL-trained representations.
arXiv Detail & Related papers (2023-05-24T23:15:28Z)
- Active Self-Supervised Learning: A Few Low-Cost Relationships Are All You Need [34.013568381942775]
Self-Supervised Learning (SSL) has emerged as the solution of choice to learn transferable representations from unlabeled data.
In this work, we formalize and generalize this principle through Positive Active Learning (PAL) where an oracle queries semantic relationships between samples.
First, it unveils a theoretically grounded learning framework beyond SSL, based on similarity graphs, that can be extended to tackle supervised and semi-supervised learning depending on the employed oracle.
Second, it provides a consistent algorithm to embed a priori knowledge, e.g. some observed labels, into any SSL losses without any change in the training pipeline.
arXiv Detail & Related papers (2023-03-27T14:44:39Z)
- Robust Meta-Representation Learning via Global Label Inference and Classification [42.81340522184904]
We introduce Meta Label Learning (MeLa), a novel meta-learning algorithm that learns task relations by inferring global labels across tasks.
MeLa outperforms existing methods across a diverse range of benchmarks, in particular under a more challenging setting where the number of training tasks is limited and labels are task-specific.
arXiv Detail & Related papers (2022-12-22T13:46:47Z)
- Improving Self-Supervised Learning by Characterizing Idealized Representations [155.1457170539049]
We prove necessary and sufficient conditions for any task invariant to given data augmentations.
For contrastive learning, our framework prescribes simple but significant improvements to previous methods.
For non-contrastive learning, we use our framework to derive a simple and novel objective.
arXiv Detail & Related papers (2022-09-13T18:01:03Z)
- Robust Deep Semi-Supervised Learning: A Brief Introduction [63.09703308309176]
Semi-supervised learning (SSL) aims to improve learning performance by leveraging unlabeled data when labels are insufficient.
SSL with deep models has proven to be successful on standard benchmark tasks.
However, such models are still vulnerable to various robustness threats in real-world applications.
arXiv Detail & Related papers (2022-02-12T04:16:41Z)
- The Role of Global Labels in Few-Shot Classification and How to Infer Them [55.64429518100676]
Few-shot learning is a central problem in meta-learning, where learners must quickly adapt to new tasks.
We propose Meta Label Learning (MeLa), a novel algorithm that infers global labels and obtains robust few-shot models via standard classification.
arXiv Detail & Related papers (2021-08-09T14:07:46Z)
- Graph-based Semi-supervised Learning: A Comprehensive Review [51.26862262550445]
Semi-supervised learning (SSL) has tremendous practical value due to its ability to utilize both labeled and unlabeled data.
An important class of SSL methods naturally represents data as graphs, corresponding to graph-based semi-supervised learning (GSSL) methods.
GSSL methods have demonstrated advantages in various domains due to their structural uniqueness, universality of application, and scalability to large-scale data.
arXiv Detail & Related papers (2021-02-26T05:11:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.