Learning to Learn in a Semi-Supervised Fashion
- URL: http://arxiv.org/abs/2008.11203v1
- Date: Tue, 25 Aug 2020 17:59:53 GMT
- Title: Learning to Learn in a Semi-Supervised Fashion
- Authors: Yun-Chun Chen, Chao-Te Chou, Yu-Chiang Frank Wang
- Abstract summary: We present a novel meta-learning scheme to address semi-supervised learning from both labeled and unlabeled data.
Our strategy can be viewed as a self-supervised learning scheme, which can be applied to fully supervised learning tasks.
- Score: 41.38876517851431
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To address semi-supervised learning from both labeled and unlabeled data, we
present a novel meta-learning scheme. We particularly consider the setting in
which labeled and unlabeled data share disjoint ground-truth label sets, as in
tasks like person re-identification or image retrieval. Our learning scheme
exploits the idea of leveraging information from labeled to unlabeled data.
Instead of fitting the associated class-wise similarity scores as most
meta-learning algorithms do, we propose to derive semantics-oriented similarity
representations from labeled data, and transfer such representations to
unlabeled ones. Thus, our strategy can be viewed as a self-supervised learning
scheme, which can be applied to fully supervised learning tasks for improved
performance. Our experiments on various tasks and settings confirm the
effectiveness of our proposed approach and its superiority over the
state-of-the-art methods.
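As a rough illustration of the abstract's core idea, the sketch below derives a "semantics-oriented similarity representation" as a vector of cosine similarities between a sample's embedding and per-class prototypes computed from labeled data. All names, dimensions, and the prototype-based formulation are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def class_prototypes(features, labels):
    """Mean embedding per ground-truth class in the labeled set."""
    classes = np.unique(labels)
    return np.stack([features[labels == c].mean(axis=0) for c in classes])

def similarity_representation(x, prototypes):
    """Cosine similarity of one embedding to every class prototype.

    The resulting vector describes a sample by its relation to known
    semantics, so it can in principle be transferred to unlabeled
    samples whose own classes are disjoint from the labeled ones.
    """
    x = x / np.linalg.norm(x)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return p @ x

# Toy example: 6 labeled samples, 3 classes, 4-dim embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
labels = np.array([0, 0, 1, 1, 2, 2])
protos = class_prototypes(feats, labels)
rep = similarity_representation(rng.normal(size=4), protos)
print(rep.shape)  # one similarity score per labeled class
```

The key design point mirrored here is that the representation is defined relative to labeled-class semantics rather than by fitting class-wise similarity scores directly.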
Related papers
- Semi-Supervised Variational Adversarial Active Learning via Learning to Rank and Agreement-Based Pseudo Labeling [6.771578432805963]
Active learning aims to alleviate the amount of labor involved in data labeling by automating the selection of unlabeled samples.
We introduce novel techniques that significantly improve the use of abundant unlabeled data during training.
We demonstrate the superior performance of our approach over the state of the art on various image classification and segmentation benchmark datasets.
arXiv Detail & Related papers (2024-08-23T00:35:07Z) - A review on discriminative self-supervised learning methods [6.24302896438145]
Self-supervised learning has emerged as a method to extract robust features from unlabeled data.
This paper provides a review of discriminative approaches of self-supervised learning within the domain of computer vision.
arXiv Detail & Related papers (2024-05-08T11:15:20Z) - Leveraging Ensembles and Self-Supervised Learning for Fully-Unsupervised Person Re-Identification and Text Authorship Attribution [77.85461690214551]
Learning from fully-unlabeled data is challenging in Multimedia Forensics problems, such as Person Re-Identification and Text Authorship Attribution.
Recent self-supervised learning methods have been shown to be effective when dealing with fully-unlabeled data in cases where the underlying classes have significant semantic differences.
We propose a strategy to tackle Person Re-Identification and Text Authorship Attribution by enabling learning from unlabeled data even when samples from different classes are not prominently diverse.
arXiv Detail & Related papers (2022-02-07T13:08:11Z) - Open-Set Representation Learning through Combinatorial Embedding [62.05670732352456]
We are interested in identifying novel concepts in a dataset through representation learning based on the examples in both labeled and unlabeled classes.
We propose a learning approach, which naturally clusters examples in unseen classes using the compositional knowledge given by multiple supervised meta-classifiers on heterogeneous label spaces.
The proposed algorithm discovers novel concepts via a joint optimization that enhances the discriminativeness of unseen classes while learning representations of known classes that generalize to novel ones.
arXiv Detail & Related papers (2021-06-29T11:51:57Z) - OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data [65.19205979542305]
Unlabeled data may include out-of-class samples in practice.
OpenCoS is a method for handling this realistic semi-supervised learning scenario.
arXiv Detail & Related papers (2021-06-29T06:10:05Z) - Adversarial Knowledge Transfer from Unlabeled Data [62.97253639100014]
We present a novel Adversarial Knowledge Transfer framework for transferring knowledge from internet-scale unlabeled data to improve the performance of a classifier.
An important novel aspect of our method is that the unlabeled source data can be of different classes from those of the labeled target data, and there is no need to define a separate pretext task.
arXiv Detail & Related papers (2020-08-13T08:04:27Z) - Automatically Discovering and Learning New Visual Categories with Ranking Statistics [145.89790963544314]
We tackle the problem of discovering novel classes in an image collection given labelled examples of other classes.
We learn a general-purpose clustering model and use the latter to identify the new classes in the unlabelled data.
We evaluate our approach on standard classification benchmarks and outperform current methods for novel category discovery by a significant margin.
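For context, the ranking-statistics idea can be sketched as follows: two unlabelled samples receive a positive pairwise pseudo-label when the top-k ranked dimensions of their feature vectors coincide, and these pairwise labels then drive clustering. This toy version is a simplified illustration under assumed names, not the authors' actual code:

```python
import numpy as np

def topk_indices(v, k):
    """Indices of the k largest entries of a feature vector."""
    return set(np.argsort(v)[-k:])

def rank_stat_pair_label(f1, f2, k=3):
    """Positive pair iff the top-k dimension sets match.

    Comparing rank statistics instead of raw values makes the
    pairwise pseudo-label robust to differences in feature
    magnitude between the two samples.
    """
    return topk_indices(f1, k) == topk_indices(f2, k)

a = np.array([0.9, 0.8, 0.7, 0.1, 0.0])
b = np.array([0.7, 0.9, 0.6, 0.2, 0.1])  # same top-3 dims, different order
c = np.array([0.1, 0.2, 0.3, 0.9, 0.8])  # different top-3 dims
print(rank_stat_pair_label(a, b))  # True: both rank dims {0, 1, 2} highest
print(rank_stat_pair_label(a, c))  # False
```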
arXiv Detail & Related papers (2020-02-13T18:53:32Z) - Active Learning for Entity Alignment [25.234850999782953]
We show how the labeling of entity alignments is different from assigning class labels to single instances.
One of our main findings is that passive learning approaches, which can be efficiently precomputed and deployed more easily, achieve performance comparable to the active learning strategies.
arXiv Detail & Related papers (2020-01-24T10:33:08Z) - End-to-end Learning, with or without Labels [2.298932494750101]
We present an approach for end-to-end learning that allows one to jointly learn a feature representation from unlabeled data.
The proposed approach can be used with any amount of labeled and unlabeled data, gracefully adjusting to the amount of supervision.
arXiv Detail & Related papers (2019-12-30T16:11:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.