End-to-end Learning, with or without Labels
- URL: http://arxiv.org/abs/1912.12979v1
- Date: Mon, 30 Dec 2019 16:11:40 GMT
- Title: End-to-end Learning, with or without Labels
- Authors: Corinne Jones, Vincent Roulet, Zaid Harchaoui
- Abstract summary: We present an approach for end-to-end learning that allows one to jointly learn a feature representation from unlabeled data and predict labels for that data.
The proposed approach can be used with any amount of labeled and unlabeled data, gracefully adjusting to the amount of supervision.
- Score: 2.298932494750101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an approach for end-to-end learning that allows one to jointly
learn a feature representation from unlabeled data (with or without labeled
data) and predict labels for unlabeled data. The feature representation is
assumed to be specified in a differentiable programming framework, that is, as
a parameterized mapping amenable to automatic differentiation. The proposed
approach can be used with any amount of labeled and unlabeled data, gracefully
adjusting to the amount of supervision. We provide experimental results
illustrating the effectiveness of the approach.
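The objective below is a minimal sketch of this setup (not the authors' released code): a differentiable feature map phi is trained end to end on a weighted sum of a supervised loss over labeled data and an unsupervised, clustering-style loss over unlabeled data. The soft k-means term and the mixing weight lam are illustrative assumptions.

```python
# Sketch only: the soft k-means regularizer and the weight `lam` are
# assumptions for illustration, not details taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMap(nn.Module):
    """Parameterized mapping amenable to automatic differentiation."""
    def __init__(self, in_dim=32, feat_dim=16, n_classes=10):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
        self.head = nn.Linear(feat_dim, n_classes)  # classifier on top of features
        self.centroids = nn.Parameter(torch.randn(n_classes, feat_dim))  # cluster centers

def joint_loss(model, x_lab, y_lab, x_unlab, lam=0.5, tau=1.0):
    """(1 - lam) * clustering loss on unlabeled data + lam * supervised loss."""
    sup = F.cross_entropy(model.head(model.phi(x_lab)), y_lab) if x_lab is not None else 0.0
    z = model.phi(x_unlab)                       # features of unlabeled points
    d2 = torch.cdist(z, model.centroids).pow(2)  # squared distances to centers
    resp = F.softmax(-d2 / tau, dim=1)           # soft cluster assignments
    unsup = (resp * d2).sum(dim=1).mean()        # soft k-means objective
    return (1.0 - lam) * unsup + lam * sup
```

With lam near 1 this reduces to ordinary supervised training, and with lam at 0 it is purely unsupervised clustering of the learned features, mirroring the abstract's claim of gracefully adjusting to the amount of supervision.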
Related papers
- You can't handle the (dirty) truth: Data-centric insights improve pseudo-labeling [60.27812493442062]
We show the importance of investigating labeled data quality to improve any pseudo-labeling method.
Specifically, we introduce a novel data characterization and selection framework called DIPS to extend pseudo-labeling.
We demonstrate the applicability and impact of DIPS for various pseudo-labeling methods across an extensive range of real-world datasets.
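A hypothetical sketch of the data-selection idea described in this summary: characterize each labeled example by its learning dynamics (here, per-checkpoint confidence in its given label) and keep only stable, confident samples before running any pseudo-labeling method. The statistics and thresholds are illustrative assumptions, not the DIPS formulas.

```python
# Illustrative selection rule; thresholds are assumptions, not DIPS's exact criteria.
import numpy as np

def select_labeled(probs_over_checkpoints, y, conf_thresh=0.7, var_thresh=0.1):
    """probs_over_checkpoints: (n_checkpoints, n_samples, n_classes) softmax outputs."""
    conf = probs_over_checkpoints[:, np.arange(len(y)), y]  # confidence in the given label
    mean_conf = conf.mean(axis=0)                           # average over training
    variability = conf.std(axis=0)                          # instability over training
    keep = (mean_conf >= conf_thresh) & (variability <= var_thresh)
    return np.flatnonzero(keep)                             # indices of selected samples
```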
arXiv Detail & Related papers (2024-06-19T17:58:40Z)
- Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data [70.25049762295193]
We introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated data during training.
We propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels for unlabeled data.
Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance.
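A hedged sketch of the soft-curriculum idea summarized above: unlabeled or noisy-labeled images get pseudo-labels from an auxiliary classifier, and each instance receives a weight in the adversarial loss based on that classifier's confidence. The weighting rule is an assumption for illustration, not the paper's exact formula.

```python
# Confidence-as-weight is an illustrative assumption.
import torch
import torch.nn.functional as F

def soft_curriculum_targets(logits):
    """Return (pseudo_labels, instance_weights) from classifier logits."""
    probs = F.softmax(logits, dim=1)
    weights, pseudo = probs.max(dim=1)  # confidence as weight, argmax as label
    return pseudo, weights

def weighted_adversarial_loss(disc_out, real, weights):
    """Instance-weighted non-saturating GAN loss (illustrative)."""
    target = torch.ones_like(disc_out) if real else torch.zeros_like(disc_out)
    per_sample = F.binary_cross_entropy_with_logits(disc_out, target, reduction="none")
    return (weights * per_sample.squeeze(-1)).mean()
```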
arXiv Detail & Related papers (2023-07-17T08:31:59Z)
- OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data [65.19205979542305]
Unlabeled data may include out-of-class samples in practice.
OpenCoS is a method for handling this realistic semi-supervised learning scenario.
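A minimal sketch (an assumption-laden stand-in, not OpenCoS itself) of one way a contrastive embedding can flag out-of-class unlabeled samples: build class prototypes from labeled embeddings and mark unlabeled points whose best cosine similarity to any prototype falls below a threshold.

```python
# Prototype-similarity thresholding is illustrative; assumes every class
# has at least one labeled example.
import torch
import torch.nn.functional as F

def flag_out_of_class(z_lab, y_lab, z_unlab, n_classes, thresh=0.5):
    """z_*: embeddings from a contrastively pretrained encoder."""
    z_lab = F.normalize(z_lab, dim=1)
    z_unlab = F.normalize(z_unlab, dim=1)
    protos = torch.stack([z_lab[y_lab == c].mean(dim=0) for c in range(n_classes)])
    protos = F.normalize(protos, dim=1)
    sims = z_unlab @ protos.t()   # cosine similarity to each class prototype
    best, _ = sims.max(dim=1)
    return best < thresh          # True = likely out-of-class sample
```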
arXiv Detail & Related papers (2021-06-29T06:10:05Z)
- GuidedMix-Net: Learning to Improve Pseudo Masks Using Labeled Images as Reference [153.354332374204]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
We first introduce a feature alignment objective between labeled and unlabeled data to capture potentially similar image pairs.
MITrans is shown to be a powerful knowledge module for further progressively refining the features of unlabeled data.
Along with supervised learning for labeled data, the prediction of unlabeled data is jointly learned with the generated pseudo masks.
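A hypothetical sketch of the pairing-and-mixing idea in this summary: each unlabeled image is paired with its most similar labeled image in feature space, the pair is mixed, and confident predictions supply a pseudo mask for the unlabeled image. The names and the simple mixup rule are assumptions for illustration.

```python
# Nearest-neighbor pairing and fixed-alpha mixup are illustrative choices.
import torch
import torch.nn.functional as F

def pair_and_mix(feat_lab, feat_unlab, img_lab, img_unlab, alpha=0.5):
    """Pair each unlabeled image with its nearest labeled image, then mix."""
    sim = F.normalize(feat_unlab, dim=1) @ F.normalize(feat_lab, dim=1).t()
    nearest = sim.argmax(dim=1)  # index of the most similar labeled image
    mixed = alpha * img_unlab + (1 - alpha) * img_lab[nearest]
    return mixed, nearest

def pseudo_mask(seg_logits, conf_thresh=0.9):
    """Keep only confident pixels of the predicted mask as pseudo ground truth."""
    probs = F.softmax(seg_logits, dim=1)  # (B, C, H, W)
    conf, mask = probs.max(dim=1)
    mask[conf < conf_thresh] = -1         # ignore_index for the segmentation loss
    return mask
```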
arXiv Detail & Related papers (2021-06-29T02:48:45Z)
- A Novel Perspective for Positive-Unlabeled Learning via Noisy Labels [49.990938653249415]
This research presents a methodology that assigns initial pseudo-labels to unlabeled data, treats the result as noisy-labeled data, and trains a deep neural network on it.
Experimental results demonstrate that the proposed method significantly outperforms the state-of-the-art methods on several benchmark datasets.
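A hedged sketch of the positive-unlabeled setup in this summary: unlabeled points receive an initial pseudo-label (here, simply "negative"), the combined set is treated as a noisy-labeled dataset, and a network is trained on it. The paper's noise-handling machinery is omitted; plain cross-entropy stands in for it.

```python
# Toy end-to-end example; labeling all unlabeled data negative is the
# classic PU initialization, used here purely for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_noisy_dataset(x_pos, x_unlab):
    """Positives keep label 1; unlabeled data gets noisy pseudo-label 0."""
    x = torch.cat([x_pos, x_unlab], dim=0)
    y = torch.cat([torch.ones(len(x_pos)), torch.zeros(len(x_unlab))]).long()
    return x, y

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = make_noisy_dataset(torch.randn(100, 20), torch.randn(400, 20))
for _ in range(10):  # train on the noisy labels
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```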
arXiv Detail & Related papers (2021-03-08T11:46:02Z)
- Learning to Learn in a Semi-Supervised Fashion [41.38876517851431]
We present a novel meta-learning scheme to address semi-supervised learning from both labeled and unlabeled data.
Our strategy can be viewed as a self-supervised learning scheme, which can be applied to fully supervised learning tasks.
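A minimal sketch of the meta-learning flavor described above: an inner step trains a classifier on soft pseudo-labels produced by a labeler network, and the outer (meta) objective checks the adapted classifier on held-out labeled data, so gradients flow back into the labeler. The single-inner-step linear model is an assumption to keep the bilevel structure readable.

```python
# Bilevel sketch; requires W to be created with requires_grad=True.
import torch
import torch.nn.functional as F

def meta_step(W, labeler, x_pseudo, x_val, y_val, inner_lr=0.1):
    """One meta-iteration for a linear classifier with weights W (C x D)."""
    soft = F.softmax(labeler(x_pseudo), dim=1)             # soft pseudo-labels
    inner_loss = F.cross_entropy(x_pseudo @ W.t(), soft)   # train on pseudo-labels
    grad_W, = torch.autograd.grad(inner_loss, W, create_graph=True)
    W_adapted = W - inner_lr * grad_W                      # differentiable inner update
    return F.cross_entropy(x_val @ W_adapted.t(), y_val)   # meta-loss on held-out labels

W = torch.randn(5, 16, requires_grad=True)   # linear classifier weights (C x D)
labeler = torch.nn.Linear(16, 5)             # pseudo-labeling network
loss = meta_step(W, labeler, torch.randn(32, 16),
                 torch.randn(8, 16), torch.randint(0, 5, (8,)))
loss.backward()  # gradients reach the labeler through the inner update
```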
arXiv Detail & Related papers (2020-08-25T17:59:53Z)
- Adversarial Knowledge Transfer from Unlabeled Data [62.97253639100014]
We present a novel Adversarial Knowledge Transfer framework for transferring knowledge from internet-scale unlabeled data to improve the performance of a classifier.
An important novel aspect of our method is that the unlabeled source data can be of different classes from those of the labeled target data, and there is no need to define a separate pretext task.
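A hedged sketch of adversarial feature alignment in the spirit of this summary: a discriminator tries to tell unlabeled-source features from labeled-target features, while the encoder is trained to fool it, so knowledge transfers without any pretext task. The architecture and losses are illustrative assumptions.

```python
# Standard domain-adversarial pattern used as a stand-in for the paper's framework.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 16))
disc = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))

def disc_loss(z_src, z_tgt):
    """Discriminator: source features -> 1, target features -> 0."""
    logits = torch.cat([disc(z_src.detach()), disc(z_tgt.detach())])
    labels = torch.cat([torch.ones(len(z_src), 1), torch.zeros(len(z_tgt), 1)])
    return F.binary_cross_entropy_with_logits(logits, labels)

def encoder_alignment_loss(z_src):
    """Encoder tries to make unlabeled-source features look like target ones."""
    return F.binary_cross_entropy_with_logits(disc(z_src), torch.zeros(len(z_src), 1))
```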
arXiv Detail & Related papers (2020-08-13T08:04:27Z)