Iterative Teaching by Label Synthesis
- URL: http://arxiv.org/abs/2110.14432v1
- Date: Wed, 27 Oct 2021 13:45:29 GMT
- Title: Iterative Teaching by Label Synthesis
- Authors: Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Schölkopf, Adrian Weller
- Abstract summary: We propose a label synthesis teaching framework for iterative machine teaching.
We show that this framework can avoid costly example selection while still provably achieving exponential teachability.
- Score: 40.11199328434789
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the problem of iterative machine teaching, where a
teacher provides examples sequentially based on the current iterative learner.
In contrast to previous methods that have to scan over the entire pool and
select teaching examples from it in each iteration, we propose a label
synthesis teaching framework where the teacher randomly selects input teaching
examples (e.g., images) and then synthesizes suitable outputs (e.g., labels)
for them. We show that this framework can avoid costly example selection while
still provably achieving exponential teachability. We propose multiple novel
teaching algorithms in this framework. Finally, we empirically demonstrate the
value of our framework.
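To make the idea concrete, here is a minimal sketch (not the paper's actual algorithm) of label synthesis teaching for a linear learner with squared loss: the teacher draws a random input, then synthesizes the label that greedily minimizes the learner's distance to the target concept after one SGD step. All names (`synthesize_label`, `w_star`, `eta`) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5
w_star = rng.normal(size=dim)  # target concept the teacher wants to teach
w = np.zeros(dim)              # learner's current weights
eta = 0.1                      # learner's step size

def synthesize_label(w, w_star, x, eta):
    # Greedy synthesized label: minimizes ||w_new - w_star||^2 after the
    # learner's one SGD step w_new = w - eta * (w @ x - y) * x.
    # Setting the derivative w.r.t. y to zero gives:
    return float(w @ x - x @ (w - w_star) / (eta * (x @ x)))

for t in range(200):
    x = rng.normal(size=dim)           # randomly selected input: no pool scan
    y = synthesize_label(w, w_star, x, eta)
    w = w - eta * (w @ x - y) * x      # learner's SGD step on squared loss

print(np.linalg.norm(w - w_star))      # distance to the target concept
```

With this greedy label, each update projects out the component of `w - w_star` along the random input direction, so the distance to the target contracts geometrically in expectation, which mirrors the exponential teachability the abstract refers to (under these simplifying linear/squared-loss assumptions).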
Related papers
- Iterative Teaching by Data Hallucination [37.246902903546896]
We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner.
We propose data hallucination teaching (DHT) where the teacher can generate input data intelligently based on labels, the learner's status and the target concept.
arXiv Detail & Related papers (2022-10-31T16:48:47Z)
- Label Matching Semi-Supervised Object Detection [85.99282969977541]
Semi-supervised object detection has made significant progress with the development of mean teacher driven self-training.
The label mismatch problem has not been fully explored in previous works, leading to severe confirmation bias during self-training.
We propose a simple yet effective LabelMatch framework from two different yet complementary perspectives.
arXiv Detail & Related papers (2022-06-14T05:59:41Z)
- Fine-Grained Visual Entailment [51.66881737644983]
We propose an extension of this task, where the goal is to predict the logical relationship of fine-grained knowledge elements within a piece of text to an image.
Unlike prior work, our method is inherently explainable and makes logical predictions at different levels of granularity.
We evaluate our method on a new dataset of manually annotated knowledge elements and show that our method achieves 68.18% accuracy at this challenging task.
arXiv Detail & Related papers (2022-03-29T16:09:38Z)
- Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- Teaching an Active Learner with Contrastive Examples [35.926575235046634]
We study the problem of active learning with the added twist that the learner is assisted by a helpful teacher.
We investigate an efficient teaching algorithm that adaptively picks contrastive examples.
We derive strong performance guarantees for our algorithm based on two problem-dependent parameters.
arXiv Detail & Related papers (2021-10-28T05:00:55Z)
- Learning by Examples Based on Multi-level Optimization [12.317568257671427]
We propose a novel learning approach called Learning By Examples (LBE).
Our approach automatically retrieves a set of training examples that are similar to query examples and predicts labels for query examples by using class labels of the retrieved examples.
We conduct extensive experiments on various benchmarks where the results demonstrate the effectiveness of our method on both supervised and few-shot learning.
arXiv Detail & Related papers (2021-09-22T16:33:06Z)
- Reordering Examples Helps during Priming-based Few-Shot Learning [6.579039107070663]
We show that the proposed method, PERO, can learn to generalize efficiently using as few as 10 examples.
We demonstrate the effectiveness of the proposed method on the tasks of sentiment classification, natural language inference and fact retrieval.
arXiv Detail & Related papers (2021-06-03T11:02:36Z)
- Distribution Matching for Machine Teaching [64.39292542263286]
Machine teaching is an inverse problem of machine learning that aims at steering the student learner towards its target hypothesis.
Previous studies on machine teaching focused on balancing teaching risk and cost to find the best teaching examples.
This paper presents a distribution matching-based machine teaching strategy.
arXiv Detail & Related papers (2021-05-06T09:32:57Z)
- Noisy Self-Knowledge Distillation for Text Summarization [83.49809205891496]
We apply self-knowledge distillation to text summarization which we argue can alleviate problems with maximum-likelihood training.
Our student summarization model is trained with guidance from a teacher which generates smoothed labels to help regularize training.
We demonstrate experimentally on three benchmarks that our framework boosts the performance of both pretrained and non-pretrained summarizers.
arXiv Detail & Related papers (2020-09-15T12:53:09Z)
- Iterative Machine Teaching without Teachers [12.239246363539634]
Existing studies on iterative machine teaching assume that there are teachers who know the true answers of all teaching examples.
In this study, we consider an unsupervised case where such teachers do not exist.
Students are given a teaching example at each iteration, but there is no guarantee that the corresponding label is correct.
arXiv Detail & Related papers (2020-06-27T11:21:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.