Teaching an Active Learner with Contrastive Examples
- URL: http://arxiv.org/abs/2110.14888v2
- Date: Fri, 29 Oct 2021 06:09:27 GMT
- Title: Teaching an Active Learner with Contrastive Examples
- Authors: Chaoqi Wang, Adish Singla, Yuxin Chen
- Abstract summary: We study the problem of active learning with the added twist that the learner is assisted by a helpful teacher.
We investigate an efficient teaching algorithm that adaptively picks contrastive examples.
We derive strong performance guarantees for our algorithm based on two problem-dependent parameters.
- Score: 35.926575235046634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of active learning with the added twist that the learner
is assisted by a helpful teacher. We consider the following natural interaction
protocol: At each round, the learner proposes a query asking for the label of
an instance $x^q$; the teacher provides the requested labeled pair $\{x^q, y^q\}$,
along with explanatory information to guide the learning process. In this
paper, we view this information in the form of an additional contrastive
example ($\{x^c, y^c\}$) where $x^c$ is picked from a set constrained by $x^q$
(e.g., dissimilar instances with the same label). Our focus is to design a
teaching algorithm that can provide an informative sequence of contrastive
examples to the learner to speed up the learning process. We show that this
leads to a challenging sequence optimization problem where the algorithm's
choices at a given round depend on the history of interactions. We investigate
an efficient teaching algorithm that adaptively picks these contrastive
examples. We derive strong performance guarantees for our algorithm based on
two problem-dependent parameters and further show that for specific types of
active learners (e.g., a generalized binary search learner), the proposed
teaching algorithm exhibits strong approximation guarantees. Finally, we
illustrate our bounds and demonstrate the effectiveness of our teaching
framework via two numerical case studies.
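The interaction protocol above can be sketched in a toy setting. The 1-D threshold hypothesis class, the greedy contrastive-pick rule, and all names below (`gbs_query`, `teacher_contrast`, the pools `X` and `THRESHOLDS`) are illustrative assumptions for this sketch, not the paper's exact algorithm or guarantees:

```python
# Toy sketch of the protocol: a generalized-binary-search (GBS) learner queries
# labels, and a teacher adds one contrastive example per round (same label as
# the query, chosen greedily to prune the most hypotheses).

# Illustrative hypothesis class: 1-D thresholds h_t(x) = 1 iff x >= t.
def label(t, x):
    return int(x >= t)

X = list(range(10))           # instance pool
THRESHOLDS = list(range(11))  # candidate hypotheses
t_star = 7                    # ground-truth threshold, known only to the teacher

version_space = set(THRESHOLDS)

def gbs_query(vs, pool):
    # GBS learner: pick the instance whose label splits the current
    # version space as evenly as possible.
    return min(pool, key=lambda x: abs(sum(label(t, x) for t in vs) - len(vs) / 2))

def teacher_contrast(vs, xq, yq, pool):
    # Teacher: among instances sharing the query's label y^q, greedily pick
    # the contrastive example x^c that disagrees with the most surviving
    # hypotheses (an adaptive, history-dependent choice).
    candidates = [x for x in pool if x != xq and label(t_star, x) == yq]
    if not candidates:
        return None
    return max(candidates,
               key=lambda x: sum(1 for t in vs if label(t, x) != label(t_star, x)))

rounds = 0
while len(version_space) > 1:
    xq = gbs_query(version_space, X)
    yq = label(t_star, xq)
    version_space = {t for t in version_space if label(t, xq) == yq}
    xc = teacher_contrast(version_space, xq, yq, X)
    if xc is not None:
        yc = label(t_star, xc)
        version_space = {t for t in version_space if label(t, xc) == yc}
    rounds += 1

print(rounds, version_space)  # converges to the single hypothesis {t_star}
```

In this toy run the contrastive example roughly halves the surviving version space again after each query, which is the intuition behind the speed-up the teaching algorithm aims for.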
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
- $Se^2$: Sequential Example Selection for In-Context Learning [83.17038582333716]
Large language models (LLMs) for in-context learning (ICL) need to be activated by demonstration examples.
Prior work has extensively explored the selection of examples for ICL, predominantly following the "select then organize" paradigm.
In this paper, we formulate the problem as a $Se$quential $Se$lection problem and introduce $Se2$, a sequential-aware method.
arXiv Detail & Related papers (2024-02-21T15:35:04Z) - Optimally Teaching a Linear Behavior Cloning Agent [29.290523215922015]
We study optimal teaching of Linear Behavior Cloning (LBC) learners.
In this setup, the teacher can select which states to demonstrate to an LBC learner.
The learner maintains a version space of infinitely many linear hypotheses consistent with the demonstrations.
arXiv Detail & Related papers (2023-11-26T19:47:39Z)
- Contextual Bandits and Imitation Learning via Preference-Based Active Queries [17.73844193143454]
We consider the problem of contextual bandits and imitation learning, where the learner lacks direct knowledge of the executed action's reward.
Instead, the learner can actively query an expert at each round to compare two actions and receive noisy preference feedback.
The learner's objective is two-fold: to minimize the regret associated with the executed actions, while simultaneously, minimizing the number of comparison queries made to the expert.
arXiv Detail & Related papers (2023-07-24T16:36:04Z)
- Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z)
- Active Learning with Label Comparisons [41.82179028046654]
We show that finding the best of $k$ labels can be done with $k-1$ active queries.
A key element in our analysis is the "label neighborhood graph" of the true distribution.
arXiv Detail & Related papers (2022-04-10T12:13:46Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
- Iterative Teaching by Label Synthesis [40.11199328434789]
We propose a label synthesis teaching framework for iterative machine teaching.
We show that this framework can avoid costly example selection while still provably achieving exponential teachability.
arXiv Detail & Related papers (2021-10-27T13:45:29Z)
- Learning by Examples Based on Multi-level Optimization [12.317568257671427]
We propose a novel learning approach called Learning By Examples (LBE).
Our approach automatically retrieves a set of training examples that are similar to query examples and predicts labels for query examples by using class labels of the retrieved examples.
We conduct extensive experiments on various benchmarks where the results demonstrate the effectiveness of our method on both supervised and few-shot learning.
arXiv Detail & Related papers (2021-09-22T16:33:06Z)
- Distribution Matching for Machine Teaching [64.39292542263286]
Machine teaching is an inverse problem of machine learning that aims at steering the student learner towards its target hypothesis.
Previous studies on machine teaching focused on balancing the teaching risk and cost to find those best teaching examples.
This paper presents a distribution matching-based machine teaching strategy.
arXiv Detail & Related papers (2021-05-06T09:32:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.