Learning by Examples Based on Multi-level Optimization
- URL: http://arxiv.org/abs/2109.10824v1
- Date: Wed, 22 Sep 2021 16:33:06 GMT
- Title: Learning by Examples Based on Multi-level Optimization
- Authors: Shentong Mo, Pengtao Xie
- Abstract summary: We propose a novel learning approach called Learning By Examples (LBE).
Our approach automatically retrieves a set of training examples that are similar to query examples and predicts labels for query examples by using class labels of the retrieved examples.
We conduct extensive experiments on various benchmarks where the results demonstrate the effectiveness of our method on both supervised and few-shot learning.
- Score: 12.317568257671427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning by examples, i.e., learning to solve a new problem by examining how
similar problems are solved, is an effective method in human learning.
When a student learns a new topic, they identify exemplar topics that are
similar to the new one and study those exemplars to deepen their
understanding of the new topic. We aim to investigate whether this powerful
learning skill can be borrowed from humans to improve machine learning as well.
In this work, we propose a novel learning approach called Learning By Examples
(LBE). Our approach automatically retrieves a set of training examples that are
similar to query examples and predicts labels for query examples by using class
labels of the retrieved examples. We formulate LBE as a three-level
optimization framework involving three stages of learning: learning a
Siamese network to retrieve similar examples; learning a matching network to
make predictions on query examples by leveraging the class labels of the
retrieved similar examples; and learning the "ground-truth" similarities
between training examples by minimizing the validation loss. We develop an
efficient algorithm to solve the LBE problem and conduct extensive experiments
on various benchmarks, where the results demonstrate the effectiveness of our
method on both supervised and few-shot learning.
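To make the three-stage formulation concrete, the sketch below is a minimal PyTorch illustration, not the authors' implementation: the toy data, network sizes, and loss weighting are all assumptions, and the nested three-level optimization is collapsed into a single joint update for brevity. A Siamese encoder is trained to match learnable "ground-truth" similarity targets (stage 1), a matching head predicts the query label as a similarity-weighted vote over the labels of the top-k retrieved training examples (stage 2), and the validation loss on the query updates both the matching head and the similarity targets (stage 3).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for the training set and one validation query (assumptions).
num_train, num_classes, dim, k = 20, 4, 16, 5
x_train = torch.randn(num_train, dim)
y_train = torch.randint(0, num_classes, (num_train,))
x_query = torch.randn(1, dim)
y_query = torch.randint(0, num_classes, (1,))

# Stage 3 variables: learnable "ground-truth" similarities between the
# query and each training example, updated here via the validation loss.
sim_targets = nn.Parameter(torch.zeros(num_train))

# Stage 1: a Siamese encoder; similarity is the cosine of shared embeddings.
encoder = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 32))

def siamese_sim(a, b):
    za, zb = encoder(a), encoder(b)
    return F.cosine_similarity(za.unsqueeze(1), zb.unsqueeze(0), dim=-1)

# Stage 2: a matching head turns retrieved labels into a query prediction
# via a similarity-weighted soft vote.
class MatchingHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.temp = nn.Parameter(torch.ones(1))  # learnable temperature

    def forward(self, sims, labels):
        w = F.softmax(sims * self.temp, dim=-1)          # (1, k)
        onehot = F.one_hot(labels, num_classes).float()  # (k, C)
        return (w.unsqueeze(-1) * onehot).sum(dim=1)     # (1, C) class probs

head = MatchingHead()
params = list(encoder.parameters()) + list(head.parameters()) + [sim_targets]
opt = torch.optim.Adam(params, lr=1e-2)

for step in range(100):
    targets = torch.sigmoid(sim_targets)           # "ground-truth" sims
    sims = siamese_sim(x_query, x_train)           # (1, num_train)
    # Stage 1: fit the Siamese similarities to the current targets.
    loss_sim = F.mse_loss(sims.squeeze(0), targets.detach())
    # Retrieve the top-k most similar training examples.
    topk = sims.squeeze(0).detach().topk(k).indices
    # Stages 2-3: predict the query label from the retrieved labels; the
    # validation loss updates both the head and the similarity targets.
    probs = head(targets[topk].unsqueeze(0), y_train[topk])
    loss_val = F.nll_loss(torch.log(probs + 1e-8), y_query)
    loss = loss_sim + loss_val
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper the three levels are nested, with each stage solved given the solutions of the stages below it; treating them as one joint objective here keeps the sketch short but loses that nesting.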
Related papers
- Efficient Imitation Without Demonstrations via Value-Penalized Auxiliary Control from Examples [6.777249026160499]
This work introduces value-penalized auxiliary control from examples (VPACE), an algorithm that improves exploration in example-based control.
We show that VPACE substantially improves learning efficiency for challenging tasks, while maintaining bounded value estimates.
Preliminary results suggest that VPACE may learn more efficiently than the more common approaches of using full trajectories or true sparse rewards.
arXiv Detail & Related papers (2024-07-03T17:54:11Z) - Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning-unlearning.
We provide space-efficient ticketed learning-unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z) - RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning [53.52699766206808]
We propose Retrieval for In-Context Learning (RetICL), a learnable method for modeling and optimally selecting examples sequentially for in-context learning.
We evaluate RetICL on math word problem solving and scientific question answering tasks and show that it consistently outperforms or matches learnable baselines (a minimal sketch of the sequential-selection idea appears after this list).
arXiv Detail & Related papers (2023-05-23T20:15:56Z) - Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z) - ScatterShot: Interactive In-context Example Curation for Text Transformation [44.9405895390925]
We present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning.
ScatterShot iteratively slices unlabeled data into task-specific patterns and samples informative inputs from underexplored or not-yet-saturated slices in an active learning manner.
In a user study, ScatterShot substantially helps users cover different patterns in the input space and label in-context examples more efficiently.
arXiv Detail & Related papers (2023-02-14T21:13:31Z) - Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes a random sampling strategy for acquiring data labels.
We introduce a new budget-aware few-shot learning problem that aims to learn novel object categories under a limited annotation budget.
arXiv Detail & Related papers (2022-01-07T02:46:35Z) - Teaching an Active Learner with Contrastive Examples [35.926575235046634]
We study the problem of active learning with the added twist that the learner is assisted by a helpful teacher.
We investigate an efficient teaching algorithm that adaptively picks contrastive examples.
We derive strong performance guarantees for our algorithm based on two problem-dependent parameters.
arXiv Detail & Related papers (2021-10-28T05:00:55Z) - Reordering Examples Helps during Priming-based Few-Shot Learning [6.579039107070663]
We show that the proposed approach, PERO, can learn to generalize efficiently using as few as 10 examples.
We demonstrate the effectiveness of the proposed method on the tasks of sentiment classification, natural language inference and fact retrieval.
arXiv Detail & Related papers (2021-06-03T11:02:36Z) - Few-shot Sequence Learning with Transformers [79.87875859408955]
Few-shot algorithms aim to learn new tasks given only a handful of training examples.
In this work we investigate few-shot learning in the setting where the data points are sequences of tokens.
We propose an efficient learning algorithm based on Transformers.
arXiv Detail & Related papers (2020-12-17T12:30:38Z) - Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
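As referenced in the RetICL entry above, the following is a minimal, hypothetical sketch of sequential in-context example selection trained with reinforcement learning; the policy architecture, the toy example pool, and the placeholder reward are all assumptions rather than RetICL's actual design. A small policy picks k examples one at a time, conditioning each pick on the query and the examples chosen so far, and is updated with REINFORCE; in RetICL-style training the reward would come from the downstream model's performance on the query.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy pool of candidate in-context examples (assumption; real features
# would come from an encoder over the example texts).
pool, dim, k = 12, 8, 3
ex_feats = torch.randn(pool, dim)

class SelectorPolicy(nn.Module):
    """Scores candidates given a state summarizing the query and prior picks."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(dim, dim)
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, state, picked_mask):
        logits = self.score(state.expand(pool, -1), ex_feats).squeeze(-1)
        return logits.masked_fill(picked_mask, float("-inf"))  # no repeats

policy = SelectorPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward(chosen, query):
    # Placeholder reward (assumption): in RetICL-style training this would
    # reflect the downstream LLM's performance on the query when prompted
    # with the chosen examples.
    return -F.mse_loss(ex_feats[chosen].mean(dim=0), query)

for step in range(200):
    query = torch.randn(dim)
    state = query.unsqueeze(0)               # state starts from the query
    picked = torch.zeros(pool, dtype=torch.bool)
    log_prob, chosen = 0.0, []
    for _ in range(k):                       # select k examples in sequence
        dist = torch.distributions.Categorical(logits=policy(state, picked))
        a = dist.sample()
        log_prob = log_prob + dist.log_prob(a)
        picked[a] = True
        chosen.append(a)
        state = policy.rnn(ex_feats[a].unsqueeze(0), state)  # fold in the pick
    r = reward(torch.stack(chosen), query)
    loss = -(r.detach() * log_prob)          # REINFORCE policy gradient
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The masking step guarantees an example is never selected twice, and folding each pick into the recurrent state is what makes the selection sequential rather than an independent top-k retrieval.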
This list is automatically generated from the titles and abstracts of the papers in this site.