Nonparametric Teaching for Multiple Learners
- URL: http://arxiv.org/abs/2311.10318v1
- Date: Fri, 17 Nov 2023 04:04:11 GMT
- Title: Nonparametric Teaching for Multiple Learners
- Authors: Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
- Abstract summary: We introduce a novel framework -- Multi-learner Nonparametric Teaching (MINT).
MINT aims to instruct multiple learners, with each learner focusing on learning a scalar-valued target model.
We demonstrate that MINT offers significant teaching speed-up over repeated single-learner teaching.
- Score: 20.75580803325611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of teaching multiple learners simultaneously in the
nonparametric iterative teaching setting, where the teacher iteratively
provides examples to the learner for accelerating the acquisition of a target
concept. This problem is motivated by the gap between current single-learner
teaching setting and the real-world scenario of human instruction where a
teacher typically imparts knowledge to multiple students. Under the new problem
formulation, we introduce a novel framework -- Multi-learner Nonparametric
Teaching (MINT). In MINT, the teacher aims to instruct multiple learners, with
each learner focusing on learning a scalar-valued target model. To achieve
this, we frame the problem as teaching a vector-valued target model and extend
the target model space from a scalar-valued reproducing kernel Hilbert space
used in single-learner scenarios to a vector-valued space. Furthermore, we
demonstrate that MINT offers significant teaching speed-up over repeated
single-learner teaching, particularly when the multiple learners can
communicate with each other. Lastly, we conduct extensive experiments to
validate the practicality and efficiency of MINT.
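The iterative-teaching loop described in the abstract can be illustrated with a minimal sketch: each learner maintains a scalar-valued function in an RKHS as a kernel expansion, and the teacher serves one shared example per round. The Gaussian kernel, the greedy largest-residual selection rule, and all names here are illustrative assumptions, not the paper's exact MINT algorithm.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two scalars."""
    return np.exp(-gamma * (x - y) ** 2)

class KernelLearner:
    """Learner holding f(x) = sum_i alpha_i * k(c_i, x) in an RKHS."""
    def __init__(self, lr=0.5):
        self.centers, self.coefs, self.lr = [], [], lr

    def predict(self, x):
        return sum(a * rbf(c, x) for c, a in zip(self.centers, self.coefs))

    def update(self, x, y):
        # Functional gradient step on the squared loss: append a kernel
        # atom at the taught example, scaled by the (negative) residual.
        residual = self.predict(x) - y
        self.centers.append(x)
        self.coefs.append(-self.lr * residual)

def teach(targets, pool, rounds=30):
    """Teacher serves one shared example per round to all learners."""
    learners = [KernelLearner() for _ in targets]
    for _ in range(rounds):
        # Greedy (assumed) rule: pick the pool example with the largest
        # total squared residual summed over all learners.
        errs = [sum((l.predict(x) - t(x)) ** 2
                    for l, t in zip(learners, targets)) for x in pool]
        x_star = pool[int(np.argmax(errs))]
        for l, t in zip(learners, targets):
            l.update(x_star, t(x_star))
    return learners

pool = np.linspace(-2.0, 2.0, 41)
targets = [np.sin, np.cos]          # one scalar-valued target per learner
learners = teach(targets, pool)
```

Serving one shared example to every learner per round is what makes the multi-learner setting cheaper than repeating single-learner teaching; the paper's vector-valued RKHS formulation makes this sharing precise.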
Related papers
- Heuristic-Free Multi-Teacher Learning [0.6597195879147557]
Teacher2Task is a novel framework for multi-teacher learning that eliminates the need for manual aggregations.
Instead of relying on aggregated labels, the framework transforms the training data, consisting of ground truth labels and annotations from N teachers, into N+1 distinct tasks.
arXiv Detail & Related papers (2024-11-19T18:45:16Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with significant performance gains.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble [56.705249154629264]
Self-training teacher-student frameworks are proposed to improve the robustness of NER models.
In this paper, we propose an adaptive teacher learning comprised of two teacher-student networks.
Fine-grained student ensemble updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise.
arXiv Detail & Related papers (2022-12-13T12:14:09Z)
- One-shot Machine Teaching: Cost Very Few Examples to Converge Faster [45.96956111867065]
We consider a more intelligent teaching paradigm named one-shot machine teaching.
It establishes a tractable mapping from the teaching set to the model parameter.
We prove that this mapping is surjective, which provides an existence guarantee for the optimal teaching set.
arXiv Detail & Related papers (2022-12-13T07:51:17Z)
- Iterative Teacher-Aware Learning [136.05341445369265]
In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency.
We propose a gradient-optimization-based teacher-aware learner who can incorporate the teacher's cooperative intention into the likelihood function.
arXiv Detail & Related papers (2021-10-01T00:27:47Z) - Distribution Matching for Machine Teaching [64.39292542263286]
Machine teaching is an inverse problem of machine learning that aims at steering the student learner towards its target hypothesis.
Previous studies on machine teaching focused on balancing the teaching risk and cost to find those best teaching examples.
This paper presents a distribution matching-based machine teaching strategy.
arXiv Detail & Related papers (2021-05-06T09:32:57Z) - Adaptive Multi-Teacher Multi-level Knowledge Distillation [11.722728148523366]
We propose a novel adaptive multi-teacher multi-level knowledge distillation learning framework (AMTML-KD).
It consists of two novel insights: (i) associating each teacher with a latent representation to adaptively learn instance-level teacher importance weights.
As such, a student model can learn multi-level knowledge from multiple teachers through AMTML-KD.
arXiv Detail & Related papers (2021-03-06T08:18:16Z) - Teaching to Learn: Sequential Teaching of Agents with Inner States [20.556373950863247]
We introduce a multi-agent formulation in which learners' inner state may change with the teaching interaction.
In order to teach such learners, we propose an optimal control approach that takes the future performance of the learner after teaching into account.
arXiv Detail & Related papers (2020-09-14T07:03:15Z) - The Sample Complexity of Teaching-by-Reinforcement on Q-Learning [40.37954633873304]
We study the sample complexity of teaching, termed the "teaching dimension" (TDim) in the literature, for the teaching-by-reinforcement paradigm.
In this paper, we focus on a specific family of reinforcement learning algorithms, Q-learning, and characterize the TDim under different teachers with varying control power over the environment.
Our TDim results provide the minimum number of samples needed for reinforcement learning, and we discuss their connections to standard PAC-style RL sample complexity and teaching-by-demonstration sample complexity results.
arXiv Detail & Related papers (2020-06-16T17:06:04Z) - Neural Multi-Task Learning for Teacher Question Detection in Online
Classrooms [50.19997675066203]
We build an end-to-end neural framework that automatically detects questions from teachers' audio recordings.
By incorporating multi-task learning techniques, we are able to strengthen the understanding of semantic relations among different types of questions.
arXiv Detail & Related papers (2020-05-16T02:17:04Z)
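The multi-task setup in the question-detection entry above can be sketched as a shared encoder feeding several task-specific heads, trained with one gradient step on the summed loss. The architecture, dimensions, and single-example training loop below are illustrative assumptions, not the cited paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, n_tasks = 8, 4, 3

# Shared linear encoder and one linear head per task (assumed shapes).
W_shared = rng.normal(scale=0.1, size=(d_hid, d_in))
heads = rng.normal(scale=0.1, size=(n_tasks, d_hid))

def joint_step(x, y, lr=0.1):
    """One gradient step on the summed squared loss over all tasks."""
    global W_shared, heads
    h = W_shared @ x                 # shared representation
    err = heads @ h - y              # per-task residuals
    # Gradients of L = 0.5 * sum_t (heads_t . h - y_t)^2
    grad_heads = np.outer(err, h)
    grad_shared = np.outer(heads.T @ err, x)
    heads = heads - lr * grad_heads
    W_shared = W_shared - lr * grad_shared
    return float(0.5 * err @ err)

x = rng.normal(size=d_in)            # one input example
y = rng.normal(size=n_tasks)         # one label per task
losses = [joint_step(x, y) for _ in range(20)]
```

Because every task's gradient flows through `W_shared`, the encoder is pushed toward representations useful for all tasks at once, which is the usual mechanism by which multi-task learning "strengthens the understanding of semantic relations" across related prediction problems.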
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.