Pedagogical Demonstrations and Pragmatic Learning in Artificial
Tutor-Learner Interactions
- URL: http://arxiv.org/abs/2203.00111v2
- Date: Wed, 27 Sep 2023 07:55:54 GMT
- Title: Pedagogical Demonstrations and Pragmatic Learning in Artificial
Tutor-Learner Interactions
- Authors: Hugo Caselles-Dupré, Mohamed Chetouani, Olivier Sigaud
- Abstract summary: In this paper, we investigate the implementation of such mechanisms in a tutor-learner setup where both participants are artificial agents in an environment with multiple goals.
Using pedagogy from the tutor and pragmatism from the learner, we show substantial improvements over standard learning from demonstrations.
- Score: 8.715518445626826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When demonstrating a task, human tutors pedagogically modify their behavior
by either "showing" the task rather than just "doing" it (exaggerating
relevant parts of the demonstration) or by giving demonstrations that best
disambiguate the communicated goal. Analogously, human learners pragmatically
infer the communicative intent of the tutor: they interpret what the tutor is
trying to teach them and deduce relevant information for learning. Without such
mechanisms, traditional Learning from Demonstration (LfD) algorithms will
consider such demonstrations as sub-optimal. In this paper, we investigate the
implementation of such mechanisms in a tutor-learner setup where both
participants are artificial agents in an environment with multiple goals. Using
pedagogy from the tutor and pragmatism from the learner, we show substantial
improvements over standard learning from demonstrations.
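The "demonstrations that best disambiguate the communicated goal" idea can be made concrete with a short sketch: a pedagogical tutor scores each feasible demonstration by the posterior an observer with a uniform prior over goals would place on the intended goal, and picks the argmax. The goal names, demonstrations, and likelihood table below are illustrative assumptions, not the paper's actual environment.

```python
# Toy multi-goal setup: each demonstration achieves some subset of goals.
# P(demo | goal) is nonzero only for goals the demo achieves (assumed values).
GOALS = ["red", "blue", "green"]
DEMOS = {
    "grab_red_only":     {"red": 1.0, "blue": 0.0, "green": 0.0},
    "grab_red_and_blue": {"red": 1.0, "blue": 1.0, "green": 0.0},
}

def posterior(demo):
    """P(goal | demo) for an observer with a uniform prior over goals."""
    likes = DEMOS[demo]
    z = sum(likes.values())
    return {g: v / z for g, v in likes.items()}

def pedagogical_choice(goal):
    """Among demos that achieve the goal, pick the most disambiguating one."""
    feasible = [d for d, likes in DEMOS.items() if likes[goal] > 0]
    return max(feasible, key=lambda d: posterior(d)[goal])

# Both demos achieve "red", but only the first makes the goal unambiguous,
# so a pedagogical tutor prefers it over a merely successful demonstration.
print(pedagogical_choice("red"))  # grab_red_only
```

A purely "doing" tutor would treat the two demonstrations as interchangeable; the disambiguation score is what separates showing from doing in this toy setting.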
Related papers
- Representational Alignment Supports Effective Machine Teaching [81.19197059407121]
We integrate insights from machine teaching and pragmatic communication with the literature on representational alignment.
We design a supervised learning environment that disentangles representational alignment from teacher accuracy.
arXiv Detail & Related papers (2024-06-06T17:48:24Z) - Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z) - Revealing Networks: Understanding Effective Teacher Practices in
AI-Supported Classrooms using Transmodal Ordered Network Analysis [0.9187505256430948]
The present study uses transmodal ordered network analysis to understand effective teacher practices in relationship to traditional metrics of in-system learning in a mathematics classroom working with AI tutors.
Comparing teacher practices by student learning rates, we find that students with low learning rates exhibited more hint use after monitoring.
Students with low learning rates showed learning behavior similar to their high learning rate peers, achieving repeated correct attempts in the tutor.
arXiv Detail & Related papers (2023-12-17T21:50:02Z) - A Survey of Demonstration Learning [0.0]
Demonstration Learning is a paradigm in which an agent learns to perform a task by imitating the behavior of an expert shown in demonstrations.
It is gaining significant traction due to its tremendous potential for learning complex behaviors from demonstrations.
Because it does not require interaction with the environment, demonstration learning could enable automation in a wide range of real-world applications such as robotics and healthcare.
arXiv Detail & Related papers (2023-03-20T15:22:10Z) - Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z) - Overcoming Referential Ambiguity in Language-Guided Goal-Conditioned
Reinforcement Learning [8.715518445626826]
The learner can misunderstand the teacher's intentions if the instruction ambiguously refers to features of the object.
We study how two concepts derived from cognitive sciences can help resolve those referential ambiguities.
We apply those ideas to a teacher/learner setup with two artificial agents on a simulated robotic task.
arXiv Detail & Related papers (2022-09-26T15:07:59Z) - Homomorphism Autoencoder -- Learning Group Structured Representations from Observed Transitions [51.71245032890532]
We propose methods enabling an agent acting upon the world to learn internal representations of sensory information consistent with actions that modify it.
In contrast to existing work, our approach does not require prior knowledge of the group and does not restrict the set of actions the agent can perform.
arXiv Detail & Related papers (2022-07-25T11:22:48Z) - Pragmatically Learning from Pedagogical Demonstrations in Multi-Goal
Environments [8.715518445626826]
We implement pedagogy and pragmatism mechanisms by leveraging a Bayesian model of Goal Inference from demonstrations (BGI).
We show that combining BGI-agents (a pedagogical teacher and a pragmatic learner) results in faster learning and reduced goal ambiguity over standard learning from demonstrations.
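The combination of a pedagogical teacher and a pragmatic learner can be illustrated with a minimal rational-speech-act-style sketch of Bayesian goal inference: a literal learner inverts the tutor's likelihood, a pedagogical tutor then selects demonstrations in proportion to how well that literal learner would recover each goal, and a pragmatic learner inverts the pedagogical tutor. The goals, demonstrations, and likelihood numbers are toy assumptions, not the paper's model.

```python
GOALS = ["A", "B"]
DEMOS = ["d1", "d2"]

# Assumed likelihood P(demo | goal) for a literal (non-pedagogical) tutor.
LITERAL = {("d1", "A"): 0.5, ("d2", "A"): 0.5,
           ("d1", "B"): 1.0, ("d2", "B"): 0.0}

def normalize(weights):
    z = sum(weights.values())
    return {k: v / z for k, v in weights.items()}

def learner_posterior(tutor):
    """P(goal | demo) under a uniform goal prior, for a given tutor model."""
    return {d: normalize({g: tutor[(d, g)] for g in GOALS}) for d in DEMOS}

# Level 0: a literal learner inverts the literal tutor.
literal_post = learner_posterior(LITERAL)

# Level 1: a pedagogical tutor favors demos in proportion to how well the
# literal learner would recover the intended goal: P_ped(d | g) ~ P_lit(g | d).
ped_tutor = {}
for g in GOALS:
    col = normalize({d: literal_post[d][g] for d in DEMOS})
    for d in DEMOS:
        ped_tutor[(d, g)] = col[d]

# Level 2: a pragmatic learner inverts the pedagogical tutor instead.
pragmatic_post = learner_posterior(ped_tutor)

# Seeing d1, the pragmatic learner is more confident in goal B than the
# literal learner (0.8 vs. 0.667): reduced goal ambiguity from the same data.
print(round(literal_post["d1"]["B"], 3), round(pragmatic_post["d1"]["B"], 3))
```

Each extra level of recursion sharpens the posterior because the learner assumes the demonstration was chosen to be informative rather than merely successful.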
arXiv Detail & Related papers (2022-06-09T14:51:25Z) - Rethinking the Role of Demonstrations: What Makes In-Context Learning
Work? [112.72413411257662]
Large language models (LMs) are able to in-context learn by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs.
We show that ground truth demonstrations are in fact not required -- randomly replacing labels in the demonstrations barely hurts performance.
We find that other aspects of the demonstrations are the key drivers of end task performance.
arXiv Detail & Related papers (2022-02-25T17:25:19Z) - Interactive Imitation Learning in State-Space [5.672132510411464]
We propose a novel Interactive Learning technique that uses human feedback in state-space to train and improve agent behavior.
Our method, titled Teaching Imitative Policies in State-space (TIPS), enables providing guidance to the agent in terms of changing its state.
arXiv Detail & Related papers (2020-08-02T17:23:54Z) - Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanation as interfaces for machine teaching--supporting trust calibration and enabling rich forms of teaching feedback, and potential drawbacks--anchoring effect with the model judgment and cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.