Opportunities and Challenges in Neural Dialog Tutoring
- URL: http://arxiv.org/abs/2301.09919v2
- Date: Mon, 27 Mar 2023 19:13:35 GMT
- Title: Opportunities and Challenges in Neural Dialog Tutoring
- Authors: Jakub Macina, Nico Daheim, Lingzhi Wang, Tanmay Sinha, Manu Kapur,
Iryna Gurevych, Mrinmaya Sachan
- Abstract summary: We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring.
- Score: 54.07241332881601
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing dialog tutors has been challenging as it involves modeling the
diverse and complex pedagogical strategies employed by human tutors. Although
there have been significant recent advances in neural conversational systems
using large language models (LLMs) and growth in available dialog corpora,
dialog tutoring has largely remained unaffected by these advances. In this
paper, we rigorously analyze various generative language models on two dialog
tutoring datasets for language learning using automatic and human evaluations
to understand the new opportunities brought by these advances as well as the
challenges we must overcome to build models that would be usable in real
educational settings. We find that although current approaches can model
tutoring in constrained learning scenarios when the number of concepts to be
taught and possible teacher strategies are small, they perform poorly in less
constrained scenarios. Our human quality evaluation shows that both models and
ground-truth annotations exhibit low performance in terms of equitable
tutoring, which measures learning opportunities for students and how engaging
the dialog is. To understand the behavior of our models in a real tutoring
setting, we conduct a user study using expert annotators and find a
substantial number of model reasoning errors, occurring in 45% of conversations.
Finally, we connect our findings to outline future work.
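As a hedged illustration of the automatic evaluation mentioned above, the sketch below scores model-generated tutor responses against ground-truth turns with corpus-level BLEU via sacrebleu; the metric choice, data format, and example strings are assumptions rather than the paper's exact setup.

```python
# Minimal sketch of automatic evaluation of generated tutor responses.
# Assumes sacrebleu is installed (pip install sacrebleu); the paper's
# actual metric suite and data may differ.
import sacrebleu

# Hypothetical model outputs and ground-truth tutor turns.
hypotheses = [
    "Good try! Remember that 'went' is the past tense of 'go'.",
    "Can you explain why you chose that answer?",
]
references = [
    "Almost! 'Went' is the past tense of 'go'. Try again.",
    "Why did you pick that answer? Walk me through it.",
]

# Corpus-level BLEU; sacrebleu expects a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```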
Related papers
- Exploring Knowledge Tracing in Tutor-Student Dialogues [53.52699766206808]
We present a first attempt at performing knowledge tracing (KT) in tutor-student dialogues.
We propose methods to identify the knowledge components/skills involved in each dialogue turn.
We then apply a range of KT methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
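As a hedged illustration of applying a KT method to labeled dialogue turns, here is classic Bayesian Knowledge Tracing run over a toy dialogue; the skill labels, parameter values, and the choice of BKT are assumptions, and the paper's methods may be more elaborate.

```python
# Track student knowledge over a dialogue with standard Bayesian
# Knowledge Tracing (BKT). Turn labels and parameters are hypothetical.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One BKT step: posterior given the observation, then learning transition."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

# Dialogue turns labeled with a knowledge component and correctness.
turns = [("past_tense", True), ("past_tense", False), ("past_tense", True)]

p_know = {"past_tense": 0.3}  # prior mastery estimate
for skill, correct in turns:
    p_know[skill] = bkt_update(p_know[skill], correct)
    print(f"{skill}: P(mastered) = {p_know[skill]:.3f}")
```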
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- Large Language Model based Situational Dialogues for Second Language Learning [7.450328495455734]
In second language learning, scenario-based conversation practice is important for language learners to achieve fluency in speaking.
To address this need, we propose situational dialogue models with which students can engage in conversational practice.
Our situational dialogue models are fine-tuned from large language models (LLMs), with the aim of combining the engaging nature of open-ended conversation with the focused practice of scenario-based tasks.
arXiv Detail & Related papers (2024-03-29T06:43:55Z)
- FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue [20.79359173822053]
We propose a novel dialogue pre-training model, FutureTOD, which distills future knowledge into the representation of the preceding dialogue context.
Our intuition is that a good dialogue representation both learns local context information and predicts future information.
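A minimal sketch of one way to realize this, assuming a student encoder that sees only the dialogue context, a teacher encoder that additionally sees the future turns, and an MSE loss matching their representations; the encoders, loss, and stop-gradient below are assumptions, not the paper's exact architecture.

```python
# Distill future knowledge into a context-only representation: the teacher
# encodes context plus future turns, the student encodes context alone and
# is trained to match the teacher. Sizes and modules are assumptions.
import torch
import torch.nn as nn

hidden = 256
student = nn.GRU(input_size=128, hidden_size=hidden, batch_first=True)
teacher = nn.GRU(input_size=128, hidden_size=hidden, batch_first=True)

context = torch.randn(4, 10, 128)   # (batch, context turns, features)
future = torch.randn(4, 3, 128)     # future turns, visible to teacher only

_, h_student = student(context)
with torch.no_grad():               # teacher provides a fixed target
    _, h_teacher = teacher(torch.cat([context, future], dim=1))

loss = nn.functional.mse_loss(h_student, h_teacher)
loss.backward()                     # gradients flow only into the student
```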
arXiv Detail & Related papers (2023-06-17T10:40:07Z)
- MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
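A hedged sketch of this collection setup: an LLM is prompted to role-play a student committing a given error while a human teacher supplies the tutoring turns. The prompt wording and the llm_generate stand-in are hypothetical, not the paper's exact prompts or API.

```python
# An LLM role-plays a student defending a common error; a human teacher
# types the tutoring turns. `llm_generate` is a hypothetical stand-in for
# whatever text-generation API is used.

STUDENT_PROMPT = """You are a student solving: {problem}
You believe the (incorrect) solution is: {wrong_solution}
Defend your reasoning until the teacher's questions convince you otherwise.
Reply with a single student turn."""

def student_turn(problem, wrong_solution, history, llm_generate):
    prompt = STUDENT_PROMPT.format(problem=problem, wrong_solution=wrong_solution)
    transcript = "\n".join(f"{role}: {text}" for role, text in history)
    return llm_generate(prompt + "\n\nDialogue so far:\n" + transcript)

# Toy usage with a dummy generator standing in for a real LLM call.
history = [("teacher", "How did you add the two fractions?")]
print(student_turn("What is 1/2 + 1/3?", "2/5", history, lambda p: "[student reply]"))
```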
arXiv Detail & Related papers (2023-05-23T21:44:56Z)
- Few-Shot Structured Policy Learning for Multi-Domain and Multi-Task Dialogues [0.716879432974126]
Graph neural networks (GNNs) show remarkable superiority, reaching a success rate above 80% with only 50 dialogues when learning from simulated experts.
We suggest concentrating future research efforts on bridging the gap between human data, simulators, and automatic evaluators in dialogue frameworks.
arXiv Detail & Related papers (2023-02-22T08:18:49Z)
- Stabilized In-Context Learning with Pre-trained Language Models for Few Shot Dialogue State Tracking [57.92608483099916]
Large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks.
For more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial.
We introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query.
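One simple instantiation of such a saliency filter (not necessarily the paper's model): score each dialogue turn by token overlap with the current query and keep only the most salient turns within a token budget, freeing room for more in-context exemplars.

```python
# Overlap-based saliency compression of a dialogue history. The scoring
# function and budget are assumptions for illustration.

def compress_dialogue(turns, query, budget=64):
    q_tokens = set(query.lower().split())
    scored = [(len(q_tokens & set(t.lower().split())), i, t)
              for i, t in enumerate(turns)]
    kept, used = [], 0
    for score, i, turn in sorted(scored, reverse=True):
        n = len(turn.split())
        if used + n <= budget:
            kept.append((i, turn))
            used += n
    # Restore the original turn order for the prompt.
    return [turn for _, turn in sorted(kept)]

turns = ["Hi!", "I want to book a hotel in Cambridge.", "Sure, which dates?"]
print(compress_dialogue(turns, query="hotel booking dates", budget=12))
```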
arXiv Detail & Related papers (2023-02-12T15:05:10Z)
- Improving mathematical questioning in teacher training [1.794107419334178]
High-fidelity, AI-based simulated classroom systems enable teachers to rehearse effective teaching strategies.
This paper builds a text-based interactive conversational agent to help teachers practice mathematical questioning skills.
arXiv Detail & Related papers (2021-12-02T05:33:03Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
We review the previous methods from the perspective of dialogue modeling.
We discuss three typical patterns of dialogue modeling that are widely used in dialogue comprehension tasks.
arXiv Detail & Related papers (2021-03-04T15:50:17Z)
- Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation [40.49175137775255]
Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses.
We propose an adaptive multi-curricula learning framework to schedule a committee of curricula, each organizing the training data from easy to complex.
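A minimal sketch of one way to schedule such a committee, using an epsilon-greedy bandit over curricula driven by recent validation improvement; the scheduler and its hyperparameters are assumptions, not the paper's exact algorithm.

```python
# Epsilon-greedy scheduling over a committee of curricula: the curriculum
# whose recent batches most improved validation reward is sampled more often.
import random

class CurriculumScheduler:
    def __init__(self, curricula, epsilon=0.1):
        self.curricula = curricula              # name -> iterator of batches
        self.reward = {name: 0.0 for name in curricula}
        self.epsilon = epsilon

    def next_batch(self):
        if random.random() < self.epsilon:      # explore
            name = random.choice(list(self.curricula))
        else:                                   # exploit the best curriculum
            name = max(self.reward, key=self.reward.get)
        return name, next(self.curricula[name])

    def update(self, name, delta_valid_reward, momentum=0.9):
        # Exponential moving average of observed learning progress.
        self.reward[name] = (momentum * self.reward[name]
                             + (1 - momentum) * delta_valid_reward)

sched = CurriculumScheduler({"easy": iter([1, 2]), "hard": iter([3, 4])})
name, batch = sched.next_batch()                # train on `batch`, then:
sched.update(name, delta_valid_reward=0.02)
```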
arXiv Detail & Related papers (2020-03-02T03:09:28Z)
- Low-Resource Knowledge-Grounded Dialogue Generation [74.09352261943913]
We consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available.
We devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model.
With only 1/8 of the training data, our model achieves state-of-the-art performance and generalizes well on out-of-domain knowledge.
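A hedged sketch of the disentangling idea: keep the decoder's language-model parameters (trainable on abundant plain dialogues) separate from its knowledge-attention parameters, so only the latter require the scarce knowledge-grounded examples. Module choices and sizes below are assumptions, not the paper's architecture.

```python
# Decoder with two parameter groups: a language-model component and a
# knowledge-attention component that can be trained separately.
import torch
import torch.nn as nn

class DisentangledDecoder(nn.Module):
    def __init__(self, d_model=256, vocab=10000):
        super().__init__()
        # Trainable on abundant plain dialogues.
        self.lm = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.out = nn.Linear(d_model, vocab)
        # Trainable on the few knowledge-grounded dialogues.
        self.knowledge_attn = nn.MultiheadAttention(d_model, num_heads=4,
                                                    batch_first=True)

    def forward(self, tgt, context, knowledge):
        h = self.lm(tgt, context)
        k, _ = self.knowledge_attn(h, knowledge, knowledge)
        return self.out(h + k)

model = DisentangledDecoder()
tgt, ctx, kn = torch.randn(2, 5, 256), torch.randn(2, 8, 256), torch.randn(2, 6, 256)
logits = model(tgt, ctx, kn)  # (2, 5, vocab)
# On the small knowledge-grounded set, optimize only the knowledge parameters:
opt = torch.optim.Adam(model.knowledge_attn.parameters(), lr=1e-4)
```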
arXiv Detail & Related papers (2020-02-24T16:20:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.