What Would a Teacher Do? Predicting Future Talk Moves
- URL: http://arxiv.org/abs/2106.05249v1
- Date: Wed, 9 Jun 2021 17:45:16 GMT
- Title: What Would a Teacher Do? Predicting Future Talk Moves
- Authors: Ananya Ganesh, Martha Palmer, and Katharina Kann
- Abstract summary: We introduce a new task, called future talk move prediction (FTMP).
It consists of predicting the next talk move given a conversation history with its corresponding talk moves.
We introduce a neural network model for this task, which outperforms multiple baselines by a large margin.
- Score: 19.952531500315757
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in natural language processing (NLP) have the potential to
transform how classroom learning takes place. Combined with the increasing
integration of technology in today's classrooms, NLP systems leveraging
question answering and dialog processing techniques can serve as private tutors
or participants in classroom discussions to increase student engagement and
learning. To progress towards this goal, we use the classroom discourse
framework of academically productive talk (APT) to learn strategies that make
for the best learning experience. In this paper, we introduce a new task,
called future talk move prediction (FTMP): it consists of predicting the next
talk move -- an utterance strategy from APT -- given a conversation history
with its corresponding talk moves. We further introduce a neural network model
for this task, which outperforms multiple baselines by a large margin. Finally,
we compare our model's performance on FTMP to human performance and show
several similarities between the two.
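To make the task shape concrete, the sketch below frames FTMP as next-label classification over a talk-move inventory. This is an illustration under assumptions, not the paper's architecture: the inventory size, the embedding dimensions, and the GRU-over-turns encoder are all placeholders.

```python
# Minimal FTMP sketch (illustrative; the paper's actual architecture,
# talk-move inventory, and feature choices may differ).
# Each past turn = utterance embedding + embedding of its talk-move label;
# a GRU summarizes the history and a linear head scores the next move.
import torch
import torch.nn as nn

NUM_TALK_MOVES = 7   # hypothetical size of the APT talk-move inventory
TEXT_DIM = 128       # hypothetical utterance-embedding size

class FTMPClassifier(nn.Module):
    def __init__(self, num_moves=NUM_TALK_MOVES, text_dim=TEXT_DIM, hidden=256):
        super().__init__()
        self.move_emb = nn.Embedding(num_moves, 32)
        self.encoder = nn.GRU(text_dim + 32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_moves)

    def forward(self, utterance_embs, move_ids):
        # utterance_embs: (batch, turns, TEXT_DIM) precomputed text embeddings
        # move_ids:       (batch, turns) talk-move label id for each past turn
        x = torch.cat([utterance_embs, self.move_emb(move_ids)], dim=-1)
        _, h = self.encoder(x)            # h: (1, batch, hidden) final state
        return self.head(h.squeeze(0))    # logits over the *next* talk move

# Toy usage: a batch of 2 dialogues with 5 past turns each.
model = FTMPClassifier()
logits = model(torch.randn(2, 5, TEXT_DIM),
               torch.randint(0, NUM_TALK_MOVES, (2, 5)))
print(logits.shape)  # torch.Size([2, 7])
```

In practice the utterance embeddings would come from a pretrained sentence encoder, and training would minimize cross-entropy against the gold next talk move.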
Related papers
- SpeechPrompt: Prompting Speech Language Models for Speech Processing Tasks [94.10497337235083]
We are the first to explore the potential of prompting speech LMs in the domain of speech processing.
We reformulate speech processing tasks into speech-to-unit generation tasks.
We show that the prompting method can achieve competitive performance compared to the strong fine-tuning method.
arXiv Detail & Related papers (2024-08-23T13:00:10Z)
- Towards Understanding Counseling Conversations: Domain Knowledge and Large Language Models [22.588557390720236]
This paper proposes a systematic approach to examine the efficacy of domain knowledge and large language models (LLMs) in better representing counseling conversations.
We empirically show that state-of-the-art language models such as Transformer-based models and GPT models fail to predict the conversation outcome.
arXiv Detail & Related papers (2024-02-22T01:02:37Z)
- Large Language Model-Driven Classroom Flipping: Empowering Student-Centric Peer Questioning with Flipped Interaction [3.1473798197405953]
This paper investigates a pedagogical approach to classroom flipping based on flipped interaction with large language models.
Flipped interaction involves using language models to prioritize generating questions instead of answers to prompts.
We propose a workflow to integrate prompt engineering with clicker and JiTT quizzes by a poll-prompt-quiz routine and a quiz-prompt-discuss routine.
arXiv Detail & Related papers (2023-11-14T15:48:19Z)
- UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z)
- Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension [81.47133615169203]
We propose compositional learning for holistic interaction across utterances, going beyond the sequential contextualization offered by pre-trained language models (PrLMs).
We employ domain-adaptive training strategies to help the model adapt to the dialogue domains.
Experimental results show that our method substantially boosts the strong PrLM baselines on four public benchmark datasets.
arXiv Detail & Related papers (2023-01-10T13:18:25Z)
- An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks [112.1942546460814]
We report the first exploration of the prompt tuning paradigm for speech processing tasks based on the Generative Spoken Language Model (GSLM).
Experiment results show that the prompt tuning technique achieves competitive performance in speech classification tasks with fewer trainable parameters than fine-tuning specialized downstream models (a generic soft-prompt-tuning sketch appears after this list).
arXiv Detail & Related papers (2022-03-31T03:26:55Z)
- Few-Shot Bot: Prompt-Based Learning for Dialogue Systems [58.27337673451943]
Learning to converse using only a few examples is a great challenge in conversational AI.
The current best conversational models are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL).
We propose prompt-based few-shot learning, which does not require gradient-based fine-tuning but instead uses a few examples as the only source of learning (a minimal prompt-construction sketch follows this list).
arXiv Detail & Related papers (2021-10-15T14:36:45Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
Training machines to understand natural language and interact with humans is an elusive and essential task of artificial intelligence.
This paper reviews the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.
In addition, we categorize dialogue-related pre-training techniques which are employed to enhance PrLMs in dialogue scenarios.
arXiv Detail & Related papers (2021-10-11T03:52:37Z)
- Using Transformers to Provide Teachers with Personalized Feedback on their Classroom Discourse: The TalkMoves Application [14.851607363136978]
We describe the TalkMoves application's cloud-based infrastructure for managing and processing classroom recordings.
We discuss several technical challenges that need to be addressed when working with real-world speech and language data from noisy K-12 classrooms.
arXiv Detail & Related papers (2021-04-29T20:45:02Z)
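For the prompt-tuning entry above (GSLM), here is a generic soft-prompt-tuning sketch: trainable prompt vectors are prepended to the inputs of a frozen backbone, so only the prompt and a small head receive gradients. The backbone, dimensions, and pooling below are assumptions, not the GSLM codebase.

```python
# Generic soft prompt tuning sketch (illustrative; not the GSLM code).
# Only the prompt vectors and the classification head are trainable.
import torch
import torch.nn as nn

class SoftPromptClassifier(nn.Module):
    def __init__(self, backbone, embed_dim, prompt_len=10, num_classes=5):
        super().__init__()
        self.backbone = backbone                 # frozen pretrained encoder
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.prompt = nn.Parameter(0.02 * torch.randn(prompt_len, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, input_embs):
        # input_embs: (batch, seq, embed_dim) pre-embedded input units
        prompts = self.prompt.unsqueeze(0).expand(input_embs.size(0), -1, -1)
        x = torch.cat([prompts, input_embs], dim=1)   # prepend the prompt
        h = self.backbone(x)                          # (batch, prompt+seq, dim)
        return self.head(h.mean(dim=1))               # mean-pool, then classify

# Toy usage with a small Transformer encoder standing in for the LM.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2)
model = SoftPromptClassifier(backbone, embed_dim=64)
print(model(torch.randn(2, 20, 64)).shape)  # torch.Size([2, 5])
```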
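And for the Few-Shot Bot entry, a minimal sketch of prompt-based few-shot learning: a handful of example exchanges are concatenated into a text prompt and a language model completes the next turn, with no gradient updates. The example dialogues and the "User:/System:" format are invented for illustration, not the paper's exact prompt.

```python
# Minimal prompt-based few-shot dialogue sketch (illustrative; not the
# paper's exact prompt format). The example exchanges below are made up.
FEW_SHOT_EXAMPLES = [
    ("I failed my math test today.",
     "I'm sorry to hear that. Which part was hardest for you?"),
    ("I finally finished my project!",
     "Congratulations! What are you most proud of?"),
]

def build_prompt(user_turn: str) -> str:
    """Concatenate example exchanges, then leave the system turn open."""
    blocks = [f"User: {u}\nSystem: {s}" for u, s in FEW_SHOT_EXAMPLES]
    blocks.append(f"User: {user_turn}\nSystem:")
    return "\n\n".join(blocks)

# The returned string would be sent to a language model's completion
# endpoint; the model's continuation is taken as the system's reply.
print(build_prompt("My teacher asked me to explain my reasoning."))
```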