"You might think about slightly revising the title": identifying hedges
in peer-tutoring interactions
- URL: http://arxiv.org/abs/2306.14911v1
- Date: Sun, 18 Jun 2023 12:47:54 GMT
- Title: "You might think about slightly revising the title": identifying hedges
in peer-tutoring interactions
- Authors: Yann Raphalen, Chloé Clavel, Justine Cassell
- Abstract summary: Hedges play an important role in the management of conversational interaction.
We use a multimodal peer-tutoring dataset to construct a computational framework for identifying hedges.
We employ a model explainability tool to explore the features that characterize hedges in peer-tutoring conversations.
- Score: 1.0466434989449724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hedges play an important role in the management of conversational
interaction. In peer tutoring, they are notably used by tutors in dyads (pairs
of interlocutors) experiencing low rapport to tone down the impact of
instructions and negative feedback. Pursuing the objective of building a
tutoring agent that manages rapport with students in order to improve learning,
we used a multimodal peer-tutoring dataset to construct a computational
framework for identifying hedges. We compared approaches relying on pre-trained
resources with others that integrate insights from the social science
literature. Our best performance involved a hybrid approach that outperforms
the existing baseline while being easier to interpret. We employ a model
explainability tool to explore the features that characterize hedges in
peer-tutoring conversations, identifying some novel features as well as the
benefits of such a hybrid model approach.
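As a rough illustration of the kind of hybrid approach the abstract describes, the sketch below combines pre-trained sentence embeddings with a handful of interpretable, lexicon-style features and trains a simple classifier. The hedge cue list, feature set, and model choices are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch (not the paper's actual pipeline): a hybrid hedge
# classifier that concatenates pre-trained sentence embeddings with a few
# hand-crafted, lexicon-style features inspired by the hedging literature.
# The cue list and features below are invented for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

HEDGE_CUES = {"might", "maybe", "perhaps", "sort of", "kind of", "i think", "a little"}

def lexicon_features(utterance: str) -> np.ndarray:
    """Simple interpretable features: hedge-cue count and utterance length."""
    text = utterance.lower()
    cue_count = sum(text.count(cue) for cue in HEDGE_CUES)
    return np.array([cue_count, len(text.split())], dtype=float)

def build_features(utterances, encoder):
    emb = encoder.encode(utterances)                           # pre-trained resource
    lex = np.stack([lexicon_features(u) for u in utterances])  # hand-crafted part
    return np.hstack([emb, lex])

# Toy usage with made-up labels (1 = hedged, 0 = not hedged).
utterances = [
    "You might think about slightly revising the title.",
    "Change the title.",
    "Maybe try factoring the equation first?",
    "Factor the equation first.",
]
labels = np.array([1, 0, 1, 0])

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = build_features(utterances, encoder)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# The last two coefficients belong to the interpretable lexicon features;
# inspecting them is only a crude stand-in for the dedicated model
# explainability tool the paper mentions.
print(clf.coef_[0][-2:])
```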
Related papers
- Prosody as a Teaching Signal for Agent Learning: Exploratory Studies and Algorithmic Implications [2.8243597585456017]
This paper advocates for the integration of prosody as a teaching signal to enhance agent learning from human teachers.
Our findings suggest that prosodic features, when coupled with explicit feedback, can enhance reinforcement learning outcomes.
arXiv Detail & Related papers (2024-10-31T01:51:23Z) - Revisiting Self-supervised Learning of Speech Representation from a
Mutual Information Perspective [68.20531518525273]
We take a closer look into existing self-supervised methods of speech from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z) - When to generate hedges in peer-tutoring interactions [1.0466434989449724]
The study uses a naturalistic face-to-face dataset annotated for natural language turns, conversational strategies, tutoring strategies, and nonverbal behaviours.
Results show that embedding layers, which capture the semantic information of the previous turns, significantly improve the model's performance.
We discover that the eye gaze of both the tutor and the tutee has a significant impact on hedge prediction.
arXiv Detail & Related papers (2023-07-28T14:29:19Z) - Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
- Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both model outputs and the ground-truth annotations perform poorly in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z) - Investigating Fairness Disparities in Peer Review: A Language Model
Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Semantic Interactive Learning for Text Classification: A Constructive
Approach for Contextual Interactions [0.0]
We propose a novel interaction framework called Semantic Interactive Learning for the text domain.
We frame the problem of incorporating constructive and contextual feedback into the learner as a task to find an architecture that enables more semantic alignment between humans and machines.
We introduce a technique called SemanticPush that is effective for translating humans' conceptual corrections into non-extrapolating training examples.
arXiv Detail & Related papers (2022-09-07T08:13:45Z) - Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue [22.103162555263143]
We introduce contrastive learning and multi-task learning to jointly model the problem.
Our proposed model achieves state-of-the-art performance on several public datasets.
arXiv Detail & Related papers (2022-03-22T10:13:27Z) - Re-entry Prediction for Online Conversations via Self-Supervised
Learning [25.488783376789026]
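For the utterance-rewriting entry above, which combines contrastive and multi-task learning, here is a generic in-batch contrastive (InfoNCE-style) loss as one plausible building block. It is a hedged sketch, not the paper's exact objective, and the encoder producing the representations is assumed.

```python
# Generic in-batch contrastive (InfoNCE-style) loss: pull each rewritten
# utterance's representation toward its reference rewrite and push it away
# from the other references in the batch.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(rewrite_reprs: torch.Tensor,
                              reference_reprs: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    # rewrite_reprs, reference_reprs: (batch, dim); row i forms a positive pair
    a = F.normalize(rewrite_reprs, dim=-1)
    b = F.normalize(reference_reprs, dim=-1)
    logits = a @ b.t() / temperature       # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))      # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

# Toy usage: in practice the representations would come from a shared encoder,
# and this loss would be combined with the main rewriting objective
# (the multi-task part).
loss = in_batch_contrastive_loss(torch.randn(16, 256), torch.randn(16, 256))
print(float(loss))
```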
- Re-entry Prediction for Online Conversations via Self-Supervised Learning [25.488783376789026]
We propose three auxiliary tasks, namely, Spread Pattern, Repeated Target user, and Turn Authorship, as the self-supervised signals for re-entry prediction.
Experimental results on two datasets newly collected from Twitter and Reddit show that our method outperforms the previous state-of-the-art methods.
arXiv Detail & Related papers (2021-09-05T08:07:52Z) - Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
arXiv Detail & Related papers (2020-10-26T21:34:39Z) - Enhancing Dialogue Generation via Multi-Level Contrastive Learning [57.005432249952406]
- Enhancing Dialogue Generation via Multi-Level Contrastive Learning [57.005432249952406]
We propose a multi-level contrastive learning paradigm to model the fine-grained quality of the responses with respect to the query.
A Rank-aware Calibration (RC) network is designed to construct the multi-level contrastive optimization objectives.
We build a Knowledge Inference (KI) component to capture the keyword knowledge from the reference during training and exploit such information to encourage the generation of informative words.
arXiv Detail & Related papers (2020-09-19T02:41:04Z) - Learning an Effective Context-Response Matching Model with
Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.