When to generate hedges in peer-tutoring interactions
- URL: http://arxiv.org/abs/2307.15582v1
- Date: Fri, 28 Jul 2023 14:29:19 GMT
- Title: When to generate hedges in peer-tutoring interactions
- Authors: Alafate Abulimiti, Chloé Clavel, Justine Cassell
- Abstract summary: The study uses a naturalistic face-to-face dataset annotated for natural language turns, conversational strategies, tutoring strategies, and nonverbal behaviours.
Results show that embedding layers, which capture the semantic information of the previous turns, significantly improve the model's performance.
We discover that the eye gaze of both the tutor and the tutee has a significant impact on hedge prediction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores the application of machine learning techniques to predict
where hedging occurs in peer-tutoring interactions. The study uses a
naturalistic face-to-face dataset annotated for natural language turns,
conversational strategies, tutoring strategies, and nonverbal behaviours. These
elements are processed into a vector representation of the previous turns,
which serves as input to several machine learning models. Results show that
embedding layers, which capture the semantic information of the previous turns,
significantly improve the model's performance. Additionally, the study
provides insights into the importance of various features, such as
interpersonal rapport and nonverbal behaviours, in predicting hedges by using
Shapley values for feature explanation. We discover that the eye gaze of both
the tutor and the tutee has a significant impact on hedge prediction. We
further validate this observation through a follow-up ablation study.
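The feature-importance analysis above uses Shapley values, which attribute a prediction to individual features by averaging each feature's marginal contribution over all coalitions of the remaining features. The following is a minimal sketch of that idea only, not the paper's implementation: the toy additive "model", its weights, and its feature values are made-up assumptions for illustration.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: the prediction is a weighted sum of the
# features present in a coalition; an absent feature contributes its
# baseline of 0. Weights and feature values are illustrative only.
WEIGHTS = [0.5, 1.5, -1.0]
FEATURES = [2.0, 1.0, 3.0]


def value(coalition):
    """Model output when only the features in `coalition` are present."""
    return sum(WEIGHTS[i] * FEATURES[i] for i in coalition)


def shapley_values(n):
    """Exact Shapley values by enumerating all coalitions (O(2^n))."""
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                # Marginal contribution of feature i to coalition S
                phi += weight * (value(subset + (i,)) - value(subset))
        phis.append(phi)
    return phis


phis = shapley_values(len(FEATURES))
# For an additive model, phi_i reduces to that feature's own
# contribution (w_i * x_i), and the values sum to the full prediction.
print(phis)
```

Exact enumeration is exponential in the number of features, so practical analyses like the one described above typically rely on approximations (e.g., sampling-based or model-specific estimators such as those in the `shap` library).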
Related papers
- A distributional simplicity bias in the learning dynamics of transformers [50.91742043564049]
We show that transformers, trained on natural language data, also display a simplicity bias.
Specifically, they sequentially learn many-body interactions among input tokens, reaching a saturation point in the prediction error for low-degree interactions.
This approach opens up the possibilities of studying how interactions of different orders in the data affect learning, in natural language processing and beyond.
arXiv Detail & Related papers (2024-10-25T15:39:34Z) - What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z) - On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z) - Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how such a paradigm should be done in imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
arXiv Detail & Related papers (2023-05-26T14:40:46Z) - An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z) - Measuring the Impact of (Psycho-)Linguistic and Readability Features and Their Spill Over Effects on the Prediction of Eye Movement Patterns [27.799032561722893]
We report on experiments with two eye-tracking corpora of naturalistic reading and two language models (BERT and GPT-2).
In all experiments, we test effects of a broad spectrum of features for predicting human reading behavior that fall into five categories (syntactic complexity, lexical richness, register-based multiword combinations, readability and psycholinguistic word properties).
Our experiments show that both the features included and the architecture of the transformer-based language models play a role in predicting multiple eye-tracking measures during naturalistic reading.
arXiv Detail & Related papers (2022-03-15T17:13:45Z) - An Interactive Visualization Tool for Understanding Active Learning [12.345164513513671]
We present an interactive visualization tool to elucidate the training process of active learning.
The tool enables one to select a sample of interesting data points, view how their prediction values change at different querying stages, and thus better understand when and how active learning works.
arXiv Detail & Related papers (2021-11-09T03:33:26Z) - Interpreting and improving deep-learning models with reality checks [13.287382944078562]
This chapter covers recent work aiming to interpret models by attributing importance to features and feature groups for a single prediction.
We show how these attributions can be used to directly improve the generalization of a neural network or to distill it into a simple model.
arXiv Detail & Related papers (2021-08-16T00:58:15Z) - Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z) - How does this interaction affect me? Interpretable attribution for feature interactions [19.979889568380464]
We propose an interaction attribution and detection framework called Archipelago.
Our experiments on standard annotation labels indicate our approach provides significantly more interpretable explanations than comparable methods.
We also provide accompanying visualizations of our approach that give new insights into deep neural networks.
arXiv Detail & Related papers (2020-06-19T05:14:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.