Decoupling entrainment from consistency using deep neural networks
- URL: http://arxiv.org/abs/2011.01860v1
- Date: Tue, 3 Nov 2020 17:30:05 GMT
- Title: Decoupling entrainment from consistency using deep neural networks
- Authors: Andreas Weise, Rivka Levitan
- Abstract summary: Isolating the effect of consistency, i.e., speakers adhering to their individual styles, is a critical part of the analysis of entrainment.
We propose to treat speakers' initial vocal features as confounds for the prediction of subsequent outputs.
Using two existing neural approaches to deconfounding, we define new measures of entrainment that control for consistency.
- Score: 14.823143667165382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human interlocutors tend to engage in adaptive behavior known as entrainment
to become more similar to each other. Isolating the effect of consistency,
i.e., speakers adhering to their individual styles, is a critical part of the
analysis of entrainment. We propose to treat speakers' initial vocal features
as confounds for the prediction of subsequent outputs. Using two existing
neural approaches to deconfounding, we define new measures of entrainment that
control for consistency. These successfully discriminate real interactions from
fake ones. Interestingly, our stricter methods correlate with social variables
in the opposite direction from previous measures that do not account for
consistency. These results demonstrate the advantages of using neural networks
to model entrainment, and raise questions regarding how to interpret prior
associations of conversation quality with entrainment measures that do not
account for consistency.
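The paper's deconfounding is done with neural approaches; as a rough illustration of the underlying idea only (not the authors' method), entrainment controlled for consistency can be sketched as the predictive gain a partner's preceding turn adds on top of the speaker's own initial features. All feature names and the linear model below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy turn-pair data:
#   x_init : speaker B's initial vocal feature (the confound, "consistency")
#   x_a    : speaker A's preceding-turn feature
#   y_b    : speaker B's subsequent-turn feature (prediction target)
n = 500
x_init = rng.normal(size=n)
x_a = rng.normal(size=n)
y_b = 0.8 * x_init + 0.3 * x_a + 0.1 * rng.normal(size=n)  # consistency + entrainment

def r2(pred, y):
    """Coefficient of determination of in-sample predictions."""
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

def ols_fit_predict(X, y):
    """Least-squares fit with intercept; returns in-sample predictions."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ beta

# Naive measure: how well A's turn alone predicts B's next turn.
naive = r2(ols_fit_predict(x_a[:, None], y_b), y_b)

# Deconfounded measure: gain in fit from adding x_a on top of the confound.
base = r2(ols_fit_predict(x_init[:, None], y_b), y_b)
full = r2(ols_fit_predict(np.column_stack([x_init, x_a]), y_b), y_b)
entrainment_gain = full - base

print(f"naive R^2 = {naive:.3f}, gain over consistency-only = {entrainment_gain:.3f}")
```

The gain is small relative to the naive score here because most of B's behavior is explained by B's own style, which is exactly the confounding the paper targets.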
Related papers
- A distributional simplicity bias in the learning dynamics of transformers [50.91742043564049]
We show that transformers, trained on natural language data, also display a simplicity bias.
Specifically, they sequentially learn many-body interactions among input tokens, reaching a saturation point in the prediction error for low-degree interactions.
This approach opens up the possibilities of studying how interactions of different orders in the data affect learning, in natural language processing and beyond.
arXiv Detail & Related papers (2024-10-25T15:39:34Z) - Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks [3.001674556825579]
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems.
We identify a correlation between these two dimensions that reflects the similarity relations perceived by humans in cognitive tasks.
This presents a first step toward understanding the relationship between convexity and human-machine alignment.
arXiv Detail & Related papers (2024-09-10T09:32:16Z) - Continual Learning via Sequential Function-Space Variational Inference [65.96686740015902]
We propose an objective derived by formulating continual learning as sequential function-space variational inference.
Compared to objectives that directly regularize neural network predictions, the proposed objective allows for more flexible variational distributions.
We demonstrate that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods.
arXiv Detail & Related papers (2023-12-28T18:44:32Z) - Relationship between auditory and semantic entrainment using Deep Neural Networks (DNN) [0.0]
This study utilized state-of-the-art embeddings such as BERT and TRIpLet Loss network (TRILL) vectors to extract features for measuring semantic and auditory similarities of turns within dialogues.
We found that people tend to entrain more on semantic features than on auditory features.
The findings of this study might assist in implementing the mechanism of entrainment in human-machine interaction (HMI).
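The study measures entrainment as the similarity of turn embeddings (BERT for semantic, TRILL for auditory features). A minimal sketch of that similarity computation, using placeholder vectors instead of real embeddings, might look like this:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def turn_entrainment(embs_a, embs_b):
    """Mean similarity between adjacent turns of speakers A and B.

    embs_a[i] is A's i-th turn embedding; embs_b[i] is B's reply.
    In the study these would be BERT (semantic) or TRILL (auditory)
    vectors; here they are placeholder arrays.
    """
    return float(np.mean([cosine(a, b) for a, b in zip(embs_a, embs_b)]))

rng = np.random.default_rng(1)
base = rng.normal(size=(5, 16))
embs_a = base + 0.1 * rng.normal(size=(5, 16))  # A's turns
embs_b = base + 0.1 * rng.normal(size=(5, 16))  # replies near A's turns
embs_rand = rng.normal(size=(5, 16))            # unrelated replies

print(turn_entrainment(embs_a, embs_b))     # high: strong entrainment
print(turn_entrainment(embs_a, embs_rand))  # low: no entrainment
```

Running the same score over semantic versus auditory embedding spaces is what allows the two kinds of entrainment to be compared.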
arXiv Detail & Related papers (2023-12-27T14:50:09Z) - Improving Language Models Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary [65.268245109828]
Non-human-like behaviour of contemporary pre-trained language models (PLMs) is a leading factor undermining their trustworthiness.
A striking phenomenon is the generation of inconsistent predictions, which produces contradictory results.
We propose a practical approach that alleviates the inconsistent behaviour issue by improving the PLMs' awareness of conceptual roles.
arXiv Detail & Related papers (2023-10-24T06:15:15Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Processing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z) - Shades of confusion: Lexical uncertainty modulates ad hoc coordination in an interactive communication task [8.17947290421835]
We propose a communication task based on color-concept associations.
In Experiment 1, we establish several key properties of the mental representations of these expectations.
In Experiment 2, we examine the downstream consequences of these representations for communication.
arXiv Detail & Related papers (2021-05-13T20:42:28Z) - Interpretable Social Anchors for Human Trajectory Forecasting in Crowds [84.20437268671733]
We propose a neural network-based system to predict human trajectory in crowds.
We learn interpretable rule-based intents, and then utilise the expressibility of neural networks to model scene-specific residual.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
arXiv Detail & Related papers (2021-05-07T09:22:34Z) - On the human evaluation of audio adversarial examples [1.7006003864727404]
Adversarial examples are inputs intentionally perturbed to produce a wrong prediction without being noticed.
High fooling rates of proposed adversarial perturbation strategies are only valuable if the perturbations are not detectable.
We demonstrate that the metrics employed by convention are not a reliable measure of the perceptual similarity of adversarial examples in the audio domain.
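The "metrics employed by convention" are distortion measures such as the signal-to-noise ratio of the perturbation. A minimal sketch of that computation (a 440 Hz toy signal stands in for real audio; the paper's point is that a high SNR does not by itself guarantee imperceptibility):

```python
import numpy as np

def snr_db(clean, perturbed):
    """Signal-to-noise ratio (dB) of an adversarial perturbation,
    a conventional distortion metric in the audio attack literature."""
    noise = perturbed - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(2)
# 1 second of a 440 Hz tone at 16 kHz, plus a small additive perturbation.
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
perturbed = clean + 0.01 * rng.normal(size=clean.shape)

print(f"SNR = {snr_db(clean, perturbed):.1f} dB")
```

Human listening tests, rather than such waveform-level ratios, are what the paper argues should ground claims of imperceptibility.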
arXiv Detail & Related papers (2020-01-23T10:56:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.