The More Similar, the Better? Associations between Latent Semantic Similarity and Emotional Experiences Differ across Conversation Contexts
- URL: http://arxiv.org/abs/2309.12646v3
- Date: Mon, 26 May 2025 14:22:15 GMT
- Title: The More Similar, the Better? Associations between Latent Semantic Similarity and Emotional Experiences Differ across Conversation Contexts
- Authors: Chen-Wei Yu, Yun-Shiuan Chuang, Alexandros N. Lotsos, Tabea Meier, Claudia M. Haase
- Abstract summary: Latent semantic similarity (LSS) is a measure of the similarity of information exchanges in a conversation. This work highlights the importance of context in understanding the emotional correlates of LSS.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent semantic similarity (LSS) is a measure of the similarity of information exchanges in a conversation. Challenging the assumption that higher LSS bears more positive psychological meaning, we propose that this association might depend on the type of conversation people have. On the one hand, the share-mind perspective would predict that higher LSS should be associated with more positive emotional experiences across the board. The broaden-and-build theory, on the other hand, would predict that higher LSS should be inversely associated with more positive emotional experiences specifically in pleasant conversations. Linear mixed modeling based on conversations among 50 long-term married couples supported the latter prediction. That is, partners experienced greater positive emotions when their overall information exchanges were more dissimilar in pleasant (but not conflict) conversations. This work highlights the importance of context in understanding the emotional correlates of LSS and exemplifies how modern natural language processing tools can be used to evaluate competing theory-driven hypotheses in social psychology.
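The LSS measure described in the abstract can be sketched as the cosine similarity between the two partners' aggregated turn embeddings. The snippet below is a minimal illustration with toy vectors; it is not the authors' exact pipeline (which used a specific embedding model and linear mixed models), and the variable names are hypothetical:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def latent_semantic_similarity(turns_a, turns_b):
    """One common operationalization of LSS: cosine similarity of the
    two speakers' mean turn embeddings (rows = turns, cols = dims)."""
    return cosine(np.mean(turns_a, axis=0), np.mean(turns_b, axis=0))

# Toy stand-in embeddings; in practice these would come from a
# sentence encoder applied to each conversational turn.
rng = np.random.default_rng(0)
partner_a = rng.normal(size=(5, 8))  # 5 turns, 8-dim embeddings
partner_b = rng.normal(size=(5, 8))
print(round(latent_semantic_similarity(partner_a, partner_b), 3))
```

Higher values indicate more similar overall information exchange; the paper's point is that whether higher is "better" depends on the conversation context.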
Related papers
- Emergence of Hierarchical Emotion Organization in Large Language Models [25.806354070542678]
We find that large language models (LLMs) naturally form hierarchical emotion trees that align with human psychological models. We also uncover systematic biases in emotion recognition across socioeconomic personas, with compounding misclassifications for intersectional, underrepresented groups. Our results hint at the potential of using cognitively-grounded theories for developing better model evaluations.
arXiv Detail & Related papers (2025-07-12T15:12:46Z) - Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models [75.85319609088354]
Sentient Agent as a Judge (SAGE) is an evaluation framework for large language models. SAGE instantiates a Sentient Agent that simulates human-like emotional changes and inner thoughts during interaction. SAGE provides a principled, scalable and interpretable tool for tracking progress toward genuinely empathetic and socially adept language agents.
arXiv Detail & Related papers (2025-05-01T19:06:10Z) - Large Language Models Often Say One Thing and Do Another [49.22262396351797]
We develop a novel evaluation benchmark called the Words and Deeds Consistency Test (WDCT).
The benchmark establishes a strict correspondence between word-based and deed-based questions across different domains.
The evaluation results reveal a widespread inconsistency between words and deeds across different LLMs and domains.
arXiv Detail & Related papers (2025-03-10T07:34:54Z) - Bridging Context Gaps: Enhancing Comprehension in Long-Form Social Conversations Through Contextualized Excerpts [17.33980126041374]
We focus on enhancing comprehension in small-group recorded conversations. We show approaches for effective contextualization to improve comprehension, readability, and empathy. We release the Human-annotated Salient Excerpts dataset to support future work.
arXiv Detail & Related papers (2024-12-28T01:29:53Z) - Personality Differences Drive Conversational Dynamics: A High-Dimensional NLP Approach [1.9336815376402723]
We map the trajectories of $N = 1655$ conversations between strangers into a high-dimensional space.
Our findings suggest that interlocutors with a larger difference in the personality dimension of openness influence each other to spend more time discussing a wider range of topics.
We also examine how participants' affect (emotion) changes from before to after a conversation, finding that a larger difference in extraversion predicts a larger difference in affect change.
arXiv Detail & Related papers (2024-10-14T19:48:31Z) - Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z) - Rel-A.I.: An Interaction-Centered Approach To Measuring Human-LM Reliance [73.19687314438133]
We study how reliance is affected by contextual features of an interaction.
We find that contextual characteristics significantly affect human reliance behavior.
Our results show that calibration and language quality alone are insufficient in evaluating the risks of human-LM interactions.
arXiv Detail & Related papers (2024-07-10T18:00:05Z) - Measuring Psychological Depth in Language Models [50.48914935872879]
We introduce the Psychological Depth Scale (PDS), a novel framework rooted in literary theory that measures an LLM's ability to produce authentic and narratively complex stories.
We empirically validate our framework by showing that humans can consistently evaluate stories based on PDS (Krippendorff's alpha = 0.72).
Surprisingly, GPT-4 stories either surpassed or were statistically indistinguishable from highly-rated human-written stories sourced from Reddit.
arXiv Detail & Related papers (2024-06-18T14:51:54Z) - From a Social Cognitive Perspective: Context-aware Visual Social Relationship Recognition [59.57095498284501]
We propose a novel approach that recognizes Contextual Social Relationships (ConSoR) from a social cognitive perspective.
We construct social-aware descriptive language prompts with social relationships for each image.
Impressively, ConSoR outperforms previous methods with a 12.2% gain on the People-in-Social-Context (PISC) dataset and a 9.8% increase on the People-in-Photo-Album (PIPA) benchmark.
arXiv Detail & Related papers (2024-06-12T16:02:28Z) - AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z) - Relationship between auditory and semantic entrainment using Deep Neural Networks (DNN) [0.0]
This study utilized state-of-the-art embeddings such as BERT and TRIpLet Loss network (TRILL) vectors to extract features for measuring semantic and auditory similarities of turns within dialogues.
We found people's tendency to entrain on semantic features more when compared to auditory features.
The findings of this study might assist in implementing the mechanism of entrainment in human-machine interaction (HMI).
arXiv Detail & Related papers (2023-12-27T14:50:09Z) - ChatGPT Evaluation on Sentence Level Relations: A Focus on Temporal, Causal, and Discourse Relations [52.26802326949116]
We quantitatively evaluate the performance of ChatGPT, an interactive large language model, on inter-sentential relations.
ChatGPT exhibits exceptional proficiency in detecting and reasoning about causal relations.
It is capable of identifying the majority of discourse relations with existing explicit discourse connectives, but the implicit discourse relation remains a formidable challenge.
arXiv Detail & Related papers (2023-04-28T13:14:36Z) - Learning to Memorize Entailment and Discourse Relations for Persona-Consistent Dialogues [8.652711997920463]
Existing works have improved the performance of dialogue systems by intentionally learning interlocutor personas with sophisticated network structures.
This study proposes a method of learning to memorize entailment and discourse relations for persona-consistent dialogue tasks.
arXiv Detail & Related papers (2023-01-12T08:37:00Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most often used nonverbal cue is speaking activity; the most common computational method is support vector machines; and the typical interaction setting is a meeting of 3-4 persons equipped with microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z) - Conversational Analysis of Daily Dialog Data using Polite Emotional Dialogue Acts [15.224826239931813]
This article presents findings of polite emotional dialogue act associations.
We confirm our hypothesis that the utterances with the emotion classes Anger and Disgust are more likely to be impolite.
A less expectable phenomenon occurs with dialogue acts Inform and Commissive which contain more polite utterances than Question and Directive.
arXiv Detail & Related papers (2022-05-05T21:03:47Z) - A Neural Network-Based Linguistic Similarity Measure for Entrainment in Conversations [12.052672647509732]
Linguistic entrainment is a phenomenon where people tend to mimic each other in conversation.
Most of the current similarity measures are based on bag-of-words approaches.
We propose to use a neural network model to perform the similarity measure for entrainment.
arXiv Detail & Related papers (2021-09-04T19:48:17Z) - Who Responded to Whom: The Joint Effects of Latent Topics and Discourse in Conversation Structure [53.77234444565652]
We identify the responding relations in the conversation discourse, which link response utterances to their initiations.
We propose a model to learn latent topics and discourse in word distributions, and predict pairwise initiation-response links.
Experimental results on both English and Chinese conversations show that our model significantly outperforms the previous state of the arts.
arXiv Detail & Related papers (2021-04-17T17:46:00Z) - Decoupling entrainment from consistency using deep neural networks [14.823143667165382]
Isolating the effect of consistency, i.e., speakers adhering to their individual styles, is a critical part of the analysis of entrainment.
We propose to treat speakers' initial vocal features as confounds for the prediction of subsequent outputs.
Using two existing neural approaches to deconfounding, we define new measures of entrainment that control for consistency.
arXiv Detail & Related papers (2020-11-03T17:30:05Z)
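The deconfounding idea in the last entry, treating a speaker's initial features as confounds so that entrainment is not conflated with consistency, can be illustrated with a simple OLS residualization. This is a linear stand-in for the paper's neural deconfounding approaches, and the variable names and data-generating process below are purely hypothetical:

```python
import numpy as np

def residualize(y, confound):
    """Remove the linear effect of a confound from y via ordinary
    least squares (intercept included), returning the residuals."""
    X = np.column_stack([np.ones_like(confound), confound])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Toy data: speaker A's later feature depends on A's own initial
# feature (consistency) and on partner B's feature (entrainment).
rng = np.random.default_rng(1)
n = 200
a_init = rng.normal(size=n)
b_later = 0.7 * a_init + rng.normal(scale=0.5, size=n)
a_later = 0.6 * a_init + 0.3 * b_later + rng.normal(scale=0.5, size=n)

# Naive entrainment: raw correlation between the two speakers.
naive = np.corrcoef(a_later, b_later)[0, 1]
# Deconfounded entrainment: correlate after removing each series'
# linear dependence on A's initial feature (the consistency confound).
deconf = np.corrcoef(residualize(a_later, a_init),
                     residualize(b_later, a_init))[0, 1]
print(round(naive, 2), round(deconf, 2))
```

The residualized correlation controls for the shared influence of the initial feature, which is the sense in which such measures "control for consistency".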
This list is automatically generated from the titles and abstracts of the papers in this site.