The truth is no diaper: Human and AI-generated associations to emotional words
- URL: http://arxiv.org/abs/2511.04077v1
- Date: Thu, 06 Nov 2025 05:32:04 GMT
- Authors: Špela Vintar, Jan Jona Javoršek
- Abstract summary: We compare the associative behaviour of humans with that of large language models. We explore associations to emotionally loaded words and try to determine whether large language models generate associations in a similar way to humans.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human word associations are a well-known method of gaining insight into the internal mental lexicon, but the responses spontaneously offered by human participants to word cues are not always predictable, as they may be influenced by personal experience, emotions, or individual cognitive styles. The ability to form associative links between seemingly unrelated concepts can be a driving mechanism of creativity. We compare the associative behaviour of humans with that of large language models. More specifically, we explore associations to emotionally loaded words and try to determine whether large language models generate associations in a similar way to humans. We find that the overlap between humans and LLMs is moderate, but also that the associations of LLMs tend to amplify the underlying emotional load of the stimulus, and that they tend to be more predictable and less creative than human ones.
Related papers
- A Unified Spoken Language Model with Injected Emotional-Attribution Thinking for Human-like Interaction [50.05919688888947]
This paper presents a unified spoken language model for emotional intelligence, enhanced by a novel data construction strategy termed Injected Emotional-Attribution Thinking (IEAT). IEAT incorporates user emotional states and their underlying causes into the model's internal reasoning process, enabling emotion-aware reasoning to be internalized rather than treated as explicit supervision. Experiments on the Human-like Spoken Dialogue Systems Challenge (HumDial) Emotional Intelligence benchmark demonstrate that the proposed approach achieves top-ranked performance across emotional trajectory modeling, emotional reasoning, and empathetic response generation.
arXiv Detail & Related papers (2026-01-08T14:07:30Z) - Large Language Models are Highly Aligned with Human Ratings of Emotional Stimuli [0.62914438169038]
Emotions exert an immense influence over human behavior and cognition in both commonplace and high-stress tasks. Discussions should be informed by an understanding of how large language models evaluate emotionally loaded stimuli or situations. A model's alignment with human behavior in these cases can inform the effectiveness of LLMs for certain roles or interactions.
arXiv Detail & Related papers (2025-08-19T19:22:00Z) - Heartificial Intelligence: Exploring Empathy in Language Models [8.517406772939292]
Small and large language models consistently outperformed humans on cognitive empathy tasks. Despite their cognitive strengths, both small and large language models showed significantly lower affective empathy compared to human participants.
arXiv Detail & Related papers (2025-07-30T14:09:33Z) - Emergence of Hierarchical Emotion Organization in Large Language Models [25.806354070542678]
We find that large language models (LLMs) naturally form hierarchical emotion trees that align with human psychological models. We also uncover systematic biases in emotion recognition across socioeconomic personas, with compounding misclassifications for intersectional, underrepresented groups. Our results hint at the potential of using cognitively-grounded theories for developing better model evaluations.
arXiv Detail & Related papers (2025-07-12T15:12:46Z) - RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents [67.46032287312339]
Large language models (LLMs) excel at logical and algorithmic reasoning, yet their emotional intelligence (EQ) still lags far behind their cognitive prowess. We introduce RLVER, the first end-to-end reinforcement learning framework that leverages verifiable emotion rewards from simulated users. Our results show that RLVER is a practical route toward emotionally intelligent and broadly capable language agents.
arXiv Detail & Related papers (2025-07-03T18:33:18Z) - AI shares emotion with humans across languages and cultures [12.530921452568291]
We assess human-AI emotional alignment across linguistic-cultural groups and model families. Our analyses reveal that LLM-derived emotion spaces are structurally congruent with human perception. We show that model expressions can be stably and naturally modulated across distinct emotion categories.
arXiv Detail & Related papers (2025-06-11T14:42:30Z) - Shaping Shared Languages: Human and Large Language Models' Inductive Biases in Emergent Communication [0.09999629695552195]
We investigate how artificial languages evolve when optimised for inductive biases in humans and large language models (LLMs). We show that referentially grounded vocabularies emerge that enable reliable communication in all conditions, even when humans collaborate.
arXiv Detail & Related papers (2025-03-06T12:47:54Z) - How Deep is Love in LLMs' Hearts? Exploring Semantic Size in Human-like Cognition [75.11808682808065]
This study investigates whether large language models (LLMs) exhibit similar tendencies in understanding semantic size. Our findings reveal that multi-modal training is crucial for LLMs to achieve more human-like understanding. Lastly, we examine whether LLMs are influenced by attention-grabbing headlines with larger semantic sizes in a real-world web shopping scenario.
arXiv Detail & Related papers (2025-03-01T03:35:56Z) - Human-like conceptual representations emerge from language prediction [72.5875173689788]
Large language models (LLMs) trained exclusively through next-token prediction over language data exhibit remarkably human-like behaviors. Are these models developing concepts akin to humans, and if so, how are such concepts represented and organized? Our results demonstrate that LLMs can flexibly derive concepts from linguistic descriptions in relation to contextual cues about other concepts. These findings establish that structured, human-like conceptual representations can naturally emerge from language prediction without real-world grounding.
arXiv Detail & Related papers (2025-01-21T23:54:17Z) - Measuring Psychological Depth in Language Models [50.48914935872879]
We introduce the Psychological Depth Scale (PDS), a novel framework rooted in literary theory that measures an LLM's ability to produce authentic and narratively complex stories.
We empirically validate our framework by showing that humans can consistently evaluate stories based on PDS (0.72 Krippendorff's alpha).
Surprisingly, GPT-4 stories either surpassed or were statistically indistinguishable from highly-rated human-written stories sourced from Reddit.
arXiv Detail & Related papers (2024-06-18T14:51:54Z) - Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes [50.569762345799354]
We argue that two issues must be tackled at the same time: (i) identifying which word is the cause for the other's emotion from his or her utterance and (ii) reflecting those specific words in the response generation.
Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with no word-level label.
arXiv Detail & Related papers (2021-09-18T04:22:49Z) - Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.