The exception of humour: Iconicity, Phonemic Surprisal, Memory Recall, and Emotional Associations
- URL: http://arxiv.org/abs/2502.01682v1
- Date: Sun, 02 Feb 2025 05:31:46 GMT
- Title: The exception of humour: Iconicity, Phonemic Surprisal, Memory Recall, and Emotional Associations
- Authors: Alexander Kilpatrick, Maria Flaksman
- Abstract summary: This study explores the relationships between humor, phonemic bigram surprisal, emotional valence, and memory recall. Words with negative associations often exhibit greater surprisal and are easier to recall. While associated with positive emotions, humorous words also display heightened surprisal and enhanced memorability.
- Score: 50.59120569845975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This meta-study explores the relationships between humor, phonemic bigram surprisal, emotional valence, and memory recall. Prior research indicates that words with higher phonemic surprisal are more readily remembered, suggesting that unpredictable phoneme sequences promote long-term memory recall. Emotional valence is another well-documented factor influencing memory, with negative experiences and stimuli typically being remembered more easily than positive ones. Building on existing findings, this study highlights that words with negative associations often exhibit greater surprisal and are easier to recall. Humor, however, presents an exception: while associated with positive emotions, humorous words also display heightened surprisal and enhanced memorability.
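As a concrete illustration of the surprisal measure at the centre of this study, below is a minimal sketch of how phonemic bigram surprisal can be estimated from a phoneme-transcribed word list. The toy transcriptions, word-boundary symbol, and add-one smoothing are illustrative assumptions, not the authors' exact procedure.

```python
import math
from collections import Counter

def bigram_surprisal(words):
    """Estimate per-word phonemic bigram surprisal from phoneme-transcribed words.

    Each word is a list of phoneme symbols, e.g. ['k', 'ae', 't'] for "cat".
    The surprisal of a bigram (a, b) is -log2 P(b | a); a word's surprisal
    here is the sum over its bigrams, with '#' marking word boundaries.
    """
    unigrams, bigrams = Counter(), Counter()
    for w in words:
        seq = ['#'] + list(w) + ['#']
        for a, b in zip(seq, seq[1:]):
            unigrams[a] += 1
            bigrams[(a, b)] += 1

    vocab = len(unigrams)

    def surprisal(word):
        seq = ['#'] + list(word) + ['#']
        total = 0.0
        for a, b in zip(seq, seq[1:]):
            # Add-one smoothing so unseen bigrams still get a finite surprisal.
            p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab)
            total += -math.log2(p)
        return total

    return surprisal

# Toy usage: higher values indicate less predictable phoneme sequences.
estimate = bigram_surprisal([['k', 'ae', 't'], ['b', 'ae', 't'], ['s', 'p', 'l', 'ih', 'ng', 'k']])
print(estimate(['s', 'p', 'l', 'ih', 'ng', 'k']))  # nonsense-like word: higher surprisal
print(estimate(['k', 'ae', 't']))                  # common pattern: lower surprisal
```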
Related papers
- The Emotion-Memory Link: Do Memorability Annotations Matter for Intelligent Systems? [1.960641679592198]
We investigate the relationship between perceived group emotions (Pleasure-Arousal) and group memorability in the context of conversational interactions. Our results show that the observed relationship between affect and memorability annotations cannot be reliably distinguished from what might be expected under random chance.
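The chance-level comparison described above can be illustrated with a simple permutation test on paired annotations; the toy ratings and variable names below are hypothetical and do not reproduce the paper's analysis.

```python
import numpy as np

def permutation_test_correlation(affect, memorability, n_perm=10_000, seed=0):
    """Two-sided permutation test for the correlation between two annotation vectors.

    Shuffles the memorability labels to build a null distribution of
    correlations, then returns the observed correlation and its p-value.
    """
    rng = np.random.default_rng(seed)
    affect = np.asarray(affect, dtype=float)
    memorability = np.asarray(memorability, dtype=float)

    observed = np.corrcoef(affect, memorability)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = np.corrcoef(affect, rng.permutation(memorability))[0, 1]

    p_value = np.mean(np.abs(null) >= abs(observed))
    return observed, p_value

# Hypothetical example: pleasure-arousal ratings vs. memorability annotations.
r, p = permutation_test_correlation([0.2, 0.5, 0.1, 0.9, 0.4], [1, 0, 0, 1, 1])
print(f"r={r:.2f}, p={p:.3f}")
```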
arXiv Detail & Related papers (2025-07-18T17:06:34Z)
- Disentangle Identity, Cooperate Emotion: Correlation-Aware Emotional Talking Portrait Generation [63.94836524433559]
DICE-Talk is a framework that disentangles identity from emotion and models cooperation among emotions with similar characteristics.
We develop a disentangled emotion embedder that jointly models audio-visual emotional cues through cross-modal attention.
Second, we introduce a correlation-enhanced emotion conditioning module with learnable Emotion Banks.
Third, we design an emotion discrimination objective that enforces affective consistency during the diffusion process.
arXiv Detail & Related papers (2025-04-25T05:28:21Z)
- Analysis of Emotion in Rumour Threads on Social Media [47.30745945629545]
We focus on the interface between emotion and rumours in threaded discourses, building on the surprisingly sparse literature on the topic.
We provide a comprehensive analytical emotion framework, contrasting rumour and non-rumour cases using existing NLP datasets.
Our framework reveals several findings: rumours exhibit more negative sentiment and emotions, including anger, fear and pessimism, while non-rumours evoke more positive emotions.
arXiv Detail & Related papers (2025-02-23T12:57:40Z)
- Decoding Emotion: Speech Perception Patterns in Individuals with Self-reported Depression [3.5047438945401717]
This study examines the relationship between self-reported depression and the perception of affective speech within the Indian population. No significant differences between the depression and no-depression groups were observed for any of the emotional stimuli. Significantly higher PANAS scores in the depression group than in the no-depression group indicate the impact of predisposed mood on current mood status.
arXiv Detail & Related papers (2024-12-28T16:54:25Z)
- MADial-Bench: Towards Real-world Evaluation of Memory-Augmented Dialogue Generation [15.64077949677469]
We present a novel Memory-Augmented Dialogue Benchmark (MADail-Bench) to evaluate the effectiveness of memory-augmented dialogue systems (MADS).
The benchmark assesses two tasks separately: memory retrieval and memory recognition with the incorporation of both passive and proactive memory recall data.
Results from cutting-edge embedding models and large language models on this benchmark indicate the potential for further advancement.
arXiv Detail & Related papers (2024-09-23T17:38:41Z)
- The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress [71.06453250061489]
The Multimodal Sentiment Analysis Challenge (MuSe) 2022 is dedicated to multimodal sentiment and emotion recognition.
For this year's challenge, we feature three datasets: (i) the Passau Spontaneous Football Coach Humor dataset that contains audio-visual recordings of German football coaches, labelled for the presence of humour; (ii) the Hume-Reaction dataset in which reactions of individuals to emotional stimuli have been annotated with respect to seven emotional expression intensities; and (iii) the Ulm-Trier Social Stress Test dataset comprising audio-visual data labelled with continuous emotion values of people in stressful dispositions.
arXiv Detail & Related papers (2022-06-23T13:34:33Z)
- LaMemo: Language Modeling with Look-Ahead Memory [50.6248714811912]
We propose Look-Ahead Memory (LaMemo) that enhances the recurrence memory by incrementally attending to the right-side tokens.
LaMemo embraces bi-directional attention and segment recurrence with an additional overhead only linearly proportional to the memory length.
Experiments on widely used language modeling benchmarks demonstrate its superiority over the baselines equipped with different types of memory.
arXiv Detail & Related papers (2022-04-15T06:11:25Z)
- "splink" is happy and "phrouth" is scary: Emotion Intensity Analysis for Nonsense Words [15.425333719115262]
We conduct a best-worst scaling crowdsourcing study in which participants assign intensity scores for joy, sadness, anger, disgust, fear, and surprise to 272 nonsense words.
We develop character-level and phonology-based intensity regressors and evaluate them on real and nonsense words.
The data analysis reveals that some phonetic patterns show clear differences between emotion intensities.
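A character-level intensity regressor of the kind described above can be sketched with character n-gram features and ridge regression; the training words, scores, and feature choices below are hypothetical stand-ins for the paper's setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: nonsense words with crowd-sourced joy intensities in [0, 1].
words = ["splink", "phrouth", "blonky", "grimp", "sneeb", "crolth"]
joy_scores = [0.9, 0.1, 0.7, 0.3, 0.8, 0.2]

# Character n-grams (1-3) stand in for a character-level feature set.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    Ridge(alpha=1.0),
)
model.fit(words, joy_scores)

# Predicted joy intensities for unseen nonsense words.
print(model.predict(["splonk", "phrimth"]))
```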
arXiv Detail & Related papers (2022-02-24T14:48:43Z)
- Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
In this paper, we aim to explicitly characterize and control the intensity of emotion.
We propose to disentangle the speaker style from linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of emotion embedding.
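One simple way to picture explicit intensity control in a continuous style-embedding space is interpolation toward an emotion prototype. The sketch below illustrates that idea only; it is not the conversion model proposed in the paper, and the embeddings are made up.

```python
import numpy as np

def control_intensity(neutral_style, emotion_prototype, intensity):
    """Interpolate between a neutral style embedding and an emotion prototype.

    intensity=0 leaves the utterance neutral; intensity=1 applies the full
    emotion prototype; intermediate values scale the emotional colouring.
    """
    return neutral_style + intensity * (emotion_prototype - neutral_style)

# Hypothetical 4-dimensional style embeddings for illustration.
neutral = np.array([0.1, 0.0, 0.2, 0.1])
happy_prototype = np.array([0.8, 0.4, 0.1, 0.6])
print(control_intensity(neutral, happy_prototype, 0.5))
```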
arXiv Detail & Related papers (2022-01-10T02:11:25Z)
- Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models [95.88620740809004]
We study differences in the narrative flow of events in autobiographical versus imagined stories using GPT-3.
We found that imagined stories have higher sequentiality than autobiographical stories.
In comparison to imagined stories, autobiographical stories contain more concrete words and words related to the first person.
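Sequentiality-style measures of narrative flow can be approximated by comparing a sentence's negative log-likelihood under topic-only versus topic-plus-context conditioning with a causal language model. The sketch below uses GPT-2 as a stand-in and may differ from the paper's exact GPT-3-based formulation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_nll(prompt, sentence):
    """Mean negative log-likelihood (nats) of `sentence` given `prompt`."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    sent_ids = tok(" " + sentence, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, sent_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # score only the sentence tokens
    with torch.no_grad():
        return model(input_ids, labels=labels).loss.item()

def sequentiality(topic, sentences):
    """Average, over the story, of how much the preceding sentences
    (beyond the topic alone) reduce each sentence's NLL."""
    gains = []
    for i, s in enumerate(sentences):
        topic_only = sentence_nll(topic, s)
        in_context = sentence_nll(topic + " " + " ".join(sentences[:i]), s)
        gains.append(topic_only - in_context)
    return sum(gains) / len(gains)
```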
arXiv Detail & Related papers (2022-01-07T20:10:47Z)
- Shared memories driven by the intrinsic memorability of items [0.0]
Recent work has revealed a strong sway of the visual world itself in influencing what we remember and forget.
Research has revealed that the brain is sensitive to memorability both rapidly and automatically during late perception.
arXiv Detail & Related papers (2021-04-14T16:03:27Z)
- Assessment of Unconsciousness for Memory Consolidation Using EEG Signals [20.486281623777774]
We assess the unconsciousness in terms of memory consolidation using electroencephalogram signals.
Spindle power in the central, parietal, and occipital regions during unconsciousness was positively correlated with location-memory performance.
There was also a negative correlation between delta connectivity and word-pairs memory, alpha connectivity and location memory, and spindle connectivity and word-pairs memory.
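Band-power/behaviour correlations of the kind reported above can be computed, in outline, as follows; the synthetic EEG, sampling rate, and 12-16 Hz spindle band are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

def band_power(eeg, fs, low, high):
    """Average power of a single-channel EEG signal within a frequency band,
    estimated from the Welch power spectral density."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Hypothetical data: one EEG channel per participant plus their memory scores.
fs = 250  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg_per_subject = [rng.standard_normal(fs * 60) for _ in range(12)]  # 1 minute each
memory_scores = rng.uniform(0, 1, size=12)

# The sleep-spindle band is commonly taken as roughly 12-16 Hz.
spindle_power = [band_power(x, fs, 12, 16) for x in eeg_per_subject]
r, p = pearsonr(spindle_power, memory_scores)
print(f"r={r:.2f}, p={p:.3f}")
```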
arXiv Detail & Related papers (2020-05-15T06:49:42Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) and their occurring relationships (relational memory).
We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
- Detecting Emotion Primitives from Speech and their use in discerning Categorical Emotions [16.886826928295203]
Emotion plays an essential role in human-to-human communication, enabling us to convey feelings such as happiness, frustration, and sincerity.
This work investigated how emotion primitives can be used to detect categorical emotions such as happiness, disgust, contempt, anger, and surprise from neutral speech.
Results indicated that arousal, followed by dominance, was a better detector of such emotions.
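A detector that maps emotion primitives (e.g. valence, arousal, dominance) to categorical emotions can be sketched as a simple classifier over those three dimensions; the primitive values, labels, and ablation below are illustrative, not the paper's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-utterance emotion primitives: (valence, arousal, dominance) in [0, 1].
primitives = np.array([
    [0.9, 0.8, 0.6],   # happy-sounding utterance
    [0.2, 0.9, 0.7],   # angry
    [0.3, 0.3, 0.2],   # disgust
    [0.8, 0.7, 0.5],   # happy
    [0.1, 0.8, 0.8],   # angry
    [0.4, 0.2, 0.3],   # disgust
])
labels = ["happiness", "anger", "disgust", "happiness", "anger", "disgust"]

# A simple multinomial classifier over the primitives stands in for the detector.
clf = LogisticRegression(max_iter=1000).fit(primitives, labels)
print(clf.predict([[0.85, 0.75, 0.55]]))  # expected: happiness

# Dropping a primitive (e.g. valence) and refitting gives a crude sense of how
# much each dimension contributes to detecting categorical emotions.
clf_no_valence = LogisticRegression(max_iter=1000).fit(primitives[:, 1:], labels)
print(clf_no_valence.score(primitives[:, 1:], labels))
```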
arXiv Detail & Related papers (2020-01-31T03:11:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.