Features of Perceived Metaphoricity on the Discourse Level: Abstractness and Emotionality
- URL: http://arxiv.org/abs/2205.08939v1
- Date: Wed, 18 May 2022 14:09:10 GMT
- Title: Features of Perceived Metaphoricity on the Discourse Level: Abstractness and Emotionality
- Authors: Prisca Piccirilli and Sabine Schulte im Walde
- Abstract summary: Research on metaphorical language has shown ties between abstractness and emotionality with regard to metaphoricity.
This paper explores which textual and perceptual features human annotators perceive as important for the metaphoricity of discourse.
- Score: 13.622570558506265
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Research on metaphorical language has shown ties between abstractness and
emotionality with regard to metaphoricity; prior work is however limited to the
word and sentence levels, and to date there is no empirical study
establishing the extent to which this is also true on the discourse level. This
paper explores which textual and perceptual features human annotators perceive
as important for the metaphoricity of discourses and expressions, and addresses
two research questions more specifically. First, is a metaphorically-perceived
discourse more abstract and more emotional in comparison to a
literally-perceived discourse? Second, is a metaphorical expression preceded by
a more metaphorical/abstract/emotional context than a synonymous literal
alternative? We used a dataset of 1,000 corpus-extracted discourses for which
crowdsourced annotators (1) provided judgements on whether they perceived the
discourses as more metaphorical or more literal, and (2) systematically listed
lexical terms which triggered their decisions in (1). Our results indicate that
metaphorical discourses are more emotional and to a certain extent more
abstract than literal discourses. However, neither the metaphoricity nor the
abstractness and emotionality of the preceding discourse seem to play a role in
triggering the choice between synonymous metaphorical vs. literal expressions.
Our dataset is available at
https://www.ims.uni-stuttgart.de/data/discourse-met-lit.
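
As a rough illustration of the first research question, the sketch below shows one way discourse-level abstractness (and, analogously, emotionality) could be approximated by averaging word-level norm ratings and then compared between metaphorically- and literally-perceived discourses. This is a minimal sketch under assumed inputs: the file names, the column names (`discourse`, `label`), and the two-column norm lexicons are hypothetical and do not reflect the released dataset's actual format or the authors' feature pipeline.

```python
import csv
from statistics import mean

from scipy import stats  # simple two-sample comparison


def load_norms(path):
    """Load word-level ratings (e.g., abstractness or emotionality norms)
    as a dict mapping word -> float score. Assumes a two-column TSV."""
    with open(path, encoding="utf-8") as f:
        return {row[0].lower(): float(row[1]) for row in csv.reader(f, delimiter="\t")}


def discourse_score(text, norms):
    """Average the word-level ratings over all words of a discourse
    that are covered by the norm lexicon."""
    scores = [norms[w] for w in text.lower().split() if w in norms]
    return mean(scores) if scores else None


# Hypothetical resource and dataset files (not the released format).
abstractness = load_norms("abstractness_norms.tsv")

met_scores, lit_scores = [], []
with open("discourse-met-lit.tsv", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        score = discourse_score(row["discourse"], abstractness)
        if score is None:
            continue
        # "label" stands in for the crowdsourced majority judgement:
        # metaphorically- vs. literally-perceived discourse.
        (met_scores if row["label"] == "metaphorical" else lit_scores).append(score)

# Welch's t-test on mean abstractness; the same comparison applies to emotionality norms.
t, p = stats.ttest_ind(met_scores, lit_scores, equal_var=False)
print(f"metaphorical mean={mean(met_scores):.3f}  "
      f"literal mean={mean(lit_scores):.3f}  t={t:.2f}  p={p:.4f}")
```

Any word-level abstractness or emotion norm lexicon with the assumed two-column layout could be plugged into `load_norms`; the aggregation-by-averaging step is the only modelling choice the sketch commits to.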
Related papers
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
- AffectEcho: Speaker Independent and Language-Agnostic Emotion and Affect Transfer for Speech Synthesis [13.918119853846838]
Affect is an emotional characteristic encompassing valence, arousal, and intensity, and is a crucial attribute for enabling authentic conversations.
We propose AffectEcho, an emotion translation model, that uses a Vector Quantized codebook to model emotions within a quantized space.
We demonstrate the effectiveness of our approach in controlling the emotions of generated speech while preserving identity, style, and emotional cadence unique to each speaker.
arXiv Detail & Related papers (2023-08-16T06:28:29Z)
- The Secret of Metaphor on Expressing Stronger Emotion [16.381658893164538]
This paper conducts the first study in exploring how metaphors convey stronger emotion than their literal counterparts.
The greater specificity of metaphors may be one reason for their superiority in expressing emotion.
In addition, we observe that specificity is crucial in literal language as well: literal language can also express stronger emotion when made more specific.
arXiv Detail & Related papers (2023-01-30T16:36:02Z)
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
- What Drives the Use of Metaphorical Language? Negative Insights from Abstractness, Affect, Discourse Coherence and Contextualized Word Representations [13.622570558506265]
Given a specific discourse, which discourse properties trigger the use of metaphorical language, rather than using literal alternatives?
Many NLP approaches to metaphorical language rely on cognitive and (psycho-)linguistic insights and have successfully defined models of discourse coherence, abstractness and affect.
In this work, we build five simple models relying on established cognitive and linguistic properties to predict the use of a metaphorical vs. synonymous literal expression in context.
arXiv Detail & Related papers (2022-05-23T08:08:53Z)
- Detecting Emotion Carriers by Combining Acoustic and Lexical Representations [7.225325393598648]
We focus on Emotion Carriers (ECs), defined as the segments that best explain the emotional state of the narrator.
ECs can provide a richer representation of the user state to improve natural language understanding.
We leverage word-based acoustic and textual embeddings as well as early and late fusion techniques for the detection of ECs in spoken narratives.
arXiv Detail & Related papers (2021-12-13T12:39:53Z)
- Textless Speech Emotion Conversion using Decomposed and Discrete Representations [49.55101900501656]
We decompose speech into discrete and disentangled learned representations, consisting of content units, F0, speaker, and emotion.
First, we modify the speech content by translating the content units to a target emotion, and then predict the prosodic features based on these units.
Finally, the speech waveform is generated by feeding the predicted representations into a neural vocoder.
arXiv Detail & Related papers (2021-11-14T18:16:42Z)
- On the Impact of Temporal Representations on Metaphor Detection [1.6959319157216468]
State-of-the-art approaches for metaphor detection compare a word's literal, or core, meaning with its contextual meaning, using sequential metaphor classifiers based on neural networks.
This study examines the metaphor detection task with a detailed exploratory analysis where different temporal and static word embeddings are used to account for different representations of literal meanings.
Results suggest that different word embeddings do impact the metaphor detection task, and that some temporal word embeddings slightly outperform static methods on some performance measures.
arXiv Detail & Related papers (2021-11-05T08:43:21Z)
- Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes [50.569762345799354]
We argue that two issues must be tackled at the same time: (i) identifying which word is the cause for the other's emotion from his or her utterance and (ii) reflecting those specific words in the response generation.
Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with no word-level label.
arXiv Detail & Related papers (2021-09-18T04:22:49Z)
- It's not Rocket Science : Interpreting Figurative Language in Narratives [48.84507467131819]
We study the interpretation of two types of non-compositional figurative language: idioms and similes.
Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks.
We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language.
arXiv Detail & Related papers (2021-08-31T21:46:35Z)
- Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training [91.95855310211176]
Emotional voice conversion aims to change the emotional state of an utterance while preserving the linguistic content and speaker identity.
We propose a novel 2-stage training strategy for sequence-to-sequence emotional voice conversion with a limited amount of emotional speech data.
The proposed framework can perform both spectrum and prosody conversion and achieves significant improvement over the state-of-the-art baselines in both objective and subjective evaluation.
arXiv Detail & Related papers (2021-03-31T04:56:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.