What Drives the Use of Metaphorical Language? Negative Insights from
Abstractness, Affect, Discourse Coherence and Contextualized Word
Representations
- URL: http://arxiv.org/abs/2205.11113v1
- Date: Mon, 23 May 2022 08:08:53 GMT
- Authors: Prisca Piccirilli and Sabine Schulte im Walde
- Abstract summary: Given a specific discourse, which discourse properties trigger the use of metaphorical language, rather than using literal alternatives?
Many NLP approaches to metaphorical language rely on cognitive and (psycho-)linguistic insights and have successfully defined models of discourse coherence, abstractness and affect.
In this work, we build five simple models relying on established cognitive and linguistic properties to predict the use of a metaphorical vs. synonymous literal expression in context.
- Score: 13.622570558506265
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Given a specific discourse, which discourse properties trigger the use of
metaphorical language, rather than using literal alternatives? For example,
what drives people to say "grasp the meaning" rather than "understand the
meaning" within a specific context? Many NLP approaches to metaphorical
language rely on cognitive and (psycho-)linguistic insights and have
successfully defined models of discourse coherence, abstractness and affect. In
this work, we build five simple models relying on established cognitive and
linguistic properties -- frequency, abstractness, affect, discourse coherence
and contextualized word representations -- to predict the use of a metaphorical
vs. synonymous literal expression in context. By comparing the models' outputs
to human judgments, our study indicates that our selected properties are not
sufficient to systematically explain metaphorical vs. literal language choices.
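As a purely illustrative sketch of the fifth model family, a decision rule over contextualized representations might compare each candidate expression's embedding against a vector for the surrounding discourse and pick the closer one. The toy vectors, function names, and candidate phrases below are hypothetical stand-ins (real contextualized embeddings would come from a BERT-style encoder); this is not the authors' actual implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def predict_expression(context_vec, candidates):
    """Pick the candidate expression whose (hypothetical) contextualized
    embedding is most similar to the discourse-context vector."""
    return max(candidates, key=lambda name: cosine(candidates[name], context_vec))

# Toy 4-d vectors standing in for real contextualized embeddings.
context = [0.9, 0.1, 0.3, 0.2]
candidates = {
    "grasp the meaning":      [0.8, 0.2, 0.4, 0.1],  # metaphorical variant
    "understand the meaning": [0.1, 0.9, 0.2, 0.6],  # literal variant
}
print(predict_expression(context, candidates))  # → grasp the meaning
```

The paper's negative result suggests that similarity scores of this kind, like the other four properties, do not systematically track human metaphorical-vs-literal choices.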
Related papers
- That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context? [64.38544995251642]
We study semantic ambiguities that exist in the source (English in this work) itself.
We focus on idioms that are open to both literal and figurative interpretations.
We find that current MT models consistently translate English idioms literally, even when the context suggests a figurative interpretation.
arXiv Detail & Related papers (2023-10-23T06:38:49Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- LMs stand their Ground: Investigating the Effect of Embodiment in Figurative Language Interpretation by Language Models [0.0]
Figurative language is a challenge for language models, since its interpretation deviates from the conventional order and meaning of the words involved.
Yet humans can easily understand and interpret metaphors, as many of them can be derived from embodied metaphors.
This study shows that larger language models perform better at interpreting metaphoric sentences when the action described in the metaphorical sentence is more embodied.
arXiv Detail & Related papers (2023-05-05T11:44:12Z)
- Shades of meaning: Uncovering the geometry of ambiguous word representations through contextualised language models [6.760960482418417]
Lexical ambiguity presents a profound and enduring challenge to the language sciences.
Our work offers new insight into psychological understanding of lexical ambiguity through a series of simulations.
arXiv Detail & Related papers (2023-04-26T14:47:38Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not well-represent natural language semantics.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models [91.3755431537592]
Representing compositional and non-compositional phrases is critical for language understanding.
We first formulate a problem of predicting the LM-internal representations of longer phrases given those of their constituents.
While we would expect the predictive accuracy to correlate with human judgments of semantic compositionality, we find this is largely not the case.
arXiv Detail & Related papers (2022-10-07T14:21:30Z)
- Features of Perceived Metaphoricity on the Discourse Level: Abstractness and Emotionality [13.622570558506265]
Research on metaphorical language has shown ties between abstractness and emotionality with regard to metaphoricity.
This paper explores which textual and perceptual features human annotators perceive as important for the metaphoricity of discourse.
arXiv Detail & Related papers (2022-05-18T14:09:10Z)
- Testing the Ability of Language Models to Interpret Figurative Language [69.59943454934799]
Figurative and metaphorical language are commonplace in discourse.
It remains an open question to what extent modern language models can interpret nonliteral phrases.
We introduce Fig-QA, a Winograd-style nonliteral language understanding task.
arXiv Detail & Related papers (2022-04-26T23:42:22Z)
- On the Impact of Temporal Representations on Metaphor Detection [1.6959319157216468]
State-of-the-art approaches to metaphor detection compare a word's literal, or core, meaning with its contextual meaning, using sequential metaphor classifiers based on neural networks.
This study examines the metaphor detection task with a detailed exploratory analysis where different temporal and static word embeddings are used to account for different representations of literal meanings.
Results suggest that different word embeddings do impact the metaphor detection task, and that some temporal word embeddings slightly outperform static methods on some performance measures.
arXiv Detail & Related papers (2021-11-05T08:43:21Z)
- It's not Rocket Science: Interpreting Figurative Language in Narratives [48.84507467131819]
We study the interpretation of two types of non-compositional figurative language: idioms and similes.
Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks.
We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language.
arXiv Detail & Related papers (2021-08-31T21:46:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.