LMs stand their Ground: Investigating the Effect of Embodiment in
Figurative Language Interpretation by Language Models
- URL: http://arxiv.org/abs/2305.03445v4
- Date: Thu, 15 Jun 2023 12:33:37 GMT
- Title: LMs stand their Ground: Investigating the Effect of Embodiment in
Figurative Language Interpretation by Language Models
- Authors: Philipp Wicke
- Abstract summary: Figurative language is a challenge for language models since its interpretation relies on words used in ways that deviate from their conventional order and meaning.
Yet, humans can easily understand and interpret metaphors, as these can be derived from embodied metaphors.
This study shows that larger language models perform better at interpreting metaphoric sentences when the action described by the sentence is more embodied.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Figurative language is a challenge for language models since its
interpretation is based on the use of words in a way that deviates from their
conventional order and meaning. Yet, humans can easily understand and interpret
metaphors, similes, or idioms, as these can be derived from embodied metaphors.
Language is a proxy for embodiment, and if a metaphor is conventional and
lexicalised, it becomes easier for a system without a body to make sense of
embodied concepts. However, the intricate relation between embodiment and features
such as concreteness or age of acquisition has not been studied in the context
of figurative language interpretation concerning language models. Hence, the
presented study shows how larger language models perform better at interpreting
metaphoric sentences when the action of the metaphorical sentence is more
embodied. The analysis rules out multicollinearity with other features (e.g.
word length or concreteness) and provides initial evidence that larger language
models conceptualise embodied concepts to a degree that facilitates figurative
language understanding.
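
The analysis described above can be pictured as a two-step check: compute variance inflation factors (VIF) to rule out collinearity among the predictors, then regress interpretation accuracy on embodiment while controlling for the other features. A minimal sketch under that reading, with invented column names and values rather than the paper's data:

```python
# Sketch: check multicollinearity between embodiment and other lexical
# features, then regress LM interpretation accuracy on embodiment.
# All column names and values are invented for illustration.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.DataFrame({
    "embodiment":   [4.2, 1.8, 2.7, 3.3, 4.8, 1.2, 3.1, 2.2],
    "concreteness": [2.5, 3.8, 2.0, 3.5, 2.9, 3.1, 2.2, 3.6],
    "word_length":  [6, 7, 5, 9, 6, 8, 4, 7],
    "accuracy":     [0.90, 0.40, 0.70, 0.50, 0.95, 0.30, 0.80, 0.45],
})

# VIF well above ~5 would signal problematic collinearity among predictors.
X = sm.add_constant(df[["embodiment", "concreteness", "word_length"]])
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X.values, i), 2))

# If collinearity is ruled out, regress accuracy on the predictors and
# inspect whether embodiment keeps a significant positive coefficient.
print(sm.OLS(df["accuracy"], X).fit().summary())
```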
Related papers
- Science is Exploration: Computational Frontiers for Conceptual Metaphor Theory [0.0]
We show that Large Language Models (LLMs) can accurately identify and explain the presence of conceptual metaphors in natural language data.
Using a novel prompting technique based on metaphor annotation guidelines, we demonstrate that LLMs are a promising tool for large-scale computational research on conceptual metaphors.
arXiv Detail & Related papers (2024-10-11T17:03:13Z)
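
For the guideline-based prompting approach above, a rough sketch of how such a prompt might be composed; the guideline text, example sentence, and output format are invented stand-ins, not the authors' actual protocol, and the LLM call itself is left abstract:

```python
# Sketch of a guideline-based metaphor-identification prompt.
# The guideline text and the example are illustrative stand-ins, not
# the authors' actual annotation guidelines.

GUIDELINES = """\
A conceptual metaphor maps a concrete source domain onto an abstract
target domain (e.g. ARGUMENT IS WAR). For each sentence:
1. Decide whether any word is used non-literally.
2. If so, name the source and target domains of the underlying mapping.
"""

def build_prompt(sentence: str) -> str:
    """Compose a single annotation-style prompt for one sentence."""
    return (
        f"{GUIDELINES}\n"
        f"Sentence: {sentence}\n"
        "Answer with: metaphorical yes/no, source domain, target domain."
    )

print(build_prompt("She demolished his argument."))
# The resulting string would be sent to an LLM chat endpoint; the model
# and client are deliberately left out of this sketch.
```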
- Meta4XNLI: A Crosslingual Parallel Corpus for Metaphor Detection and Interpretation [6.0158981171030685]
We present a novel parallel dataset for the tasks of metaphor detection and interpretation that contains metaphor annotations in both Spanish and English.
We investigate language models' metaphor identification and understanding abilities through a series of monolingual and cross-lingual experiments.
arXiv Detail & Related papers (2024-04-10T14:44:48Z)
- Multi-lingual and Multi-cultural Figurative Language Understanding [69.47641938200817]
Figurative language permeates human communication, but is relatively understudied in NLP.
We create a dataset for seven diverse languages associated with a variety of cultures: Hindi, Indonesian, Javanese, Kannada, Sundanese, Swahili and Yoruba.
Our dataset reveals that each language relies on cultural and regional concepts for figurative expressions, with the highest overlap between languages originating from the same region.
Performance in all languages falls significantly short of English, with variation that reflects the availability of pre-training and fine-tuning data.
arXiv Detail & Related papers (2023-05-25T15:30:31Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity, where substituting co-referring expressions can change a sentence's truth conditions -- add to the growing body of evidence that current language models do not represent natural language semantics well.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- What Drives the Use of Metaphorical Language? Negative Insights from Abstractness, Affect, Discourse Coherence and Contextualized Word Representations [13.622570558506265]
Given a specific discourse, which discourse properties trigger the use of metaphorical language rather than literal alternatives?
Many NLP approaches to metaphorical language rely on cognitive and (psycho-)linguistic insights and have successfully defined models of discourse coherence, abstractness and affect.
In this work, we build five simple models relying on established cognitive and linguistic properties to predict the use of a metaphorical vs. synonymous literal expression in context.
arXiv Detail & Related papers (2022-05-23T08:08:53Z)
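
One of the five property-based models above could plausibly take the shape of a logistic classifier over discourse-level scores such as abstractness and affect. The sketch below uses invented features and values; note that the paper's own finding is that such predictors fare poorly ("negative insights"):

```python
# Sketch: predict metaphorical vs. literal word choice in context from
# cognitive/linguistic properties (abstractness, affect). The feature
# values are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [context_abstractness, context_valence]; label 1 = metaphorical.
X = [
    [0.80, 0.20], [0.70, 0.10], [0.90, 0.30], [0.60, 0.20],  # metaphor-leaning
    [0.20, 0.60], [0.30, 0.70], [0.10, 0.50], [0.25, 0.65],  # literal-leaning
]
y = [1, 1, 1, 1, 0, 0, 0, 0]

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[0.75, 0.20]]))  # [P(literal), P(metaphorical)]
```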
- Testing the Ability of Language Models to Interpret Figurative Language [69.59943454934799]
Figurative and metaphorical language are commonplace in discourse.
It remains an open question to what extent modern language models can interpret nonliteral phrases.
We introduce Fig-QA, a Winograd-style nonliteral language understanding task.
arXiv Detail & Related papers (2022-04-26T23:42:22Z)
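
The Winograd-style format pairs a creative simile or metaphor with candidate readings and asks the model to pick the one the phrase entails. A hypothetical item, invented for illustration rather than taken from Fig-QA:

```python
# Hypothetical Winograd-style nonliteral item in the spirit of Fig-QA;
# the sentences below are invented, not drawn from the actual dataset.
item = {
    "phrase_a": "Her promise was as solid as oak.",
    "phrase_b": "Her promise was as solid as smoke.",
    "readings": ["She kept her word.", "She did not keep her word."],
}

def gold_reading(phrase_key: str) -> str:
    """Map each simile variant to the reading it entails."""
    return item["readings"][0] if phrase_key == "phrase_a" else item["readings"][1]

print(gold_reading("phrase_b"))  # -> "She did not keep her word."
```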
- Interpreting Language Models with Contrastive Explanations [99.7035899290924]
Language models must consider various features to predict a token, such as its part of speech, number, tense, or semantics.
Existing explanation methods conflate evidence for all of these features into a single explanation, which makes them harder for humans to interpret.
We show that contrastive explanations are quantifiably better than non-contrastive explanations in verifying major grammatical phenomena.
arXiv Detail & Related papers (2022-02-21T18:32:24Z)
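
The contrastive idea, explaining why the model preferred a token over a specific foil rather than explaining the prediction in isolation, can be sketched as contrastive gradient saliency: differentiate the difference between the two logits with respect to the input embeddings. The model choice (gpt2), the example, and the saliency recipe below are a simplified illustration, not the paper's exact method:

```python
# Sketch: contrastive input saliency -- why did the LM prefer the target
# token over a specific foil? We differentiate (logit_target - logit_foil)
# with respect to the input embeddings. Model and recipe are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The chefs prepared the"
target, foil = " meal", " meals"  # a number contrast, one of the feature types above
t_id = tok(target)["input_ids"][0]
f_id = tok(foil)["input_ids"][0]

inputs = tok(text, return_tensors="pt")
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeds.requires_grad_(True)

logits = model(inputs_embeds=embeds).logits[0, -1]
(logits[t_id] - logits[f_id]).backward()

# Per-position contrastive saliency = gradient norm at each input token.
saliency = embeds.grad.norm(dim=-1).squeeze(0)
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, score in zip(tokens, saliency):
    print(f"{token:>12s} {score.item():.4f}")
```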
- It's not Rocket Science: Interpreting Figurative Language in Narratives [48.84507467131819]
We study the interpretation of two types of non-compositional figurative language (idioms and similes).
Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks.
We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language.
arXiv Detail & Related papers (2021-08-31T21:46:35Z)
- Probing Contextual Language Models for Common Ground with Visual Representations [76.05769268286038]
We design a probing model that evaluates how effective text-only representations are in distinguishing between matching and non-matching visual representations.
Our findings show that language representations alone provide a strong signal for retrieving image patches from the correct object categories.
Visually grounded language models slightly outperform text-only language models in instance retrieval, but greatly under-perform humans.
arXiv Detail & Related papers (2020-05-01T21:28:28Z)
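
The probing setup, checking whether text-only representations can pick out the matching visual patch among distractors, might be approximated with a small bilinear scorer; the dimensions, random inputs, and scorer below are placeholders rather than the paper's architecture:

```python
# Sketch: probe whether text-only representations can select the matching
# visual patch among distractors. Dimensions, data, and the bilinear
# scorer are placeholders, not the paper's exact setup.
import torch
import torch.nn as nn

TEXT_DIM, VIS_DIM = 768, 512

class BilinearProbe(nn.Module):
    """Scores (text, patch) pairs; trained so matching pairs score highest."""
    def __init__(self):
        super().__init__()
        self.W = nn.Linear(TEXT_DIM, VIS_DIM, bias=False)

    def forward(self, text_vec, patch_vecs):
        # text_vec: (TEXT_DIM,); patch_vecs: (n_patches, VIS_DIM)
        return patch_vecs @ self.W(text_vec)  # one score per patch

probe = BilinearProbe()
text_vec = torch.randn(TEXT_DIM)      # e.g. a frozen LM representation
patch_vecs = torch.randn(5, VIS_DIM)  # 1 matching + 4 distractor patches
scores = probe(text_vec, patch_vecs)
print("predicted match:", scores.argmax().item())
# (Training on matched/mismatched pairs is omitted from this sketch.)
```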