Verifying Claims About Metaphors with Large-Scale Automatic Metaphor Identification
- URL: http://arxiv.org/abs/2404.01029v1
- Date: Mon, 1 Apr 2024 10:17:45 GMT
- Title: Verifying Claims About Metaphors with Large-Scale Automatic Metaphor Identification
- Authors: Kotaro Aono, Ryohei Sasano, Koichi Takeda
- Abstract summary: This study entails a large-scale, corpus-based analysis of certain existing claims about verb metaphors, by applying metaphor detection to sentences extracted from Common Crawl.
The verification results indicate that the direct objects of verbs used as metaphors tend to have lower degrees of concreteness, imageability, and familiarity, and that metaphors are more likely to be used in emotional and subjective sentences.
- Score: 14.143299702954023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are several linguistic claims about situations where words are more likely to be used as metaphors. However, few studies have sought to verify such claims with large corpora. This study entails a large-scale, corpus-based analysis of certain existing claims about verb metaphors, by applying metaphor detection to sentences extracted from Common Crawl and using the statistics obtained from the results. The verification results indicate that the direct objects of verbs used as metaphors tend to have lower degrees of concreteness, imageability, and familiarity, and that metaphors are more likely to be used in emotional and subjective sentences.
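The abstract describes a simple pipeline: run automatic metaphor identification over sentences extracted from Common Crawl, then compare psycholinguistic norm ratings (concreteness, imageability, familiarity) of the direct objects of metaphorically versus literally used verbs. The following Python sketch illustrates that kind of comparison under stated assumptions; the detector, the norm lexicon, and the example ratings are hypothetical placeholders, not the authors' actual code or data.
```python
# Minimal sketch of a corpus-based check like the one described in the abstract.
# NOT the authors' pipeline: the detector, the norm lexicon, and the ratings
# below are hypothetical placeholders.
from statistics import mean

# Hypothetical psycholinguistic norms: noun -> concreteness rating (e.g. 1-7 scale).
CONCRETENESS = {"wall": 6.4, "idea": 1.9, "door": 6.3, "hope": 2.0}

def detect_verb_objects(sentence):
    """Placeholder for an automatic metaphor identification model.

    Assumed to return (verb, direct_object, is_metaphor) triples for the sentence.
    """
    raise NotImplementedError("plug in a real metaphor detection model here")

def compare_object_concreteness(sentences, detector=detect_verb_objects, norms=CONCRETENESS):
    """Compare mean concreteness of direct objects of metaphorical vs. literal verb uses."""
    metaphorical, literal = [], []
    for sentence in sentences:
        for verb, obj, is_metaphor in detector(sentence):
            rating = norms.get(obj)
            if rating is None:
                continue  # skip objects without a norm rating
            (metaphorical if is_metaphor else literal).append(rating)
    return {
        "metaphorical_mean": mean(metaphorical) if metaphorical else None,
        "literal_mean": mean(literal) if literal else None,
    }
```
The same skeleton applies to imageability and familiarity by swapping in a different norm lexicon; the paper's finding corresponds to the metaphorical mean being lower than the literal mean.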
Related papers
- Finding Challenging Metaphors that Confuse Pretrained Language Models [21.553915781660905]
It remains unclear what types of metaphors challenge current state-of-the-art NLP models.
To identify hard metaphors, we propose an automatic pipeline that identifies metaphors that challenge a particular model.
Our analysis demonstrates that our detected hard metaphors contrast significantly with VUA and reduce the accuracy of machine translation by 16%.
arXiv Detail & Related papers (2024-01-29T10:00:54Z)
- That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context? [64.38544995251642]
We study semantic ambiguities that exist in the source (English in this work) itself.
We focus on idioms that are open to both literal and figurative interpretations.
We find that current MT models consistently translate English idioms literally, even when the context suggests a figurative interpretation.
arXiv Detail & Related papers (2023-10-23T06:38:49Z)
- LMs stand their Ground: Investigating the Effect of Embodiment in Figurative Language Interpretation by Language Models [0.0]
Figurative language is a challenge for language models since its interpretation deviates from the conventional order and meaning of words.
Yet, humans can easily understand and interpret metaphors, as their meanings can be derived from embodied metaphors.
This study shows how larger language models perform better at interpreting metaphoric sentences when the action of the metaphorical sentence is more embodied.
arXiv Detail & Related papers (2023-05-05T11:44:12Z)
- Neighboring Words Affect Human Interpretation of Saliency Explanations [65.29015910991261]
Word-level saliency explanations are often used to communicate feature-attribution in text-based models.
Recent studies found that superficial factors such as word length can distort human interpretation of the communicated saliency scores.
We investigate how the marking of a word's neighboring words affects the explainee's perception of the word's importance in the context of a saliency explanation.
arXiv Detail & Related papers (2023-05-04T09:50:25Z)
- The Secret of Metaphor on Expressing Stronger Emotion [16.381658893164538]
This paper conducts the first study exploring how metaphors convey stronger emotion than their literal counterparts.
The greater specificity of metaphors may be one reason for their superiority in emotion expression.
In addition, we observe that specificity matters in literal language as well: literal expressions can also convey stronger emotion when made more specific.
arXiv Detail & Related papers (2023-01-30T16:36:02Z)
- Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models [91.3755431537592]
Representing compositional and non-compositional phrases is critical for language understanding.
We first formulate a problem of predicting the LM-internal representations of longer phrases given those of their constituents.
While we would expect the predictive accuracy to correlate with human judgments of semantic compositionality, we find this is largely not the case.
arXiv Detail & Related papers (2022-10-07T14:21:30Z)
- Testing the Ability of Language Models to Interpret Figurative Language [69.59943454934799]
Figurative and metaphorical language are commonplace in discourse.
It remains an open question to what extent modern language models can interpret nonliteral phrases.
We introduce Fig-QA, a Winograd-style nonliteral language understanding task.
arXiv Detail & Related papers (2022-04-26T23:42:22Z)
- On the Impact of Temporal Representations on Metaphor Detection [1.6959319157216468]
State-of-the-art approaches for metaphor detection compare a word's literal, or core, meaning with its contextual meaning using sequential metaphor classifiers based on neural networks.
This study examines the metaphor detection task with a detailed exploratory analysis where different temporal and static word embeddings are used to account for different representations of literal meanings.
Results suggest that different word embeddings do impact the metaphor detection task, and that some temporal word embeddings slightly outperform static methods on some performance measures.
arXiv Detail & Related papers (2021-11-05T08:43:21Z)
- It's not Rocket Science: Interpreting Figurative Language in Narratives [48.84507467131819]
We study the interpretation of two types of non-compositional figurative language (idioms and similes).
Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks.
We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language.
arXiv Detail & Related papers (2021-08-31T21:46:35Z)
- How Metaphors Impact Political Discourse: A Large-Scale Topic-Agnostic Study Using Neural Metaphor Detection [29.55309950026882]
We present a large-scale data-driven study of metaphors used in political discourse.
We show that metaphor use correlates with ideological leanings in complex ways that depend on concurrent political events such as winning or losing elections.
We show that posts with metaphors elicit more engagement from their audience overall even after controlling for various socio-political factors such as gender and political party affiliation.
arXiv Detail & Related papers (2021-04-08T17:16:31Z)
- Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results, and we also develop an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline is capable of producing accurate paraphrases, its outputs often lack metaphoricity.
Our metaphor masking model excels at generating metaphoric sentences while performing nearly as well in terms of fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)