Metaphor Detection via Explicit Basic Meanings Modelling
- URL: http://arxiv.org/abs/2305.17268v1
- Date: Fri, 26 May 2023 21:25:05 GMT
- Title: Metaphor Detection via Explicit Basic Meanings Modelling
- Authors: Yucheng Li, Shun Wang, Chenghua Lin, Frank Guerin
- Abstract summary: We propose a novel metaphor detection method, which models the basic meaning of the word based on literal annotation from the training set.
Empirical results show that our method significantly outperforms the state-of-the-art method by 1.0% in F1 score.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One noticeable trend in metaphor detection is the embrace of linguistic
theories such as the metaphor identification procedure (MIP) for model
architecture design. While MIP clearly defines that the metaphoricity of a
lexical unit is determined based on the contrast between its "contextual
meaning" and its "basic meaning", existing work does not strictly follow
this principle, typically using the "aggregated meaning" to approximate
the basic meaning of target words. In this paper, we propose a novel metaphor
detection method, which models the basic meaning of a word based on literal
annotations from the training set, and then compares this with the contextual
meaning in a target sentence to identify metaphors. Empirical results show that
our method significantly outperforms the state-of-the-art method by 1.0% in F1
score. Moreover, our performance even reaches the theoretical upper bound on
the VUA18 benchmark for targets with basic annotations, which demonstrates the
importance of modelling basic meanings for metaphor detection.
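The MIP-style contrast the abstract describes can be pictured with a minimal, illustrative sketch: derive a basic-meaning vector by averaging a word's literally-annotated uses, then flag a new use as metaphoric when its contextual vector diverges from that basic meaning. This is not the authors' actual model; the toy vectors, the cosine-similarity test, the threshold value, and all function names here are hypothetical stand-ins.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def basic_meaning(literal_vectors):
    """Average the contextual vectors of a word's literally-annotated uses.

    A stand-in for the paper's idea of modelling the basic meaning from
    literal annotations in the training set."""
    dim = len(literal_vectors[0])
    return [sum(v[i] for v in literal_vectors) / len(literal_vectors)
            for i in range(dim)]

def is_metaphoric(contextual_vector, basic_vector, threshold=0.5):
    """Flag a use as metaphoric when its contextual meaning diverges
    from the basic meaning (cosine similarity below the threshold)."""
    return cosine(contextual_vector, basic_vector) < threshold

# Toy 3-d vectors for a verb like "attack": literal uses cluster together,
# while a figurative use ("attack a problem") points elsewhere.
literal_uses = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
basic = basic_meaning(literal_uses)
print(is_metaphoric([0.85, 0.15, 0.05], basic))  # literal-like use -> False
print(is_metaphoric([0.1, 0.2, 0.9], basic))     # divergent use -> True
```

In a real system the vectors would come from a contextualized encoder rather than hand-set values, and the decision would be learned rather than thresholded.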
Related papers
- An Expectation-Realization Model for Metaphor Detection [3.249167201995207]
We propose a metaphor detection architecture that is structured around two main modules.
An expectation component estimates representations of literal word expectations given a context, and a realization component computes representations of actual word meanings in context.
The overall architecture is trained to learn expectation-realization patterns that characterize metaphorical uses of words.
arXiv Detail & Related papers (2023-11-07T13:03:54Z)
- "You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation [60.863629647985526]
We examine the successes and limitations of the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaning structure.
We find that models can reliably reproduce the basic format of AMR, and can often capture core event, argument, and modifier structure.
Overall, our findings indicate that these models out-of-the-box can capture aspects of semantic structure, but there remain key limitations in their ability to support fully accurate semantic analyses or parses.
arXiv Detail & Related papers (2023-10-26T21:47:59Z)
- ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure [1.03590082373586]
We present a RoBERTa-based metaphor detection model that integrates the Metaphor Identification Procedure (MIP) and Word Sense Disambiguation (WSD).
By utilizing the word senses derived from a WSD model, our model enhances the metaphor detection process and outperforms other methods.
We evaluate our approach on various benchmark datasets and compare it with strong baselines, demonstrating its effectiveness in advancing metaphor detection.
arXiv Detail & Related papers (2023-09-06T15:41:38Z)
- Metaphorical Polysemy Detection: Conventional Metaphor meets Word Sense Disambiguation [9.860944032009847]
Linguists distinguish between novel and conventional metaphor, a distinction which the metaphor detection task in NLP does not take into account.
In this paper, we investigate the limitations of treating conventional metaphors in this way.
We develop the first metaphorical polysemy detection (MPD) model, which learns to identify conventional metaphors in the English WordNet.
arXiv Detail & Related papers (2022-12-16T10:39:22Z)
- On the Impact of Temporal Representations on Metaphor Detection [1.6959319157216468]
State-of-the-art approaches for metaphor detection compare a word's literal (or core) meaning with its contextual meaning, using sequential metaphor classifiers based on neural networks.
This study examines the metaphor detection task with a detailed exploratory analysis where different temporal and static word embeddings are used to account for different representations of literal meanings.
Results suggest that different word embeddings do impact the metaphor detection task, and some temporal word embeddings slightly outperform static methods on some performance measures.
arXiv Detail & Related papers (2021-11-05T08:43:21Z)
- Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z)
- MelBERT: Metaphor Detection via Contextualized Late Interaction using Metaphorical Identification Theories [5.625405679356158]
We propose a novel metaphor detection model, namely metaphor-aware late interaction over BERT (MelBERT).
Our model not only leverages contextualized word representation but also benefits from linguistic metaphor identification theories to distinguish between the contextual and literal meaning of words.
arXiv Detail & Related papers (2021-04-28T07:52:01Z)
- Understanding Synonymous Referring Expressions via Contrastive Features [105.36814858748285]
We develop an end-to-end trainable framework to learn contrastive features on the image and object instance levels.
We conduct extensive experiments to evaluate the proposed algorithm on several benchmark datasets.
arXiv Detail & Related papers (2021-04-20T17:56:24Z)
- Introducing Syntactic Structures into Target Opinion Word Extraction with Deep Learning [89.64620296557177]
We propose to incorporate the syntactic structures of the sentences into the deep learning models for targeted opinion word extraction.
We also introduce a novel regularization technique to improve the performance of the deep learning models.
The proposed model is extensively analyzed and achieves the state-of-the-art performance on four benchmark datasets.
arXiv Detail & Related papers (2020-10-26T07:13:17Z)
- Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results, as well as develop an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline is capable of producing accurate paraphrases, its outputs often lack metaphoricity.
Our metaphor masking model excels in generating metaphoric sentences while performing nearly as well with regard to fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)
- How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.