ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure
- URL: http://arxiv.org/abs/2309.03103v2
- Date: Sat, 23 Mar 2024 03:15:42 GMT
- Title: ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure
- Authors: Mohamad Elzohbi, Richard Zhao
- Abstract summary: We present a RoBERTa-based metaphor detection model that integrates the Metaphor Identification Procedure (MIP) and Word Sense Disambiguation (WSD).
By utilizing the word senses derived from a WSD model, our model enhances the metaphor detection process and outperforms other methods.
We evaluate our approach on various benchmark datasets and compare it with strong baselines, demonstrating its effectiveness in advancing metaphor detection.
- Score: 1.03590082373586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents ContrastWSD, a RoBERTa-based metaphor detection model that integrates the Metaphor Identification Procedure (MIP) and Word Sense Disambiguation (WSD) to extract and contrast the contextual meaning with the basic meaning of a word to determine whether it is used metaphorically in a sentence. By utilizing the word senses derived from a WSD model, our model enhances the metaphor detection process and outperforms other methods that rely solely on contextual embeddings or integrate only the basic definitions and other external knowledge. We evaluate our approach on various benchmark datasets and compare it with strong baselines, demonstrating its effectiveness in advancing metaphor detection.
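The paper's own implementation is not reproduced here, but the contrast the abstract describes (an MIP-style comparison of a word's contextual meaning against its basic meaning, with the basic meaning taken from word senses) can be illustrated with a minimal Python sketch. The choice of `roberta-base`, the use of the most frequent WordNet sense as a stand-in for a full WSD model, and the cosine-similarity score are simplifying assumptions for illustration, not the authors' architecture.

```python
# Requires: pip install torch transformers nltk, plus nltk.download("wordnet").
import torch
from nltk.corpus import wordnet as wn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")
encoder.eval()


def embed(text: str) -> torch.Tensor:
    """Mean-pool RoBERTa's last hidden states into a single vector."""
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)


def basic_sense_gloss(word: str) -> str:
    """Crude stand-in for a WSD model: gloss of the most frequent WordNet sense."""
    synsets = wn.synsets(word)
    return synsets[0].definition() if synsets else word


def metaphoricity_score(sentence: str, target: str) -> float:
    """Contrast the contextual meaning (target word in its sentence) with the
    basic meaning (its WordNet gloss); lower similarity hints at metaphor."""
    contextual = embed(sentence)
    basic = embed(f"{target}: {basic_sense_gloss(target)}")
    return 1.0 - torch.cosine_similarity(contextual, basic, dim=0).item()


# "devoured" is used metaphorically here; the score should be relatively high.
print(metaphoricity_score("She devoured the novel in one sitting.", "devoured"))
```

ContrastWSD itself feeds the contextual and sense-derived representations into a trained RoBERTa-based classifier rather than thresholding a raw similarity score; the sketch only shows where the two meanings come from and how they can be contrasted.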
Related papers
- Conjuring Semantic Similarity [59.18714889874088]
The semantic similarity between two textual expressions measures the distance between their latent 'meaning'.
We propose a novel approach whereby the semantic similarity among textual expressions is based not on other expressions they can be rephrased as, but rather based on the imagery they evoke.
Our method contributes a novel perspective on semantic similarity that not only aligns with human-annotated scores, but also opens up new avenues for the evaluation of text-conditioned generative models.
arXiv Detail & Related papers (2024-10-21T18:51:34Z)
- Metaphor Detection via Explicit Basic Meanings Modelling [12.096691826237114]
We propose a novel metaphor detection method, which models the basic meaning of the word based on literal annotation from the training set.
Empirical results show that our method outperforms the state-of-the-art method significantly by 1.0% in F1 score.
arXiv Detail & Related papers (2023-05-26T21:25:05Z)
- Metaphorical Polysemy Detection: Conventional Metaphor meets Word Sense Disambiguation [9.860944032009847]
Linguists distinguish between novel and conventional metaphor, a distinction which the metaphor detection task in NLP does not take into account.
In this paper, we investigate the limitations of treating conventional metaphors in this way.
We develop the first metaphorical polysemy detection (MPD) model, which learns to identify conventional metaphors in the English WordNet.
arXiv Detail & Related papers (2022-12-16T10:39:22Z)
- On the Impact of Temporal Representations on Metaphor Detection [1.6959319157216468]
State-of-the-art approaches for metaphor detection compare a word's literal - or core - meaning with its contextual meaning using sequential metaphor classifiers based on neural networks.
This study examines the metaphor detection task with a detailed exploratory analysis where different temporal and static word embeddings are used to account for different representations of literal meanings.
Results suggest that the choice of word embeddings does affect the metaphor detection task, and some temporal word embeddings slightly outperform static methods on some performance measures.
arXiv Detail & Related papers (2021-11-05T08:43:21Z)
- Meta-Learning with Variational Semantic Memory for Word Sense Disambiguation [56.830395467247016]
We propose a model of semantic memory for WSD in a meta-learning setting.
Our model is based on hierarchical variational inference and incorporates an adaptive memory update rule via a hypernetwork.
We show that our model advances the state of the art in few-shot WSD and supports effective learning in extremely data-scarce scenarios.
arXiv Detail & Related papers (2021-06-05T20:40:01Z)
- Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z)
- MelBERT: Metaphor Detection via Contextualized Late Interaction using Metaphorical Identification Theories [5.625405679356158]
We propose a novel metaphor detection model, namely metaphor-aware late interaction over BERT (MelBERT).
Our model not only leverages contextualized word representation but also benefits from linguistic metaphor identification theories to distinguish between the contextual and literal meaning of words.
arXiv Detail & Related papers (2021-04-28T07:52:01Z)
- Understanding Synonymous Referring Expressions via Contrastive Features [105.36814858748285]
We develop an end-to-end trainable framework to learn contrastive features on the image and object instance levels.
We conduct extensive experiments to evaluate the proposed algorithm on several benchmark datasets.
arXiv Detail & Related papers (2021-04-20T17:56:24Z)
- Introducing Syntactic Structures into Target Opinion Word Extraction with Deep Learning [89.64620296557177]
We propose to incorporate the syntactic structures of the sentences into the deep learning models for targeted opinion word extraction.
We also introduce a novel regularization technique to improve the performance of the deep learning models.
The proposed model is extensively analyzed and achieves the state-of-the-art performance on four benchmark datasets.
arXiv Detail & Related papers (2020-10-26T07:13:17Z)
- Metaphor Detection using Deep Contextualized Word Embeddings [0.0]
We present an end-to-end method composed of deep contextualized word embeddings, bidirectional LSTMs and a multi-head attention mechanism (a minimal sketch of this kind of pipeline appears after this list).
Our method requires only the raw text sequences as input features to detect the metaphoricity of a phrase.
arXiv Detail & Related papers (2020-09-26T11:00:35Z)
- Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results, and we also develop an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline is capable of producing accurate paraphrases, these paraphrases often lack metaphoricity.
Our metaphor masking model excels in generating metaphoric sentences while performing nearly as well with regard to fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)
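To make the "deep contextualized word embeddings, bidirectional LSTMs and multi-head attention" pipeline referenced in the list above more concrete, here is a minimal PyTorch sketch of a per-token metaphor tagger in that spirit. The module name, layer sizes, and the random tensor standing in for real contextual embeddings are illustrative assumptions, not the cited paper's implementation.

```python
import torch
import torch.nn as nn


class BiLSTMAttentionMetaphorTagger(nn.Module):
    """Illustrative sketch: contextualized word embeddings -> BiLSTM ->
    multi-head self-attention -> per-token metaphor/literal classification."""

    def __init__(self, embed_dim: int = 768, hidden_dim: int = 256,
                 num_heads: int = 4, num_labels: int = 2):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.attention = nn.MultiheadAttention(2 * hidden_dim, num_heads,
                                               batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, embed_dim), e.g. from ELMo/BERT/RoBERTa.
        states, _ = self.bilstm(embeddings)
        attended, _ = self.attention(states, states, states)
        return self.classifier(attended)  # (batch, seq_len, num_labels)


# Toy usage with random tensors standing in for real contextual embeddings.
logits = BiLSTMAttentionMetaphorTagger()(torch.randn(2, 12, 768))
print(logits.shape)  # torch.Size([2, 12, 2])
```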
This list is automatically generated from the titles and abstracts of the papers on this site.