Metaphorical Polysemy Detection: Conventional Metaphor meets Word Sense
Disambiguation
- URL: http://arxiv.org/abs/2212.08395v1
- Date: Fri, 16 Dec 2022 10:39:22 GMT
- Title: Metaphorical Polysemy Detection: Conventional Metaphor meets Word Sense
Disambiguation
- Authors: Rowan Hall Maudslay and Simone Teufel
- Abstract summary: Linguists distinguish between novel and conventional metaphor, a distinction which the metaphor detection task in NLP does not take into account.
In this paper, we investigate the limitations of treating conventional metaphors in this way.
We develop the first MPD model, which learns to identify conventional metaphors in the English WordNet.
- Score: 9.860944032009847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linguists distinguish between novel and conventional metaphor, a distinction
which the metaphor detection task in NLP does not take into account. Instead,
metaphoricity is formulated as a property of a token in a sentence, regardless
of metaphor type. In this paper, we investigate the limitations of treating
conventional metaphors in this way, and advocate for an alternative which we
name 'metaphorical polysemy detection' (MPD). In MPD, only conventional
metaphoricity is treated, and it is formulated as a property of word senses in
a lexicon. We develop the first MPD model, which learns to identify
conventional metaphors in the English WordNet. To train it, we present a novel
training procedure that combines metaphor detection with word sense
disambiguation (WSD). For evaluation, we manually annotate metaphor in two
subsets of WordNet. Our model significantly outperforms a strong baseline based
on a state-of-the-art metaphor detection model, attaining an ROC-AUC score of
.78 (compared to .65) on one of the sets. Additionally, when paired with a WSD
model, our approach outperforms a state-of-the-art metaphor detection model at
identifying conventional metaphors in text (.659 F1 compared to .626).
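The ROC-AUC figures reported above measure how well a scorer ranks metaphorical senses above literal ones. As a minimal pure-Python sketch (the labels and scores below are illustrative, not the paper's data), ROC-AUC can be computed via its rank-sum (Mann-Whitney U) formulation:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the rank-sum formulation: the probability that a
    randomly chosen positive outranks a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: scores a hypothetical MPD model might assign to word senses,
# with 1 = conventional metaphor, 0 = literal sense.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]
print(roc_auc(labels, scores))  # -> 0.888... (8 of 9 positive/negative pairs ranked correctly)
```

A perfect ranker scores 1.0 and a random one 0.5, which is why the paper's .78 vs .65 gap is substantial.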
Related papers
- Unveiling the Invisible: Captioning Videos with Metaphors [43.53477124719281]
We introduce a new Vision-Language (VL) task of describing the metaphors present in videos.
To facilitate this novel task, we construct and release a dataset with 705 videos and 2115 human-written captions.
We also propose a novel low-resource video metaphor captioning system: GIT-LLaVA, which obtains comparable performance to SoTA video language models on the proposed task.
arXiv Detail & Related papers (2024-06-07T12:32:44Z)
- Finding Challenging Metaphors that Confuse Pretrained Language Models [21.553915781660905]
It remains unclear what types of metaphors challenge current state-of-the-art NLP models.
To identify hard metaphors, we propose an automatic pipeline that identifies metaphors that challenge a particular model.
Our analysis demonstrates that our detected hard metaphors contrast significantly with VUA and reduce the accuracy of machine translation by 16%.
arXiv Detail & Related papers (2024-01-29T10:00:54Z)
- ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure [1.03590082373586]
We present a RoBERTa-based metaphor detection model that integrates the Metaphor Identification Procedure (MIP) and Word Sense Disambiguation (WSD).
By utilizing the word senses derived from a WSD model, our model enhances the metaphor detection process and outperforms other methods.
We evaluate our approach on various benchmark datasets and compare it with strong baselines, demonstrating its effectiveness in advancing metaphor detection.
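The MIP logic that ContrastWSD builds on can be sketched in a few lines: a token is flagged as metaphorical when its contextual sense differs from the word's basic (most concrete) sense. The tiny sense inventory and the cue-based sense-selection rule below are illustrative stand-ins for a real WSD model and lexicon, not the paper's implementation:

```python
# Hypothetical "basic meaning" (most concrete sense) per lemma.
BASIC_SENSE = {
    "grasp": "hold_physically",
    "bright": "emitting_light",
}

def contextual_sense(lemma, context):
    # Stand-in for a WSD model: pick a sense from crude context cues.
    abstract_cues = {"idea", "concept", "student", "future"}
    if set(context) & abstract_cues:
        return {"grasp": "understand", "bright": "intelligent"}.get(lemma)
    return BASIC_SENSE.get(lemma)

def is_metaphorical(lemma, context):
    # MIP: metaphorical iff the contextual sense departs from the basic sense.
    return contextual_sense(lemma, context) != BASIC_SENSE[lemma]

print(is_metaphorical("grasp", ["she", "grasped", "the", "idea"]))  # True
print(is_metaphorical("grasp", ["she", "grasped", "the", "rope"]))  # False
```

In the real system, both the basic and contextual meanings come from learned sense representations rather than hand-written cues.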
arXiv Detail & Related papers (2023-09-06T15:41:38Z)
- Metaphor Detection via Explicit Basic Meanings Modelling [12.096691826237114]
We propose a novel metaphor detection method, which models the basic meaning of the word based on literal annotation from the training set.
Empirical results show that our method outperforms the state-of-the-art method significantly by 1.0% in F1 score.
arXiv Detail & Related papers (2023-05-26T21:25:05Z)
- Testing the Ability of Language Models to Interpret Figurative Language [69.59943454934799]
Figurative and metaphorical language are commonplace in discourse.
It remains an open question to what extent modern language models can interpret nonliteral phrases.
We introduce Fig-QA, a Winograd-style nonliteral language understanding task.
arXiv Detail & Related papers (2022-04-26T23:42:22Z)
- It's not Rocket Science: Interpreting Figurative Language in Narratives [48.84507467131819]
We study the interpretation of two types of non-compositional figurative language (idioms and similes).
Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks.
We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language.
arXiv Detail & Related papers (2021-08-31T21:46:35Z)
- Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z)
- MelBERT: Metaphor Detection via Contextualized Late Interaction using Metaphorical Identification Theories [5.625405679356158]
We propose a novel metaphor detection model, namely metaphor-aware late interaction over BERT (MelBERT).
Our model not only leverages contextualized word representation but also benefits from linguistic metaphor identification theories to distinguish between the contextual and literal meaning of words.
arXiv Detail & Related papers (2021-04-28T07:52:01Z)
- MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding [22.756157298168127]
Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus.
For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model fine-tuned on our parallel data.
A task-based evaluation shows that human-written poems enhanced with metaphors are preferred 68% of the time compared to poems without metaphors.
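Discriminator-guided decoding of the kind MERMAID uses can be illustrated with a simple re-ranking sketch: candidate outputs are scored by a weighted combination of the generator's own (log-)score and a metaphoricity discriminator's score. Both scorers below are hypothetical stand-ins, and the real system applies this guidance during beam search rather than as a final re-rank:

```python
def generator_score(sentence):
    # Stand-in for a seq2seq model's log-probability (shorter = more fluent here).
    return -0.1 * len(sentence.split())

def metaphor_score(sentence):
    # Stand-in discriminator: rewards a hand-picked metaphorical cue word.
    return 1.0 if "drowning" in sentence else 0.0

def rerank(candidates, weight=0.5):
    # Pick the candidate maximising the weighted combined score.
    return max(candidates,
               key=lambda s: (1 - weight) * generator_score(s)
                             + weight * metaphor_score(s))

candidates = [
    "The poet was very sad",
    "The poet was drowning in sorrow",
]
print(rerank(candidates))  # -> "The poet was drowning in sorrow"
```

Setting `weight=0` recovers pure generator ranking, which here prefers the literal sentence; the discriminator term is what pushes decoding toward metaphorical phrasing.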
arXiv Detail & Related papers (2021-03-11T16:39:19Z)
- Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation [65.22565071742528]
Figurative language such as similes goes beyond plain expressions, giving readers new insights and inspirations.
Generating a simile requires proper understanding for effective mapping of properties between two concepts.
We show how replacing literal sentences with similes from our best model in machine generated stories improves evocativeness and leads to better acceptance by human judges.
arXiv Detail & Related papers (2020-09-18T17:37:13Z)
- Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results, as well as developing an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline is capable of producing accurate paraphrases, its outputs often lack metaphoricity.
Our metaphor masking model excels in generating metaphoric sentences while performing nearly as well with regard to fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.