MelBERT: Metaphor Detection via Contextualized Late Interaction using
Metaphorical Identification Theories
- URL: http://arxiv.org/abs/2104.13615v1
- Date: Wed, 28 Apr 2021 07:52:01 GMT
- Title: MelBERT: Metaphor Detection via Contextualized Late Interaction using
Metaphorical Identification Theories
- Authors: Minjin Choi, Sunkyung Lee, Eunseong Choi, Heesoo Park, Junhyuk Lee,
Dongwon Lee, and Jongwuk Lee
- Abstract summary: We propose a novel metaphor detection model, namely metaphor-aware late interaction over BERT (MelBERT).
Our model not only leverages contextualized word representations but also benefits from linguistic metaphor identification theories to distinguish between the contextual and literal meanings of words.
- Score: 5.625405679356158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated metaphor detection is the challenging task of identifying
metaphorical expressions of words in a sentence. To tackle this problem, we
adopt pre-trained contextualized models, e.g., BERT and RoBERTa, and propose a
novel metaphor detection model, namely metaphor-aware late interaction over
BERT (MelBERT). Our model not only leverages contextualized word
representations but also benefits from linguistic metaphor identification
theories to distinguish between the contextual and literal meanings of words.
Our empirical results demonstrate that MelBERT outperforms several strong
baselines on four benchmark datasets, i.e., VUA-18, VUA-20, MOH-X, and TroFi.
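For intuition, the sketch below illustrates the late-interaction idea in the style of the Metaphor Identification Procedure (MIP): the target word's sentence-contextualized embedding is contrasted with an embedding of the word in isolation, and a small classifier judges whether the two meanings diverge. This is a minimal sketch assuming a Hugging Face RoBERTa encoder; the class name, pooling, and feature construction are illustrative simplifications, not the authors' exact architecture (MelBERT additionally employs a Selectional Preference Violation strategy and separate sentence- and target-level encodings).

```python
# Minimal sketch (not the authors' exact architecture): MIP-style late
# interaction that contrasts a target word's contextual embedding with its
# embedding in isolation. Assumes PyTorch and Hugging Face transformers;
# names and hyperparameters are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class LateInteractionMetaphorSketch(nn.Module):
    def __init__(self, model_name="roberta-base", hidden_size=768):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        # Classify from [contextual; isolated; |contextual - isolated|].
        self.classifier = nn.Sequential(
            nn.Linear(hidden_size * 3, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 2),  # logits: literal vs. metaphorical
        )

    def _contextual_vector(self, sentence, target):
        # Mean-pool the hidden states of the target word's sub-tokens
        # inside the full sentence (its meaning in context).
        enc = self.tokenizer(sentence, return_tensors="pt")
        states = self.encoder(**enc).last_hidden_state[0]
        target_ids = self.tokenizer(" " + target, add_special_tokens=False)["input_ids"]
        ids = enc["input_ids"][0].tolist()
        for i in range(len(ids) - len(target_ids) + 1):
            if ids[i:i + len(target_ids)] == target_ids:
                return states[i:i + len(target_ids)].mean(dim=0)
        return states[1:-1].mean(dim=0)  # fallback if sub-token match fails

    def _isolated_vector(self, target):
        # Encode the word on its own as a rough proxy for its basic,
        # context-free meaning.
        enc = self.tokenizer(target, return_tensors="pt")
        states = self.encoder(**enc).last_hidden_state[0]
        return states[1:-1].mean(dim=0)  # drop <s> and </s>

    def forward(self, sentence, target):
        v_ctx = self._contextual_vector(sentence, target)
        v_iso = self._isolated_vector(target)
        features = torch.cat([v_ctx, v_iso, (v_ctx - v_iso).abs()])
        return self.classifier(features)


# Usage: a large gap between the two views hints at metaphorical use.
model = LateInteractionMetaphorSketch()
logits = model("She devoured the novel in one sitting.", "devoured")
```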
Related papers
- Enhancing Metaphor Detection through Soft Labels and Target Word Prediction [3.7676096626244986]
We develop a prompt learning framework specifically designed for metaphor detection.
We also introduce a teacher model to generate valuable soft labels.
Experimental results demonstrate that our model has achieved state-of-the-art performance.
arXiv Detail & Related papers (2024-03-27T04:51:42Z)
- Finding Challenging Metaphors that Confuse Pretrained Language Models [21.553915781660905]
It remains unclear what types of metaphors challenge current state-of-the-art NLP models.
To find hard metaphors, we propose an automatic pipeline that identifies the metaphors that challenge a particular model.
Our analysis demonstrates that our detected hard metaphors contrast significantly with VUA and reduce the accuracy of machine translation by 16%.
arXiv Detail & Related papers (2024-01-29T10:00:54Z)
- That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context? [64.38544995251642]
We study semantic ambiguities that exist in the source (English in this work) itself.
We focus on idioms that are open to both literal and figurative interpretations.
We find that current MT models consistently translate English idioms literally, even when the context suggests a figurative interpretation.
arXiv Detail & Related papers (2023-10-23T06:38:49Z)
- ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure [1.03590082373586]
We present a RoBERTa-based metaphor detection model that integrates the Metaphor Identification Procedure (MIP) and Word Sense Disambiguation (WSD).
By utilizing the word senses derived from a WSD model, our model enhances the metaphor detection process and outperforms other methods.
We evaluate our approach on various benchmark datasets and compare it with strong baselines, demonstrating its effectiveness in advancing metaphor detection.
arXiv Detail & Related papers (2023-09-06T15:41:38Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aimed at benchmarking machines' capabilities in pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- Metaphorical Polysemy Detection: Conventional Metaphor meets Word Sense Disambiguation [9.860944032009847]
Linguists distinguish between novel and conventional metaphor, a distinction which the metaphor detection task in NLP does not take into account.
In this paper, we investigate the limitations of treating conventional metaphors in this way.
We develop the first metaphorical polysemy detection (MPD) model, which learns to identify conventional metaphors in the English WordNet.
arXiv Detail & Related papers (2022-12-16T10:39:22Z)
- Exploring Multi-Modal Representations for Ambiguity Detection & Coreference Resolution in the SIMMC 2.0 Challenge [60.616313552585645]
We present models for effective Ambiguity Detection and Coreference Resolution in Conversational AI.
Specifically, we use TOD-BERT and LXMERT based models, compare them to a number of baselines and provide ablation experiments.
Our results show that (1) language models are able to exploit correlations in the data to detect ambiguity; and (2) unimodal coreference resolution models can avoid the need for a vision component.
arXiv Detail & Related papers (2022-02-25T12:10:02Z)
- Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z)
- MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding [22.756157298168127]
Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus.
For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model fine-tuned on our parallel data.
A task-based evaluation shows that human-written poems enhanced with metaphors are preferred 68% of the time compared to poems without metaphors.
arXiv Detail & Related papers (2021-03-11T16:39:19Z)
- Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results and also develop an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline is capable of producing accurate paraphrases, these paraphrases often lack metaphoricity.
Our metaphor masking model excels at generating metaphoric sentences while performing nearly as well as the baseline in terms of fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)
- A Deep Neural Framework for Contextual Affect Detection [51.378225388679425]
A short, simple text that carries no emotion on its own can convey strong emotions when read together with its context.
We propose a Contextual Affect Detection framework which learns the inter-dependence of words in a sentence.
arXiv Detail & Related papers (2020-01-28T05:03:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.