An Expectation-Realization Model for Metaphor Detection
- URL: http://arxiv.org/abs/2311.03963v1
- Date: Tue, 7 Nov 2023 13:03:54 GMT
- Title: An Expectation-Realization Model for Metaphor Detection
- Authors: Oseremen O. Uduehi and Razvan C. Bunescu
- Abstract summary: We propose a metaphor detection architecture that is structured around two main modules.
An expectation component estimates representations of literal word expectations given a context, and a realization component computes representations of actual word meanings in context.
The overall architecture is trained to learn expectation-realization patterns that characterize metaphorical uses of words.
- Score: 3.249167201995207
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a metaphor detection architecture that is structured around two
main modules: an expectation component that estimates representations of
literal word expectations given a context, and a realization component that
computes representations of actual word meanings in context. The overall
architecture is trained to learn expectation-realization (ER) patterns that
characterize metaphorical uses of words. When evaluated on three metaphor
datasets for within-distribution, out-of-distribution, and novel metaphor
generalization, the proposed method obtains results that are competitive with
or better than the state of the art. Further gains in metaphor detection
accuracy are obtained by ensembling ER models.
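The expectation-realization pattern described in the abstract can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation: the real model learns both representation modules (e.g., with transformer encoders), whereas here `expectation` and `realization` are random stand-in vectors, and the `[e; r; e - r; e * r]` interaction scheme and linear classifier are common choices assumed for illustration only.

```python
import math
import random

random.seed(0)
d = 8  # toy embedding dimension; a real model would use transformer-sized vectors

# Hypothetical stand-ins for the two modules described in the abstract:
# - expectation: representation of the literal word expected in the context
#   (e.g., obtained by masking the target word and encoding the context)
# - realization: representation of the actual word's meaning in context
expectation = [random.gauss(0, 1) for _ in range(d)]
realization = [random.gauss(0, 1) for _ in range(d)]

def er_features(e, r):
    # Combine expectation and realization into one interaction vector
    # [e; r; e - r; e * r]; this exposes mismatch patterns to a classifier,
    # though the paper's exact combination scheme may differ.
    diff = [a - b for a, b in zip(e, r)]
    prod = [a * b for a, b in zip(e, r)]
    return e + r + diff + prod

# Toy linear classifier over ER features (weights would be learned in practice)
w = [random.gauss(0, 1) for _ in range(4 * d)]
bias = 0.0
score = sum(wi * xi for wi, xi in zip(w, er_features(expectation, realization))) + bias
prob_metaphor = 1.0 / (1.0 + math.exp(-score))  # sigmoid: probability of metaphorical use
```

The intuition being modeled: when the realized meaning diverges strongly from the literal expectation, the difference and product terms carry large mismatch signals, which the classifier can learn to associate with metaphorical usage.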
Related papers
- Towards a Fully Interpretable and More Scalable RSA Model for Metaphor Understanding [0.8437187555622164]
The Rational Speech Act (RSA) model provides a flexible framework to model pragmatic reasoning in computational terms.
Here, we introduce a new RSA framework for metaphor understanding that addresses limitations by providing an explicit formula.
The model was tested against 24 metaphors, not limited to the conventional "John is a shark" type.
arXiv Detail & Related papers (2024-04-03T18:09:33Z) - ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure [1.03590082373586]
We present a RoBERTa-based metaphor detection model that integrates the Metaphor Identification Procedure (MIP) and Word Sense Disambiguation (WSD)
By utilizing the word senses derived from a WSD model, our model enhances the metaphor detection process and outperforms other methods.
We evaluate our approach on various benchmark datasets and compare it with strong baselines, demonstrating its effectiveness in advancing metaphor detection.
arXiv Detail & Related papers (2023-09-06T15:41:38Z) - Metaphor Detection via Explicit Basic Meanings Modelling [12.096691826237114]
We propose a novel metaphor detection method, which models the basic meaning of the word based on literal annotation from the training set.
Empirical results show that our method significantly outperforms the previous state of the art by 1.0% in F1 score.
arXiv Detail & Related papers (2023-05-26T21:25:05Z) - Compositional Generalization in Grounded Language Learning via Induced
Model Sparsity [81.38804205212425]
We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations.
We design an agent that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal.
Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations.
arXiv Detail & Related papers (2022-07-06T08:46:27Z) - Did the Cat Drink the Coffee? Challenging Transformers with Generalized
Event Knowledge [59.22170796793179]
Transformer Language Models (TLMs) were tested on a benchmark for the dynamic estimation of thematic fit.
Our results show that TLMs can reach performance comparable to that achieved by SDM.
However, additional analysis consistently suggests that TLMs do not capture important aspects of event knowledge.
arXiv Detail & Related papers (2021-07-22T20:52:26Z) - Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z) - Discrete representations in neural models of spoken language [56.29049879393466]
We compare the merits of four commonly used metrics in the context of weakly supervised models of spoken language.
We find that the different evaluation metrics can give inconsistent results.
arXiv Detail & Related papers (2021-05-12T11:02:02Z) - Introducing Syntactic Structures into Target Opinion Word Extraction
with Deep Learning [89.64620296557177]
We propose to incorporate the syntactic structures of the sentences into the deep learning models for targeted opinion word extraction.
We also introduce a novel regularization technique to improve the performance of the deep learning models.
The proposed model is extensively analyzed and achieves the state-of-the-art performance on four benchmark datasets.
arXiv Detail & Related papers (2020-10-26T07:13:17Z) - Contextual Modulation for Relation-Level Metaphor Identification [3.2619536457181075]
We introduce a novel architecture for identifying relation-level metaphoric expressions of certain grammatical relations.
In a methodology inspired by work in visual reasoning, our approach conditions the neural network computation on deep contextualised features.
We demonstrate that the proposed architecture achieves state-of-the-art results on benchmark datasets.
arXiv Detail & Related papers (2020-10-12T12:07:02Z) - How Far are We from Effective Context Modeling? An Exploratory Study on
Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.