On the Impact of Temporal Representations on Metaphor Detection
- URL: http://arxiv.org/abs/2111.03320v1
- Date: Fri, 5 Nov 2021 08:43:21 GMT
- Title: On the Impact of Temporal Representations on Metaphor Detection
- Authors: Giorgio Ottolina, Matteo Palmonari, Mehwish Alam, Manuel Vimercati
- Abstract summary: State-of-the-art approaches for metaphor detection compare a word's literal - or core - meaning with its contextual meaning, using sequential metaphor classifiers based on neural networks.
This study examines the metaphor detection task with a detailed exploratory analysis where different temporal and static word embeddings are used to account for different representations of literal meanings.
Results suggest that different word embeddings do impact the metaphor detection task, and that some temporal word embeddings slightly outperform static methods on some performance measures.
- Score: 1.6959319157216468
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: State-of-the-art approaches for metaphor detection compare a word's literal - or core - meaning with its contextual meaning, using sequential metaphor classifiers based on neural networks. The signal that represents the literal meaning is often given by (non-contextual) word embeddings. However, metaphorical expressions evolve over time for various reasons, such as cultural and societal change. Metaphorical expressions are known to co-evolve with language and literal word meanings, and even to drive this evolution to some extent. This raises the question of whether different, possibly time-specific, representations of literal meanings may impact the metaphor detection task. To the best of our knowledge, this is the first study that examines the metaphor detection task with a detailed exploratory analysis in which different temporal and static word embeddings are used to account for different representations of literal meanings. Our experimental analysis is based on three popular benchmarks for metaphor detection and on word embeddings extracted from different corpora and temporally aligned with different state-of-the-art approaches. The results suggest that different word embeddings do impact the metaphor detection task, and that some temporal word embeddings slightly outperform static methods on some performance measures. However, the results also suggest that temporal word embeddings may provide representations of a word's core meaning that are too close to its metaphorical meaning, thus confusing the classifier. Overall, the interaction between temporal language evolution and metaphor detection appears small in the benchmark datasets used in our experiments. This suggests that future work on the computational analysis of this important linguistic phenomenon should start by creating a new dataset in which this interaction is better represented.
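As a reading aid, here is a minimal sketch (in plain NumPy, not the authors' code) of the two ingredients the abstract refers to: aligning word embeddings trained on different time slices via orthogonal Procrustes, a standard temporal-alignment technique and only one of several the paper may use, and a heuristic metaphoricity signal given by the cosine distance between a word's literal (static or temporal) vector and its contextual vector. Function names and the toy data are illustrative assumptions; the paper's detectors are neural sequence classifiers, not this heuristic.

```python
# Illustrative sketch only -- not the implementation from the paper. Assumes
# literal and contextual vectors have already been mapped to a shared space
# of equal dimensionality.
import numpy as np

def procrustes_align(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Rotate `source` embeddings (one time slice) onto `target`'s space.

    Rows are vectors for a shared vocabulary, in the same order. Solves the
    orthogonal Procrustes problem R = argmin ||source @ R - target||_F over
    orthogonal R, which preserves distances among words within the slice.
    """
    u, _, vt = np.linalg.svd(source.T @ target)
    return source @ (u @ vt)

def metaphoricity_signal(literal_vec: np.ndarray, contextual_vec: np.ndarray) -> float:
    """Cosine distance between a word's literal (core) vector and its
    contextual vector; a larger gap hints at non-literal usage."""
    cosine = float(literal_vec @ contextual_vec
                   / (np.linalg.norm(literal_vec) * np.linalg.norm(contextual_vec)))
    return 1.0 - cosine

# Toy usage: random matrices stand in for embeddings of two time slices.
rng = np.random.default_rng(0)
emb_1990 = rng.normal(size=(1000, 50))
emb_2010 = rng.normal(size=(1000, 50))
emb_1990_aligned = procrustes_align(emb_1990, emb_2010)
score = metaphoricity_signal(emb_1990_aligned[0], emb_2010[0])
```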
Related papers
- Conjuring Semantic Similarity [59.18714889874088]
The semantic similarity between two textual expressions measures the distance between their latent 'meaning'.
We propose a novel approach whereby the semantic similarity among textual expressions is based not on other expressions they can be rephrased as, but rather on the imagery they evoke.
Our method contributes a novel perspective on semantic similarity that not only aligns with human-annotated scores, but also opens up new avenues for the evaluation of text-conditioned generative models.
arXiv Detail & Related papers (2024-10-21T18:51:34Z)
- Meta4XNLI: A Crosslingual Parallel Corpus for Metaphor Detection and Interpretation [6.0158981171030685]
We present a novel parallel dataset for the tasks of metaphor detection and interpretation that contains metaphor annotations in both Spanish and English.
We investigate language models' metaphor identification and understanding abilities through a series of monolingual and cross-lingual experiments.
arXiv Detail & Related papers (2024-04-10T14:44:48Z)
- Verifying Claims About Metaphors with Large-Scale Automatic Metaphor Identification [14.143299702954023]
This study entails a large-scale, corpus-based analysis of certain existing claims about verb metaphors, by applying metaphor detection to sentences extracted from Common Crawl.
The verification results indicate that the direct objects of verbs used as metaphors tend to have lower degrees of concreteness, imageability, and familiarity, and that metaphors are more likely to be used in emotional and subjective sentences.
arXiv Detail & Related papers (2024-04-01T10:17:45Z)
- ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure [1.03590082373586]
We present a RoBERTa-based metaphor detection model that integrates the Metaphor Identification Procedure (MIP) and Word Sense Disambiguation (WSD).
By utilizing the word senses derived from a WSD model, our model enhances the metaphor detection process and outperforms other methods.
We evaluate our approach on various benchmark datasets and compare it with strong baselines, demonstrating its effectiveness in advancing metaphor detection.
arXiv Detail & Related papers (2023-09-06T15:41:38Z)
- Neighboring Words Affect Human Interpretation of Saliency Explanations [65.29015910991261]
Word-level saliency explanations are often used to communicate feature-attribution in text-based models.
Recent studies found that superficial factors such as word length can distort human interpretation of the communicated saliency scores.
We investigate how the marking of a word's neighboring words affects the explainee's perception of the word's importance in the context of a saliency explanation.
arXiv Detail & Related papers (2023-05-04T09:50:25Z)
- Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models [91.3755431537592]
Representing compositional and non-compositional phrases is critical for language understanding.
We first formulate a problem of predicting the LM-internal representations of longer phrases given those of their constituents.
While we would expect the predictive accuracy to correlate with human judgments of semantic compositionality, we find this is largely not the case.
arXiv Detail & Related papers (2022-10-07T14:21:30Z)
- What Drives the Use of Metaphorical Language? Negative Insights from Abstractness, Affect, Discourse Coherence and Contextualized Word Representations [13.622570558506265]
Given a specific discourse, which discourse properties trigger the use of metaphorical language, rather than using literal alternatives?
Many NLP approaches to metaphorical language rely on cognitive and (psycho-)linguistic insights and have successfully defined models of discourse coherence, abstractness and affect.
In this work, we build five simple models relying on established cognitive and linguistic properties to predict the use of a metaphorical vs. synonymous literal expression in context.
arXiv Detail & Related papers (2022-05-23T08:08:53Z)
- It's not Rocket Science: Interpreting Figurative Language in Narratives [48.84507467131819]
We study the interpretation of two kinds of non-compositional figurative language: idioms and similes.
Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks.
We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language.
arXiv Detail & Related papers (2021-08-31T21:46:35Z)
- Exploring the Representation of Word Meanings in Context: A Case Study on Homonymy and Synonymy [0.0]
We assess the ability of both static and contextualized models to adequately represent different lexical-semantic relations.
Experiments are performed in Galician, Portuguese, English, and Spanish.
arXiv Detail & Related papers (2021-06-25T10:54:23Z)
- Understanding Synonymous Referring Expressions via Contrastive Features [105.36814858748285]
We develop an end-to-end trainable framework to learn contrastive features on the image and object instance levels.
We conduct extensive experiments to evaluate the proposed algorithm on several benchmark datasets.
arXiv Detail & Related papers (2021-04-20T17:56:24Z)
- Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results, and we also develop an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline is capable of producing accurate paraphrases, its outputs often lack metaphoricity.
Our metaphor masking model excels in generating metaphoric sentences while performing nearly as well with regard to fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)