A Match Made in Heaven: A Multi-task Framework for Hyperbole and
Metaphor Detection
- URL: http://arxiv.org/abs/2305.17480v2
- Date: Tue, 30 May 2023 13:35:35 GMT
- Title: A Match Made in Heaven: A Multi-task Framework for Hyperbole and
Metaphor Detection
- Authors: Naveen Badathala, Abisek Rajakumar Kalarani, Tejpalsingh Siledar,
Pushpak Bhattacharyya
- Abstract summary: Hyperbole and metaphor are common in day-to-day communication.
Existing approaches to automatically detect metaphor and hyperbole have studied these language phenomena independently.
We propose a multi-task deep learning framework to detect hyperbole and metaphor simultaneously.
- Score: 27.85834441076481
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperbole and metaphor are common in day-to-day communication (e.g., "I am in
deep trouble": how does trouble have depth?), which makes their detection
important, especially in a conversational AI setting. Existing approaches to
automatically detect metaphor and hyperbole have studied these language
phenomena independently, but their relationship has hardly, if ever, been
explored computationally. In this paper, we propose a multi-task deep learning
framework to detect hyperbole and metaphor simultaneously. We hypothesize that
metaphors help in hyperbole detection, and vice versa. To test this hypothesis,
we annotate two hyperbole datasets, HYPO and HYPO-L, with metaphor labels.
Simultaneously, we annotate two metaphor datasets, TroFi and LCC, with
hyperbole labels. Experiments using these datasets improve the state of the
art in hyperbole detection by 12%. Additionally, our multi-task learning (MTL)
approach shows an improvement of up to 17% over single-task learning (STL) for
both hyperbole and metaphor detection, supporting our hypothesis. To the best
of our knowledge, ours is the first work to computationally leverage the
linguistic closeness between metaphor and hyperbole and to demonstrate the
superiority of MTL over STL for hyperbole and metaphor detection.
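The abstract does not describe the model internals, but the setup it names (one framework detecting hyperbole and metaphor simultaneously, with MTL outperforming STL) follows the standard multi-task pattern of a shared encoder with task-specific classification heads trained on a joint loss. The sketch below is a minimal, hypothetical illustration of that pattern in PyTorch with a Hugging Face BERT encoder; the class names, the [CLS]-pooling choice, and the loss weight alpha are assumptions for illustration, not details taken from the paper.

```python
# Minimal multi-task sketch (assumed architecture, NOT the paper's exact model):
# one shared BERT-style encoder feeding two binary heads, one for hyperbole and
# one for metaphor, trained with a weighted sum of the per-task losses.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class HyperboleMetaphorMTL(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.hyperbole_head = nn.Linear(hidden, 2)  # literal vs. hyperbolic
        self.metaphor_head = nn.Linear(hidden, 2)   # literal vs. metaphoric

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token as sentence representation
        return self.hyperbole_head(cls), self.metaphor_head(cls)


def joint_loss(hyp_logits, met_logits, hyp_labels, met_labels, alpha=0.5):
    """Weighted sum of the two task losses; alpha is an illustrative guess."""
    ce = nn.CrossEntropyLoss()
    return alpha * ce(hyp_logits, hyp_labels) + (1.0 - alpha) * ce(met_logits, met_labels)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = HyperboleMetaphorMTL()
    batch = tokenizer(["I am in deep trouble."], return_tensors="pt", padding=True)
    hyp_logits, met_logits = model(batch["input_ids"], batch["attention_mask"])
    # Toy labels purely for illustration (class 1 in each task).
    loss = joint_loss(hyp_logits, met_logits, torch.tensor([1]), torch.tensor([1]))
    loss.backward()
```

Under STL, each head would instead be trained in isolation on its own dataset; the gains reported in the abstract (12% over the prior hyperbole state of the art, up to 17% of MTL over STL) are attributed to training the two tasks jointly over the cross-annotated datasets.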
Related papers
- NYK-MS: A Well-annotated Multi-modal Metaphor and Sarcasm Understanding Benchmark on Cartoon-Caption Dataset [11.453576424853749]
We create a new benchmark named NYK-MS, which contains 1,583 samples for metaphor understanding tasks.
Tasks include whether it contains metaphor/sarcasm, which word or object contains metaphor/sarcasm, what it satirizes, and why.
All of the 7 tasks are well-annotated by at least 3 annotators.
arXiv Detail & Related papers (2024-09-02T08:14:49Z)
- Metaphor Understanding Challenge Dataset for LLMs [12.444344984005236]
We release the Metaphor Understanding Challenge dataset (MUNCH).
MUNCH is designed to evaluate the metaphor understanding capabilities of large language models (LLMs).
The dataset provides over 10k paraphrases for sentences containing metaphor use, as well as 1.5k instances containing inapt paraphrases.
arXiv Detail & Related papers (2024-03-18T14:08:59Z)
- That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context? [64.38544995251642]
We study semantic ambiguities that exist in the source (English in this work) itself.
We focus on idioms that are open to both literal and figurative interpretations.
We find that current MT models consistently translate English idioms literally, even when the context suggests a figurative interpretation.
arXiv Detail & Related papers (2023-10-23T06:38:49Z)
- Hyperbolic vs Euclidean Embeddings in Few-Shot Learning: Two Sides of the Same Coin [49.12496652756007]
We show that the best few-shot results are attained for hyperbolic embeddings at a common hyperbolic radius.
In contrast to prior benchmark results, we demonstrate that better performance can be achieved by a fixed-radius encoder equipped with the Euclidean metric.
arXiv Detail & Related papers (2023-09-18T14:51:46Z)
- Image Matters: A New Dataset and Empirical Study for Multimodal Hyperbole Detection [52.04083398850383]
We create a multimodal detection dataset from Weibo (a Chinese social media platform).
We treat the text and image of a Weibo post as two modalities and explore the role of each in hyperbole detection.
Different pre-trained multimodal encoders are also evaluated on this downstream task to show their performance.
arXiv Detail & Related papers (2023-07-01T03:23:56Z)
- MOVER: Mask, Over-generate and Rank for Hyperbole Generation [82.63394952538292]
We introduce a new task of hyperbole generation to transfer a literal sentence into its hyperbolic paraphrase.
We construct HYPO-XL, the first large-scale hyperbole corpus, containing 17,862 hyperbolic sentences collected in a non-trivial way.
Based on our corpus, we propose an unsupervised method for hyperbole generation with no need for parallel literal-hyperbole pairs.
arXiv Detail & Related papers (2021-09-16T05:25:13Z)
- HypoGen: Hyperbole Generation with Commonsense and Counterfactual Knowledge [11.93269712166532]
A hyperbole is an intentional and creative exaggeration not to be taken literally.
We tackle the under-explored and challenging task of sentence-level hyperbole generation.
Our generation method is able to generate hyperboles creatively with high success rate and intensity scores.
arXiv Detail & Related papers (2021-09-10T20:19:52Z)
- Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z)
- MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding [22.756157298168127]
Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus.
For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model fine-tuned on our parallel data.
A task-based evaluation shows that human-written poems enhanced with metaphors are preferred 68% of the time compared to poems without metaphors.
arXiv Detail & Related papers (2021-03-11T16:39:19Z)
- Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results and also develop an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline is capable of producing accurate paraphrases, its outputs often lack metaphoricity.
Our metaphor masking model excels in generating metaphoric sentences while performing nearly as well with regard to fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.