MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding
- URL: http://arxiv.org/abs/2103.06779v1
- Date: Thu, 11 Mar 2021 16:39:19 GMT
- Title: MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding
- Authors: Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, Nanyun Peng
- Abstract summary: Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus.
For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence-to-sequence model fine-tuned on our parallel data.
A task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.
- Score: 22.756157298168127
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating metaphors is a challenging task as it requires a proper
understanding of abstract concepts, making connections between unrelated
concepts, and deviating from the literal meaning. Based on a
theoretically-grounded connection between metaphors and symbols, we propose a
method to automatically construct a parallel corpus by transforming a large
number of metaphorical sentences from the Gutenberg Poetry corpus (Jacobs,
2018) to their literal counterpart using recent advances in masked language
modeling coupled with commonsense inference. For the generation task, we
incorporate a metaphor discriminator to guide the decoding of a
sequence-to-sequence model fine-tuned on our parallel data to generate high-quality
metaphors. Human evaluation on an independent test set of literal statements
shows that our best model generates metaphors better than three well-crafted
baselines 66% of the time on average. A task-based evaluation shows that
human-written poems enhanced with metaphors proposed by our model are preferred
68% of the time compared to poems without metaphors.
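The discriminator-guided decoding described above can be sketched as a reranked beam search: each partial hypothesis is scored by the seq2seq model's log-probability plus a weighted bonus from a metaphor classifier. The sketch below is a toy illustration, not MERMAID's actual implementation; `TOY_LM`, `metaphor_score`, and the weight `lam` are hypothetical stand-ins for the fine-tuned sequence-to-sequence model and the trained discriminator.

```python
import math

# Hypothetical toy LM: maps a prefix (tuple of tokens) to next-token probabilities.
TOY_LM = {
    (): {"the": 0.9, "a": 0.1},
    ("the",): {"sun": 1.0},
    ("the", "sun"): {"shone": 0.6, "danced": 0.4},
    ("the", "sun", "shone"): {"<eos>": 1.0},
    ("the", "sun", "danced"): {"<eos>": 1.0},
}

def metaphor_score(tokens):
    """Hypothetical discriminator: probability that the sequence is metaphorical."""
    figurative = {"danced", "wept", "whispered"}
    return 0.9 if any(t in figurative for t in tokens) else 0.1

def discriminative_beam_search(beam_size=2, max_len=5, lam=2.0):
    """Beam search whose hypothesis score mixes the LM log-probability with a
    discriminator bonus (weighted by lam), steering decoding toward metaphor."""
    beams = [((), 0.0)]            # (prefix, lm_logprob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, lp in beams:
            for tok, p in TOY_LM.get(prefix, {"<eos>": 1.0}).items():
                if tok == "<eos>":
                    finished.append((prefix, lp + math.log(p)))
                else:
                    candidates.append((prefix + (tok,), lp + math.log(p)))
        if not candidates:
            break
        # Rerank open hypotheses by the combined LM + discriminator score.
        candidates.sort(
            key=lambda c: c[1] + lam * math.log(metaphor_score(c[0])),
            reverse=True,
        )
        beams = candidates[:beam_size]
    best = max(finished, key=lambda c: c[1] + lam * math.log(metaphor_score(c[0])))
    return " ".join(best[0])
```

With `lam=0` the decoder falls back to plain likelihood and prefers the literal continuation; a positive `lam` lets the discriminator override the LM in favor of the figurative one, which is the intended effect of discriminative decoding.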
Related papers
- Conjuring Semantic Similarity [59.18714889874088]
The semantic similarity between two textual expressions measures the distance between their latent 'meaning'.
We propose a novel approach whereby the semantic similarity among textual expressions is based not on other expressions they can be rephrased as, but rather based on the imagery they evoke.
Our method contributes a novel perspective on semantic similarity that not only aligns with human-annotated scores, but also opens up new avenues for the evaluation of text-conditioned generative models.
arXiv Detail & Related papers (2024-10-21T18:51:34Z) - Compositional Entailment Learning for Hyperbolic Vision-Language Models [54.41927525264365]
We show how to fully leverage the innate hierarchical nature of hyperbolic embeddings by looking beyond individual image-text pairs.
We propose Compositional Entailment Learning for hyperbolic vision-language models.
Empirical evaluation on a hyperbolic vision-language model trained with millions of image-text pairs shows that the proposed compositional learning approach outperforms conventional Euclidean CLIP learning.
arXiv Detail & Related papers (2024-10-09T14:12:50Z) - CMDAG: A Chinese Metaphor Dataset with Annotated Grounds as CoT for Boosting Metaphor Generation [35.14142183519002]
This paper introduces a large-scale high quality annotated Chinese Metaphor Corpus, which comprises around 28K sentences.
To ensure the accuracy and consistency of our annotations, we introduce a comprehensive set of guidelines.
Breaking tradition, our approach to metaphor generation emphasizes grounds and their distinct features rather than the conventional combination of tenors and vehicles.
arXiv Detail & Related papers (2024-02-20T17:00:41Z) - Metaphorical Polysemy Detection: Conventional Metaphor meets Word Sense Disambiguation [9.860944032009847]
Linguists distinguish between novel and conventional metaphor, a distinction which the metaphor detection task in NLP does not take into account.
In this paper, we investigate the limitations of treating conventional metaphors in this way.
We develop the first metaphorical polysemy detection (MPD) model, which learns to identify conventional metaphors in the English WordNet.
arXiv Detail & Related papers (2022-12-16T10:39:22Z) - Metaphorical Paraphrase Generation: Feeding Metaphorical Language Models with Literal Texts [2.6397379133308214]
The proposed algorithm does not only focus on verbs but also on nouns and adjectives.
Human evaluation showed that our system-generated metaphors are considered more creative and metaphorical than human-generated ones.
arXiv Detail & Related papers (2022-10-10T15:11:27Z) - It's not Rocket Science : Interpreting Figurative Language in Narratives [48.84507467131819]
We study the interpretation of two non-compositional figurative languages (idioms and similes).
Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks.
We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language.
arXiv Detail & Related papers (2021-08-31T21:46:35Z) - Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z) - Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation [65.22565071742528]
Figurative language such as similes goes beyond plain expressions to give readers new insights and inspirations.
Generating a simile requires proper understanding for effective mapping of properties between two concepts.
We show how replacing literal sentences with similes from our best model in machine generated stories improves evocativeness and leads to better acceptance by human judges.
arXiv Detail & Related papers (2020-09-18T17:37:13Z) - Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results, and we develop an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline is capable of producing accurate paraphrases, its outputs often lack metaphoricity.
Our metaphor masking model excels in generating metaphoric sentences while performing nearly as well with regard to fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.