Generating similes effortlessly like a Pro: A Style Transfer Approach
for Simile Generation
- URL: http://arxiv.org/abs/2009.08942v2
- Date: Sat, 3 Oct 2020 05:47:43 GMT
- Title: Generating similes effortlessly like a Pro: A Style Transfer Approach
for Simile Generation
- Authors: Tuhin Chakrabarty, Smaranda Muresan, Nanyun Peng
- Abstract summary: Figurative language such as similes goes beyond plain expression to give readers new insights and inspiration.
Generating a simile requires a proper understanding of both concepts in order to map properties between them effectively.
We show how replacing literal sentences with similes from our best model in machine generated stories improves evocativeness and leads to better acceptance by human judges.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Literary tropes, from poetry to stories, are at the crux of human imagination
and communication. Figurative language such as similes goes beyond plain
expression to give readers new insights and inspiration. In this paper, we
tackle the problem of simile generation. Generating a simile requires a proper
understanding of both concepts in order to map properties between them effectively. To this
end, we first propose a method to automatically construct a parallel corpus by
transforming a large number of similes collected from Reddit to their literal
counterpart using structured common sense knowledge. We then propose to
fine-tune a pretrained sequence-to-sequence model, BART (Lewis et al., 2019),
on the literal-simile pairs to gain generalizability, so that we can generate
novel similes given a literal sentence. Experiments show that our approach
generates 88% novel similes that do not share properties with the training
data. Human evaluation on an independent set of literal statements shows that
our model generates similes better than two literary experts 37% of the time
(averaging 32.6% and 41.3% for the two human experts), and better than three
baseline systems, including a recent metaphor generation model, 71% of the time
(averaging 82%, 63%, and 68% for the three baselines) when compared pairwise.
(The simile in the title was generated by our best model. Input: Generating
similes effortlessly; output: Generating similes like a Pro.) We also show how
replacing literal sentences with similes from our best model in machine-generated
stories improves evocativeness and leads to better acceptance by human judges.
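The corpus-construction step described above can be sketched in miniature. The snippet below is a toy illustration, not the authors' pipeline: a hypothetical `VEHICLE_PROPERTIES` table stands in for the structured common-sense knowledge (COMET-style property inference) the paper uses to turn a collected simile into its literal counterpart, yielding literal-simile pairs suitable for fine-tuning a seq2seq model such as BART.

```python
import re
from typing import Optional

# Hypothetical stand-in for structured common-sense knowledge:
# maps a simile's vehicle to the salient property it evokes.
# (The paper infers such properties automatically; these are hand-picked.)
VEHICLE_PROPERTIES = {
    "ice": "very cold",
    "snow": "very white",
    "a pro": "effortlessly",
}

# Matches "like X" or "as ADJ as X", capturing the vehicle X.
SIMILE_PATTERN = re.compile(r"\b(?:like|as \w+ as)\s+(?P<vehicle>[\w ]+?)(?=[.,!?]|$)")

def simile_to_literal(simile: str) -> Optional[str]:
    """Replace the figurative comparison with the property it evokes,
    producing the literal counterpart (or None if the vehicle is
    unknown to our toy knowledge table)."""
    match = SIMILE_PATTERN.search(simile)
    if match is None:
        return None
    prop = VEHICLE_PROPERTIES.get(match.group("vehicle").strip().lower())
    if prop is None:
        return None
    # Drop the whole comparison phrase and substitute the property.
    return (simile[: match.start()] + prop + simile[match.end():]).strip()

def build_pairs(similes):
    """Construct (literal, simile) training pairs for seq2seq fine-tuning."""
    pairs = []
    for s in similes:
        literal = simile_to_literal(s)
        if literal is not None:
            pairs.append((literal, s))
    return pairs
```

For example, `simile_to_literal("The room was as cold as ice.")` yields "The room was very cold.", and the resulting (literal, simile) pairs would form the source and target sides of the fine-tuning data.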
Related papers
- Conjuring Semantic Similarity [59.18714889874088]
The semantic similarity between two textual expressions measures the distance between their latent 'meaning'.
We propose a novel approach whereby the semantic similarity among textual expressions is based not on other expressions they can be rephrased as, but rather based on the imagery they evoke.
Our method contributes a novel perspective on semantic similarity that not only aligns with human-annotated scores, but also opens up new avenues for the evaluation of text-conditioned generative models.
arXiv Detail & Related papers (2024-10-21T18:51:34Z) - CMDAG: A Chinese Metaphor Dataset with Annotated Grounds as CoT for
Boosting Metaphor Generation [35.14142183519002]
This paper introduces a large-scale, high-quality annotated Chinese Metaphor Corpus, which comprises around 28K sentences.
To ensure the accuracy and consistency of our annotations, we introduce a comprehensive set of guidelines.
Breaking tradition, our approach to metaphor generation emphasizes grounds and their distinct features rather than the conventional combination of tenors and vehicles.
arXiv Detail & Related papers (2024-02-20T17:00:41Z) - StoryAnalogy: Deriving Story-level Analogies from Large Language Models
to Unlock Analogical Understanding [72.38872974837462]
We evaluate the ability to identify and generate analogies by constructing a first-of-its-kind large-scale story-level analogy corpus.
StoryAnalogy contains 24K story pairs from diverse domains with human annotations on two similarities from the extended Structure-Mapping Theory.
We observe that the data in StoryAnalogy can improve the quality of analogy generation in large language models.
arXiv Detail & Related papers (2023-10-19T16:29:23Z) - Metaphorical Paraphrase Generation: Feeding Metaphorical Language Models
with Literal Texts [2.6397379133308214]
The proposed algorithm focuses not only on verbs but also on nouns and adjectives.
Human evaluation showed that our system-generated metaphors are considered more creative and metaphorical than human-generated ones.
arXiv Detail & Related papers (2022-10-10T15:11:27Z) - It's not Rocket Science : Interpreting Figurative Language in Narratives [48.84507467131819]
We study the interpretation of two types of non-compositional figurative language (idioms and similes).
Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks.
We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language.
arXiv Detail & Related papers (2021-08-31T21:46:35Z) - Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z) - MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding [22.756157298168127]
Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus.
For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model fine-tuned on our parallel data.
A task-based evaluation shows that human-written poems enhanced with metaphors are preferred 68% of the time compared to poems without metaphors.
arXiv Detail & Related papers (2021-03-11T16:39:19Z) - Writing Polishment with Simile: Task, Dataset and A Neural Approach [9.38000305423665]
We propose a new task of Writing Polishment with Simile (WPS) to investigate whether machines can polish texts with similes as humans do.
Our model first locates where the simile should occur, and then generates a location-specific simile.
We also release a large-scale Chinese Simile dataset containing 5 million similes with context.
arXiv Detail & Related papers (2020-12-15T06:39:54Z) - Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results, and also develop an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline is capable of producing accurate paraphrases, its outputs often lack metaphoricity.
Our metaphor masking model excels in generating metaphoric sentences while performing nearly as well with regard to fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.