AmbiPun: Generating Humorous Puns with Ambiguous Context
- URL: http://arxiv.org/abs/2205.01825v1
- Date: Wed, 4 May 2022 00:24:11 GMT
- Authors: Anirudh Mittal, Yufei Tian, Nanyun Peng
- Abstract summary: Our model first produces a list of related concepts through a reverse dictionary.
We then utilize one-shot GPT3 to generate context words and then generate puns incorporating context words from both concepts.
Human evaluation shows that our method successfully generates puns 52% of the time.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a simple yet effective way to generate pun
sentences that does not require any training on existing puns. Our approach is
inspired by humor theories positing that ambiguity comes from the context rather than
the pun word itself. Given a pair of definitions of a pun word, our model first
produces a list of related concepts through a reverse dictionary. We then
utilize one-shot GPT3 to generate context words and then generate puns
incorporating context words from both concepts. Human evaluation shows that our
method successfully generates puns 52% of the time, outperforming well-crafted
baselines and the state-of-the-art models by a large margin.
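The pipeline described in the abstract (reverse dictionary over each sense, then a one-shot prompt for context words) can be sketched as below. This is a minimal illustration, not the authors' code: the toy reverse dictionary, the example lexicon, and the helper names are all hypothetical, and a real system would call an LLM such as GPT-3 with the built prompt.

```python
# Hypothetical sketch of the AmbiPun pipeline: the reverse-dictionary stub,
# the tiny lexicon, and the function names are illustrative assumptions.

def related_concepts(definition, lexicon):
    """Toy reverse dictionary: return words whose gloss shares a word
    with the given sense definition."""
    def_words = set(definition.lower().split())
    return [word for word, gloss in lexicon.items()
            if def_words & set(gloss.lower().split())]

def one_shot_prompt(pun_word, sense):
    """Build a one-shot prompt asking an LLM (e.g. GPT-3) for context words
    associated with one sense of the pun word."""
    example = ('Word: bat (a flying mammal)\n'
               'Context words: cave, night, wings\n\n')
    return (example +
            f'Word: {pun_word} ({sense})\n'
            'Context words:')

# Usage with a hand-made lexicon for the homographic pun word "sail/sale".
lexicon = {'boat': 'a small vessel that can sail on water',
           'discount': 'a price reduction during a sale'}
concepts = related_concepts('travel on water by sail', lexicon)
prompt = one_shot_prompt('sail', 'travel on water')
print(concepts)   # only glosses overlapping this sense survive
print(prompt)
```

In the full method, context words generated for *both* senses are fed to a generation model so that the ambiguity lives in the surrounding context rather than in the pun word alone.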
Related papers
- Context-Situated Pun Generation
We propose a new task, context-situated pun generation, where a specific context represented by a set of keywords is provided.
The task is to first identify suitable pun words that are appropriate for the context, then generate puns based on the context keywords and the identified pun words.
We show that 69% of our top retrieved pun words can be used to generate context-situated puns, and our generation module succeeds 31% of the time.
arXiv Detail & Related papers (2022-10-24T18:24:48Z)
- ExPUNations: Augmenting Puns with Keywords and Explanations
We augment an existing dataset of puns with detailed crowdsourced annotations of keywords.
This is the first humor dataset with such extensive and fine-grained annotations specifically for puns.
We propose two tasks: explanation generation to aid with pun classification and keyword-conditioned pun generation.
arXiv Detail & Related papers (2022-10-24T18:12:02Z)
- A Unified Framework for Pun Generation with Humor Principles
We propose a unified framework to generate both homophonic and homographic puns.
We incorporate three linguistic attributes of puns into the language models: ambiguity, distinctiveness, and surprise.
arXiv Detail & Related papers (2022-10-24T09:20:45Z)
- Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest
Large neural networks can now generate jokes, but do they really "understand" humor?
We challenge AI models with three tasks derived from the New Yorker Cartoon Caption Contest.
We find that both multimodal and language-only models struggle at all three tasks.
arXiv Detail & Related papers (2022-09-13T20:54:00Z)
- A Dual-Attention Neural Network for Pun Location and Using Pun-Gloss Pairs for Interpretation
Pun location identifies the punning word in a text; pun interpretation recovers the two different meanings of the punning word.
arXiv Detail & Related papers (2021-10-14T08:15:04Z)
- "The Boating Store Had Its Best Sail Ever": Pronunciation-attentive Contextualized Pun Recognition
We propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor.
PCPR derives contextualized representation for each word in a sentence by capturing the association between the surrounding context and its corresponding phonetic symbols.
Results demonstrate that the proposed approach significantly outperforms the state-of-the-art methods in pun detection and location tasks.
arXiv Detail & Related papers (2020-04-29T20:12:20Z)
- Lexical Sememe Prediction using Dictionary Definitions by Capturing Local Semantic Correspondence
Sememes, defined as the minimum semantic units of human languages, have been proven useful in many NLP tasks.
We propose a Sememe Correspondence Pooling (SCorP) model, which captures this local semantic correspondence between definitions and sememes to predict sememes.
We evaluate our model and baseline methods on HowNet, a well-known sememe knowledge base, and find that our model achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-01-16T17:30:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.