Towards Conversational Humor Analysis and Design
- URL: http://arxiv.org/abs/2103.00536v1
- Date: Sun, 28 Feb 2021 15:22:57 GMT
- Title: Towards Conversational Humor Analysis and Design
- Authors: Tanishq Chaudhary, Mayank Goel, Radhika Mamidi
- Abstract summary: This paper is built around two core concepts: the classification of humor and the generation of a punchline from a particular setup, based on the Incongruity Theory.
For humor generation, we use a neural model and then merge classical rule-based approaches with the neural approach to create a hybrid model.
We then compare our model's output with human-written jokes with the help of human evaluators in a double-blind study.
- Score: 17.43766386622031
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Well-defined jokes can be divided neatly into a setup and a punchline. While
most works on humor today treat a joke as a whole, the idea of generating
punchlines for a given setup has applications in conversational humor, where funny
remarks usually occur within a non-funny context. Thus, this paper is built
around two core concepts: the classification of humor and the generation of a
punchline from a particular setup, based on the Incongruity Theory. We first implement a
feature-based machine learning model to classify humor. For humor generation,
we use a neural model and then merge classical rule-based approaches with
the neural approach to create a hybrid model. The idea is to combine
insights gained from other tasks with the setup-punchline model and apply
them to existing text generation approaches. We then compare our model's
output with human-written jokes with the help of human evaluators in a
double-blind study.
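The abstract does not spell out the classifier or feature set used in the classification step. As a rough, non-authoritative illustration of what a feature-based humor classifier over (setup, punchline) pairs can look like, the sketch below uses hypothetical surface features and scikit-learn's logistic regression; the actual features and model in the paper may differ.

```python
# Minimal sketch of a feature-based humor classifier.
# The feature set and classifier here are illustrative assumptions,
# not the exact ones used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def extract_features(setup: str, punchline: str) -> list[float]:
    """Toy surface features over a (setup, punchline) pair."""
    setup_tokens = setup.lower().split()
    punch_tokens = punchline.lower().split()
    overlap = len(set(setup_tokens) & set(punch_tokens))
    return [
        len(setup_tokens),                    # setup length
        len(punch_tokens),                    # punchline length
        overlap,                              # lexical overlap
        overlap / max(len(punch_tokens), 1),  # normalized overlap
    ]

# Tiny illustrative training set: (setup, punchline, is_humorous)
data = [
    ("Why did the scarecrow win an award?", "He was outstanding in his field.", 1),
    ("The meeting is scheduled for Monday.", "Please review the agenda beforehand.", 0),
    ("What do you call a fish with no eyes?", "A fsh.", 1),
    ("The report covers quarterly sales figures.", "Totals are listed in the appendix.", 0),
]

X = np.array([extract_features(s, p) for s, p, _ in data])
y = np.array([label for _, _, label in data])

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)

print(clf.predict([extract_features("Why don't skeletons fight?",
                                    "They don't have the guts.")]))
```

A realistic incongruity-oriented feature set would typically add semantic features (for example, an embedding distance between setup and punchline) rather than the purely lexical ones used here for brevity.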
Related papers
- Witscript: A System for Generating Improvised Jokes in a Conversation [0.0]
Witscript is a novel joke generation system that can improvise original, contextually relevant jokes.
Human evaluators judged Witscript's responses to input sentences to be jokes more than 40% of the time.
arXiv Detail & Related papers (2023-02-03T21:30:34Z) - The Naughtyformer: A Transformer Understands Offensive Humor [63.05016513788047]
We introduce a novel jokes dataset filtered from Reddit and solve the subtype classification task using a finetuned Transformer dubbed the Naughtyformer.
We show that our model is significantly better at detecting offensiveness in jokes compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-25T20:37:58Z) - ExPUNations: Augmenting Puns with Keywords and Explanations [88.58174386894913]
We augment an existing dataset of puns with detailed crowdsourced annotations of keywords.
This is the first humor dataset with such extensive and fine-grained annotations specifically for puns.
We propose two tasks: explanation generation to aid with pun classification and keyword-conditioned pun generation.
arXiv Detail & Related papers (2022-10-24T18:12:02Z) - Robust Preference Learning for Storytelling via Contrastive
Reinforcement Learning [53.92465205531759]
Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences.
We train a contrastive bi-encoder model to align stories with human critiques, building a general purpose preference model.
We further fine-tune the contrastive reward model using a prompt-learning technique to increase story generation robustness.
arXiv Detail & Related papers (2022-10-14T13:21:33Z) - Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results [84.37263300062597]
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z) - Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks
from The New Yorker Caption Contest [70.40189243067857]
Large neural networks can now generate jokes, but do they really "understand" humor?
We challenge AI models with three tasks derived from the New Yorker Cartoon Caption Contest.
We find that both types of models struggle at all three tasks.
arXiv Detail & Related papers (2022-09-13T20:54:00Z) - Uncertainty and Surprisal Jointly Deliver the Punchline: Exploiting
Incongruity-Based Features for Humor Recognition [0.6445605125467573]
We break down any joke into two distinct components: the set-up and the punchline.
Inspired by the incongruity theory of humor, we model the set-up as the part developing semantic uncertainty.
Leveraging increasingly powerful language models, we feed the set-up together with the punchline into the GPT-2 language model to derive incongruity-based features (a minimal surprisal sketch appears after this list).
arXiv Detail & Related papers (2020-12-22T13:48:09Z) - Dutch Humor Detection by Generating Negative Examples [5.888646114353371]
Humor detection is usually modeled as a binary classification task, trained to predict if the given text is a joke or another type of text.
We propose using text generation algorithms for imitating the original joke dataset to increase the difficulty for the learning algorithm.
We compare the humor detection capabilities of classic neural network approaches with the state-of-the-art Dutch language model RobBERT.
arXiv Detail & Related papers (2020-10-26T15:15:10Z) - Let's be Humorous: Knowledge Enhanced Humor Generation [26.886255899651893]
We explore how to generate a punchline given the set-up with the relevant knowledge.
To our knowledge, this is the first attempt to generate punchlines with a knowledge-enhanced model.
The experimental results demonstrate that our method can make use of knowledge to generate fluent, funny punchlines.
arXiv Detail & Related papers (2020-04-28T06:06:18Z) - ColBERT: Using BERT Sentence Embedding in Parallel Neural Networks for
Computational Humor [0.0]
We propose a novel approach for detecting and rating humor in short texts based on a popular linguistic theory of humor.
The proposed method begins by separating the sentences of the given text and using the BERT model to generate embeddings for each one.
We accompany the paper with a novel dataset for humor detection consisting of 200,000 formal short texts.
The proposed model obtained F1 scores of 0.982 and 0.869 in the humor detection experiments which outperform general and state-of-the-art models.
arXiv Detail & Related papers (2020-04-27T13:10:11Z) - I love your chain mail! Making knights smile in a fantasy game world:
Open-domain goal-oriented dialogue agents [69.68400056148336]
We train a goal-oriented model with reinforcement learning against an imitation-learned "chit-chat" model.
We show that both models outperform an inverse model baseline and can converse naturally with their dialogue partner in order to achieve goals.
arXiv Detail & Related papers (2020-02-07T16:22:36Z)
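Both the main paper and the "Uncertainty and Surprisal Jointly Deliver the Punchline" entry above rest on the Incongruity Theory, with the latter feeding the set-up and punchline into GPT-2. The sketch below is a minimal, non-authoritative illustration of one such feature: the mean surprisal of punchline tokens given the set-up, computed with the Hugging Face transformers library. The exact features used in those papers may differ.

```python
# Minimal sketch: surprisal of a punchline given its set-up under GPT-2.
# This illustrates one incongruity-style feature; the cited papers may
# compute their features differently.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def punchline_surprisal(setup: str, punchline: str) -> float:
    """Mean negative log-probability (in nats) of punchline tokens given the set-up."""
    setup_ids = tokenizer(setup, return_tensors="pt").input_ids
    punch_ids = tokenizer(" " + punchline, return_tensors="pt").input_ids
    input_ids = torch.cat([setup_ids, punch_ids], dim=1)

    with torch.no_grad():
        logits = model(input_ids).logits

    # Log-probability of each token given all preceding tokens.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    token_log_probs = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)

    # Keep only the positions corresponding to punchline tokens.
    n_punch = punch_ids.shape[1]
    punch_log_probs = token_log_probs[0, -n_punch:]
    return float(-punch_log_probs.mean())

print(punchline_surprisal("Why did the scarecrow win an award?",
                          "He was outstanding in his field."))
```

Higher surprisal of the punchline relative to a "non-funny" continuation is one plausible proxy for incongruity; how such scores are combined into a classifier or generator is specific to each paper.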