Let's be Humorous: Knowledge Enhanced Humor Generation
- URL: http://arxiv.org/abs/2004.13317v2
- Date: Sat, 4 Jul 2020 03:04:14 GMT
- Title: Let's be Humorous: Knowledge Enhanced Humor Generation
- Authors: Hang Zhang, Dayiheng Liu, Jiancheng Lv, Cheng Luo
- Abstract summary: We explore how to generate a punchline given the set-up with the relevant knowledge.
To our knowledge, this is the first attempt to generate punchlines with a knowledge-enhanced model.
The experimental results demonstrate that our method can use knowledge to generate fluent, funny punchlines.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The generation of humor is an under-explored and challenging problem.
Previous works mainly utilize templates or phrase replacement to generate humor.
However, few works focus on freer forms or on the background knowledge behind humor.
The linguistic theory of humor defines the structure of a humorous sentence as a
set-up and a punchline. In this paper, we explore how to generate a punchline
given the set-up and relevant knowledge. We propose a framework that fuses
knowledge into end-to-end models. To our knowledge, this is the first attempt
to generate punchlines with a knowledge-enhanced model. Furthermore, we create
the first humor-knowledge dataset. The experimental results demonstrate that
our method can use knowledge to generate fluent, funny punchlines,
outperforming several baselines.
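To make "fusing knowledge into an end-to-end model" concrete, below is a minimal PyTorch sketch of a set-up-to-punchline generator whose decoder is initialized from a fusion of the encoded set-up and an external knowledge vector. The paper does not publish this exact architecture; the GRU backbone, the concatenation-based fusion, and all names and dimensions here are illustrative assumptions.

```python
# Minimal sketch of a knowledge-fused set-up -> punchline generator.
# NOT the paper's published architecture: the fusion-by-concatenation
# scheme and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class KnowledgeFusedSeq2Seq(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, d_know=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Encodes the set-up sentence.
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        # Projects an external knowledge vector (e.g. pooled embeddings of
        # knowledge-base entries retrieved for the set-up) into model space.
        self.know_proj = nn.Linear(d_know, d_model)
        self.fuse = nn.Linear(2 * d_model, d_model)
        # Decoder is conditioned on the fused state.
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, setup_ids, knowledge_vec, punchline_ids):
        _, h = self.encoder(self.embed(setup_ids))      # (1, B, d_model)
        k = self.know_proj(knowledge_vec).unsqueeze(0)  # (1, B, d_model)
        # Fuse set-up encoding and knowledge into the decoder's init state.
        h0 = torch.tanh(self.fuse(torch.cat([h, k], dim=-1)))
        dec_out, _ = self.decoder(self.embed(punchline_ids), h0)
        return self.out(dec_out)                        # (B, T, vocab)

# Toy usage: a batch of 2 set-ups, knowledge vectors, teacher-forced targets.
model = KnowledgeFusedSeq2Seq()
setup = torch.randint(0, 10000, (2, 12))
know = torch.randn(2, 128)
punch = torch.randint(0, 10000, (2, 8))
print(model(setup, know, punch).shape)  # torch.Size([2, 8, 10000])
```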
Related papers
- Can Pre-trained Language Models Understand Chinese Humor?
This paper is the first work that systematically investigates the humor understanding ability of pre-trained language models (PLMs).
We construct a comprehensive Chinese humor dataset, which can fully meet all the data requirements of the proposed evaluation framework.
Our empirical study on the Chinese humor dataset yields valuable observations that offer guidance for future work on PLMs for humor understanding and generation.
arXiv Detail & Related papers (2024-07-04T18:13:38Z)
- The Naughtyformer: A Transformer Understands Offensive Humor
We introduce a novel jokes dataset filtered from Reddit and solve the subtype classification task using a fine-tuned Transformer dubbed the Naughtyformer.
We show that our model is significantly better at detecting offensiveness in jokes compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-25T20:37:58Z)
- ExPUNations: Augmenting Puns with Keywords and Explanations
We augment an existing dataset of puns with detailed crowdsourced annotations of keywords.
This is the first humor dataset with such extensive and fine-grained annotations specifically for puns.
We propose two tasks: explanation generation to aid with pun classification and keyword-conditioned pun generation.
arXiv Detail & Related papers (2022-10-24T18:12:02Z)
- Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z)
- DeHumor: Visual Analytics for Decomposing Humor
We develop DeHumor, a visual system for analyzing humorous behaviors in public speaking.
To intuitively reveal the building blocks of each concrete example, DeHumor decomposes each humorous video into multimodal features.
We show that DeHumor is able to highlight various building blocks of humor examples.
arXiv Detail & Related papers (2021-07-18T04:01:07Z)
- Towards Conversational Humor Analysis and Design
This paper is built around two core concepts: classification, and the generation of a punchline from a particular set-up based on the Incongruity Theory.
For humor generation, we use a neural model, then merge classical rule-based approaches with the neural approach to create a hybrid model.
We then compare our model's output with human-written jokes with the help of human evaluators in a double-blind study.
arXiv Detail & Related papers (2021-02-28T15:22:57Z)
- Uncertainty and Surprisal Jointly Deliver the Punchline: Exploiting Incongruity-Based Features for Humor Recognition
We break down any joke into two distinct components: the set-up and the punchline.
Inspired by the incongruity theory of humor, we model the set-up as the part developing semantic uncertainty.
Leveraging increasingly powerful language models, we feed the set-up along with the punchline into GPT-2 to quantify these incongruity-based features.
arXiv Detail & Related papers (2020-12-22T13:48:09Z)
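As a rough illustration of how a language model can quantify punchline surprisal, here is a hedged sketch using the Hugging Face `transformers` GPT-2 API. The cited paper's exact feature definitions and aggregation are not reproduced here; the mean negative log-likelihood below is one plausible stand-in.

```python
# Sketch: scoring a punchline's surprisal under GPT-2, conditioned on
# the set-up. Illustrative only, not the cited paper's exact features.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def punchline_surprisal(setup: str, punchline: str) -> float:
    """Mean negative log-likelihood of the punchline tokens given the set-up."""
    setup_ids = tok(setup, return_tensors="pt").input_ids
    punch_ids = tok(" " + punchline, return_tensors="pt").input_ids
    ids = torch.cat([setup_ids, punch_ids], dim=1)
    with torch.no_grad():
        log_probs = lm(ids).logits.log_softmax(-1)  # (1, T, vocab)
    start = setup_ids.size(1)
    # Each token at position i is predicted from the logits at position i-1.
    nll = -sum(log_probs[0, i - 1, ids[0, i]].item()
               for i in range(start, ids.size(1)))
    return nll / punch_ids.size(1)

# Higher surprisal suggests a less predictable (more incongruous) punchline.
print(punchline_surprisal("I used to be a banker,", "but I lost interest."))
```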
- Federated Learning with Diversified Preference for Humor Recognition
We propose the FedHumor approach to recognize humorous text content in a personalized manner through federated learning (FL).
Experiments demonstrate significant advantages of FedHumor in accurately recognizing humor for people with diverse humor preferences, compared to nine state-of-the-art humor recognition approaches.
arXiv Detail & Related papers (2020-12-03T03:24:24Z)
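Federated learning itself is a standard technique; the following is a minimal FedAvg-style sketch in PyTorch showing how per-client humor classifiers could be trained locally and averaged on a server. FedHumor's actual diversified aggregation scheme is not reproduced; everything below is an illustrative assumption.

```python
# Sketch of federated averaging (FedAvg) for a humor classifier: each
# client trains locally on private data; only weights reach the server.
# Illustrative assumption, not FedHumor's published aggregation scheme.
import copy
import torch
import torch.nn as nn

def local_step(global_model, features, labels, lr=0.01):
    """One local training pass on a client's private humor ratings."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    logits = model(features).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model.state_dict()

def fed_avg(states):
    """Server side: element-wise average of client weights."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

# Stand-in for an encoder + classification head; 3 clients, toy data.
global_model = nn.Linear(64, 1)
clients = [(torch.randn(8, 64), torch.randint(0, 2, (8,)).float())
           for _ in range(3)]
for _ in range(5):  # communication rounds
    states = [local_step(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(states))
```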
- Dutch Humor Detection by Generating Negative Examples
Humor detection is usually modeled as a binary classification task, trained to predict whether a given text is a joke or another type of text.
We propose using text generation algorithms to imitate the original joke dataset, increasing the difficulty for the learning algorithm.
We compare the humor detection capabilities of classic neural network approaches with the state-of-the-art Dutch language model RobBERT.
arXiv Detail & Related papers (2020-10-26T15:15:10Z)
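One way to realize "imitating the original joke dataset" is to sample hard negatives from a generative language model prompted with joke openings, so the fakes share surface style with real jokes. The sketch below assumes a generic `gpt2` text-generation pipeline as a placeholder; the Dutch models and the paper's actual generation setup may differ.

```python
# Sketch: building "hard" negative examples for humor detection by letting
# a generative LM imitate the joke corpus, then labeling outputs as
# non-jokes. "gpt2" is a placeholder assumption, not the paper's model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

jokes = [
    "Why did the scarecrow win an award? He was outstanding in his field.",
]

# Prompt with joke openings so the fakes look joke-like, which makes the
# binary task harder than separating jokes from, say, news headlines.
hard_negatives = []
for joke in jokes:
    prompt = " ".join(joke.split()[:4])
    out = generator(prompt, max_new_tokens=30, do_sample=True,
                    num_return_sequences=1)
    hard_negatives.append(out[0]["generated_text"])

# Positive label 1 for real jokes, 0 for generated look-alikes.
dataset = [(j, 1) for j in jokes] + [(t, 0) for t in hard_negatives]
```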