Humor Mechanics: Advancing Humor Generation with Multistep Reasoning
- URL: http://arxiv.org/abs/2405.07280v1
- Date: Sun, 12 May 2024 13:00:14 GMT
- Title: Humor Mechanics: Advancing Humor Generation with Multistep Reasoning
- Authors: Alexey Tikhonov, Pavel Shtykovskiy
- Abstract summary: We develop a working prototype for humor generation using multi-step reasoning.
We compare our approach with human-created jokes, zero-shot GPT-4 generated humor, and other baselines.
Our findings demonstrate that the multi-step reasoning approach consistently improves the quality of generated humor.
- Score: 11.525355831490828
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we explore the generation of one-liner jokes through multi-step reasoning. Our work involved reconstructing the process behind creating humorous one-liners and developing a working prototype for humor generation. We conducted comprehensive experiments with human participants to evaluate our approach, comparing it with human-created jokes, zero-shot GPT-4 generated humor, and other baselines. The evaluation focused on the quality of humor produced, using human labeling as a benchmark. Our findings demonstrate that the multi-step reasoning approach consistently improves the quality of generated humor. We present the results and share the datasets used in our experiments, offering insights into enhancing humor generation with artificial intelligence.
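The abstract describes decomposing joke writing into explicit reasoning stages rather than prompting a model once. As a rough illustration of what such a pipeline can look like, the sketch below drafts one-liners in three steps (mine associations about a topic, draft a punchline that subverts each, then select the strongest candidate). This is a hypothetical sketch, not the authors' released code: the stage prompts, the `generate_one_liner` function, and the `call_llm` stub are all assumptions for illustration.

```python
"""Minimal multi-step humor-generation sketch (illustrative, not the paper's implementation)."""


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API; returns the model's text reply."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")


def generate_one_liner(topic: str, n_candidates: int = 5) -> str:
    # Step 1: collect common associations/assumptions about the topic.
    associations = call_llm(
        f"List {n_candidates} common associations or assumptions people have "
        f"about '{topic}', one per line."
    ).splitlines()

    # Step 2: for each association, draft a one-liner that subverts it.
    candidates = [
        call_llm(
            f"Write a one-liner joke about '{topic}' that sets up the expectation "
            f"'{assoc.strip()}' and subverts it in the punchline."
        )
        for assoc in associations
        if assoc.strip()
    ]

    # Step 3: ask the model to rank the drafts and return the strongest one.
    ranking_prompt = (
        "Pick the funniest of these one-liners and return it verbatim:\n"
        + "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    )
    return call_llm(ranking_prompt)


if __name__ == "__main__":
    print(generate_one_liner("airports"))
```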
Related papers
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Can Pre-trained Language Models Understand Chinese Humor? [74.96509580592004]
This paper is the first work that systematically investigates the humor understanding ability of pre-trained language models (PLMs).
We construct a comprehensive Chinese humor dataset, which can fully meet all the data requirements of the proposed evaluation framework.
Our empirical study on the Chinese humor dataset yields some valuable observations, which are of great guiding value for future optimization of PLMs in humor understanding and generation.
arXiv Detail & Related papers (2024-07-04T18:13:38Z)
- Getting Serious about Humor: Crafting Humor Datasets with Unfunny Large Language Models [27.936545041302377]
Large language models (LLMs) can generate synthetic data for humor detection via editing texts.
We benchmark LLMs on an existing human dataset and show that current LLMs display an impressive ability to 'unfun' jokes.
We extend our approach to a code-mixed English-Hindi humor dataset, where we find that GPT-4's synthetic data is highly rated by bilingual annotators.
arXiv Detail & Related papers (2024-02-23T02:58:12Z)
- The Naughtyformer: A Transformer Understands Offensive Humor [63.05016513788047]
We introduce a novel jokes dataset filtered from Reddit and solve the subtype classification task using a finetuned Transformer dubbed the Naughtyformer.
We show that our model is significantly better at detecting offensiveness in jokes compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-25T20:37:58Z)
- This joke is [MASK]: Recognizing Humor and Offense with Prompting [9.745213455946324]
Humor is a magnetic component in everyday human interactions and communications.
We investigate the effectiveness of prompting, a new transfer learning paradigm for NLP, for humor recognition.
arXiv Detail & Related papers (2022-10-25T13:02:45Z)
- ExPUNations: Augmenting Puns with Keywords and Explanations [88.58174386894913]
We augment an existing dataset of puns with detailed crowdsourced annotations of keywords.
This is the first humor dataset with such extensive and fine-grained annotations specifically for puns.
We propose two tasks: explanation generation to aid with pun classification and keyword-conditioned pun generation.
arXiv Detail & Related papers (2022-10-24T18:12:02Z)
- Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results [84.37263300062597]
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z)
- "The Boating Store Had Its Best Sail Ever": Pronunciation-attentive Contextualized Pun Recognition [80.59427655743092]
We propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor.
PCPR derives contextualized representation for each word in a sentence by capturing the association between the surrounding context and its corresponding phonetic symbols.
Results demonstrate that the proposed approach significantly outperforms the state-of-the-art methods in pun detection and location tasks.
arXiv Detail & Related papers (2020-04-29T20:12:20Z)
- Let's be Humorous: Knowledge Enhanced Humor Generation [26.886255899651893]
We explore how to generate a punchline given the set-up with the relevant knowledge.
To our knowledge, this is the first attempt to generate punchlines with a knowledge-enhanced model.
The experimental results demonstrate that our method can make use of knowledge to generate fluent, funny punchlines.
arXiv Detail & Related papers (2020-04-28T06:06:18Z)
- Stimulating Creativity with FunLines: A Case Study of Humor Generation in Headlines [9.367224590861913]
We introduce FunLines, a competitive game where players edit news headlines to make them funny.
FunLines makes the humor generation process fun, interactive, collaborative, rewarding and educational.
We show the effectiveness of this data by training humor classification models that outperform a previous benchmark.
arXiv Detail & Related papers (2020-02-05T22:56:11Z)