Prompt to GPT-3: Step-by-Step Thinking Instructions for Humor Generation
- URL: http://arxiv.org/abs/2306.13195v1
- Date: Thu, 22 Jun 2023 20:38:52 GMT
- Title: Prompt to GPT-3: Step-by-Step Thinking Instructions for Humor Generation
- Authors: Yuetian Chen, Bowen Shi and Mei Si
- Abstract summary: This paper explores humor generation using GPT-3 by modeling human comedy writing theory and leveraging step-by-step thinking instructions.
In addition, we explore the role of cognitive distance in creating humor.
- Score: 6.612883925152328
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Artificial intelligence has made significant progress in natural language
processing, with models like GPT-3 demonstrating impressive capabilities.
However, these models still have limitations when it comes to complex tasks
that require an understanding of the user, such as mastering human comedy
writing strategies. This paper explores humor generation using GPT-3 by
modeling human comedy writing theory and leveraging step-by-step thinking
instructions. In addition, we explore the role of cognitive distance in
creating humor.
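The paper's exact prompts are not reproduced in this summary, but the core idea, decomposing joke writing into explicit steps the model follows one at a time, can be sketched as follows. The three-stage premise/twist/punchline decomposition and the `complete()` helper below are illustrative assumptions, not the authors' published prompt chain.

```python
# Minimal sketch of "step-by-step thinking" prompting for humor generation.
# The staged structure (premise -> incongruity -> punchline) is an assumed
# decomposition based on common comedy-writing theory; `complete()` is a
# hypothetical stand-in for any GPT-3-style text-completion API.

def complete(prompt: str) -> str:
    """Placeholder for a call to a large language model, e.g. GPT-3."""
    raise NotImplementedError("wire this to your LLM provider")

def generate_joke(topic: str) -> str:
    # Step 1: establish a mundane, relatable premise about the topic.
    premise = complete(
        f"Write one ordinary, relatable observation about {topic}. "
        "Do not try to be funny yet."
    )
    # Step 2: find an incongruity, i.e. an unexpected angle on the premise.
    # A larger "cognitive distance" between premise and twist tends to read
    # as more surprising (the effect the paper studies).
    twist = complete(
        f"Premise: {premise}\n"
        "Give one surprising, distant association that reinterprets this premise."
    )
    # Step 3: compress premise + twist into a short setup/punchline pair.
    return complete(
        f"Premise: {premise}\nTwist: {twist}\n"
        "Combine these into a short two-sentence joke: setup, then punchline."
    )

# Example (requires a real LLM backend):
# print(generate_joke("airports"))
```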
Related papers
- Innovative Thinking, Infinite Humor: Humor Research of Large Language Models through Structured Thought Leaps [34.35304020094762]
Humor is a culturally nuanced aspect of human language that presents challenges for understanding and generation.
In this paper, we propose a systematic way of thinking about humor generation and, based on it, build the Creative Leap of Structured Thought framework.
arXiv Detail & Related papers (2024-10-14T10:50:16Z)
- Can Pre-trained Language Models Understand Chinese Humor? [74.96509580592004]
This paper is the first work to systematically investigate the humor understanding ability of pre-trained language models (PLMs).
We construct a comprehensive Chinese humor dataset that fully meets the data requirements of the proposed evaluation framework.
Our empirical study on this dataset yields valuable observations that can guide future work on PLMs for humor understanding and generation.
arXiv Detail & Related papers (2024-07-04T18:13:38Z)
- Humor Mechanics: Advancing Humor Generation with Multistep Reasoning [11.525355831490828]
We develop a working prototype for humor generation using multi-step reasoning.
We compare our approach with human-created jokes, zero-shot GPT-4-generated humor, and other baselines.
Our findings demonstrate that the multi-step reasoning approach consistently improves the quality of generated humor.
arXiv Detail & Related papers (2024-05-12T13:00:14Z)
- Genetic Auto-prompt Learning for Pre-trained Code Intelligence Language Models [54.58108387797138]
We investigate the effectiveness of prompt learning in code intelligence tasks.
Existing automatic prompt design methods are of limited use for code intelligence tasks.
We propose Genetic Auto Prompt (GenAP), which uses an elaborate genetic algorithm to automatically design prompts; a minimal sketch of the idea appears after this list.
arXiv Detail & Related papers (2024-03-20T13:37:00Z)
- Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results [84.37263300062597]
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z)
- Using cognitive psychology to understand GPT-3 [0.0]
We study GPT-3, a recent large language model, using tools from cognitive psychology.
We assess GPT-3's decision-making, information search, deliberation, and causal reasoning abilities.
arXiv Detail & Related papers (2022-06-21T20:06:03Z)
- CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities [92.79451009324268]
We present CoAuthor, a dataset designed for revealing GPT-3's capabilities in assisting creative and argumentative writing.
We demonstrate that CoAuthor can address questions about GPT-3's language, ideation, and collaboration capabilities.
We discuss how this work may facilitate a more principled discussion around LMs' promises and pitfalls in relation to interaction design.
arXiv Detail & Related papers (2022-01-18T07:51:57Z)
- Reframing Instructional Prompts to GPTk's Language [72.69833640335519]
We propose reframing techniques for model designers to create effective prompts for language models.
Our results show that reframing improves few-shot learning performance by 14% while reducing sample complexity.
The performance gains are particularly important for large language models such as GPT-3, where tuning models or prompts on large datasets is not feasible.
arXiv Detail & Related papers (2021-09-16T09:44:43Z)
- Advancing Humor-Focused Sentiment Analysis through Improved Contextualized Embeddings and Model Architecture [0.0]
Humor allows us to express thoughts and feelings conveniently and effectively.
As language models become ubiquitous through virtual assistants and IoT devices, the need for humor-aware models grows rapidly.
arXiv Detail & Related papers (2020-11-23T22:30:32Z)
- Language Models are Few-Shot Learners [61.36677350504291]
We show that scaling up language models greatly improves task-agnostic, few-shot performance.
We train GPT-3, an autoregressive language model with 175 billion parameters, and test its performance in the few-shot setting.
GPT-3 achieves strong performance on many NLP datasets, including translation, question answering, and cloze tasks; a minimal few-shot prompt sketch appears after this list.
arXiv Detail & Related papers (2020-05-28T17:29:03Z)
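The GenAP summary above does not spell out the algorithm's operators, so the following is a minimal, generic genetic-algorithm sketch of the idea: evolve a population of prompt strings through selection, crossover, and mutation against a fitness score. The toy fitness heuristic, the operators, and the hyperparameters here are assumptions, not GenAP's published configuration.

```python
# Generic genetic-algorithm loop for automatic prompt design, sketching the
# idea behind GenAP as summarized above. In practice, fitness() would score
# a prompt by the model's dev-set performance when queried with it.
import random

def fitness(prompt: str) -> float:
    """Toy placeholder: prefer prompts of roughly twelve words."""
    return -abs(len(prompt.split()) - 12)

def crossover(a: str, b: str) -> str:
    """Single-point crossover on word sequences."""
    wa, wb = a.split(), b.split()
    cut = random.randint(1, max(1, min(len(wa), len(wb)) - 1))
    return " ".join(wa[:cut] + wb[cut:])

def mutate(prompt: str, vocab: list[str], rate: float = 0.1) -> str:
    """Randomly replace each word with a vocabulary item."""
    return " ".join(random.choice(vocab) if random.random() < rate else w
                    for w in prompt.split())

def evolve(seeds: list[str], vocab: list[str],
           generations: int = 20, pop_size: int = 16) -> str:
    pop = list(seeds)
    while len(pop) < pop_size:                       # fill initial population
        pop.append(mutate(random.choice(seeds), vocab, rate=0.3))
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)), vocab)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

seed = ["Summarize the following code and explain what it does in one sentence."]
vocab = seed[0].split() + ["Describe", "Explain", "briefly", "clearly"]
print(evolve(seed, vocab))
```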
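For the few-shot setting described in the GPT-3 paper, the model is simply conditioned on a handful of in-context demonstrations with no gradient updates. Below is a minimal sketch of assembling such a prompt, using the English-to-French format popularized by that paper; the helper function name is ours.

```python
# Minimal few-shot prompt assembly in the style described by the GPT-3
# paper: k demonstrations placed in context, no fine-tuning. The model is
# then asked to complete the final line.
demonstrations = [
    ("sea otter", "loutre de mer"),
    ("peppermint", "menthe poivrée"),
    ("plush giraffe", "girafe en peluche"),
]

def few_shot_prompt(query: str) -> str:
    lines = ["Translate English to French:"]
    for en, fr in demonstrations:
        lines.append(f"{en} => {fr}")
    lines.append(f"{query} =>")   # the model completes the translation
    return "\n".join(lines)

print(few_shot_prompt("cheese"))
```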
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.