Stimulating Creativity with FunLines: A Case Study of Humor Generation in Headlines
- URL: http://arxiv.org/abs/2002.02031v1
- Date: Wed, 5 Feb 2020 22:56:11 GMT
- Title: Stimulating Creativity with FunLines: A Case Study of Humor Generation in Headlines
- Authors: Nabil Hossain, John Krumm, Tanvir Sajed, and Henry Kautz
- Abstract summary: We introduce FunLines, a competitive game where players edit news headlines to make them funny.
FunLines makes the humor generation process fun, interactive, collaborative, rewarding and educational.
We show the effectiveness of this data by training humor classification models that outperform a previous benchmark.
- Score: 9.367224590861913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building datasets of creative text, such as humor, is quite challenging. We
introduce FunLines, a competitive game where players edit news headlines to
make them funny, and where they rate the funniness of headlines edited by
others. FunLines makes the humor generation process fun, interactive,
collaborative, rewarding and educational, keeping players engaged and providing
humor data at a very low cost compared to traditional crowdsourcing approaches.
FunLines offers useful performance feedback, assisting players in getting
better over time at generating and assessing humor, as our analysis shows. This
helps to further increase the quality of the generated dataset. We show the
effectiveness of this data by training humor classification models that
outperform a previous benchmark, and we release this dataset to the public.
Related papers
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z)
- MatchTime: Towards Automatic Soccer Game Commentary Generation [52.431010585268865]
We consider constructing an automatic soccer game commentary model to improve the audiences' viewing experience.
First, observing the prevalent video-text misalignment in existing datasets, we manually annotate timestamps for 49 matches.
Second, we propose a multi-modal temporal alignment pipeline to automatically correct and filter the existing dataset at scale.
Third, based on our curated dataset, we train an automatic commentary generation model, named MatchVoice.
arXiv Detail & Related papers (2024-06-26T17:57:25Z)
- Humor Mechanics: Advancing Humor Generation with Multistep Reasoning [11.525355831490828]
We develop a working prototype for humor generation using multi-step reasoning.
We compare our approach with human-created jokes, zero-shot GPT-4 generated humor, and other baselines.
Our findings demonstrate that the multi-step reasoning approach consistently improves the quality of generated humor.
arXiv Detail & Related papers (2024-05-12T13:00:14Z)
- Getting Serious about Humor: Crafting Humor Datasets with Unfunny Large Language Models [27.936545041302377]
Large language models (LLMs) can generate synthetic data for humor detection via editing texts.
We benchmark LLMs on an existing human dataset and show that current LLMs display an impressive ability to 'unfun' jokes.
We extend our approach to a code-mixed English-Hindi humor dataset, where we find that GPT-4's synthetic data is highly rated by bilingual annotators.
arXiv Detail & Related papers (2024-02-23T02:58:12Z)
- The Naughtyformer: A Transformer Understands Offensive Humor [63.05016513788047]
We introduce a novel jokes dataset filtered from Reddit and solve the subtype classification task using a finetuned Transformer dubbed the Naughtyformer.
We show that our model is significantly better at detecting offensiveness in jokes compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-25T20:37:58Z)
- ExPUNations: Augmenting Puns with Keywords and Explanations [88.58174386894913]
We augment an existing dataset of puns with detailed crowdsourced annotations of keywords.
This is the first humor dataset with such extensive and fine-grained annotations specifically for puns.
We propose two tasks: explanation generation to aid with pun classification and keyword-conditioned pun generation.
arXiv Detail & Related papers (2022-10-24T18:12:02Z)
- Unsupervised Neural Stylistic Text Generation using Transfer Learning and Adapters [66.17039929803933]
We propose a novel transfer learning framework which updates only 0.3% of model parameters to learn style-specific attributes for response generation.
We learn style specific attributes from the PERSONALITY-CAPTIONS dataset.
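The claim of updating only 0.3% of parameters is characteristic of adapter-style tuning: small bottleneck modules are trained per layer while the base model stays frozen. The paper does not publish its exact configuration, so the sizes below (hidden size, bottleneck width, layer count, base parameter count) are hypothetical; the sketch only illustrates how such a small trainable fraction arises.

```python
def adapter_param_fraction(hidden_size: int, bottleneck: int,
                           num_layers: int, base_params: int) -> float:
    """Estimate the trainable fraction for adapter-style tuning.

    Assumes each adapter is a down-projection plus an up-projection,
    each with a bias term; all other parameters are frozen.
    """
    per_adapter = (hidden_size * bottleneck + bottleneck      # down-projection
                   + bottleneck * hidden_size + hidden_size)  # up-projection
    trainable = per_adapter * num_layers
    return trainable / (base_params + trainable)

# Illustrative (assumed) numbers in the range of a small Transformer:
# 12 layers, hidden size 768, bottleneck 16, ~124M frozen parameters.
fraction = adapter_param_fraction(768, 16, 12, 124_000_000)
```

With these assumed sizes the trainable share works out to a fraction of a percent, which is the regime the paper's 0.3% figure describes.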
arXiv Detail & Related papers (2022-10-07T00:09:22Z)
- Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results [84.37263300062597]
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z)
- "So You Think You're Funny?": Rating the Humour Quotient in Standup Comedy [24.402762942487367]
We devise a novel scoring mechanism to annotate the training data with a humour quotient score using the audience's laughter.
The normalized duration (laughter duration divided by the clip duration) of laughter in each clip is used to compute this humour score on a five-point scale (0-4).
We use this dataset to train a model that provides a "funniness" score, on a five-point scale, given the audio and its corresponding text.
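The scoring mechanism above reduces to simple arithmetic: normalize the laughter duration by the clip duration, then map the result onto the 0-4 scale. The paper's exact binning is not given, so the uniform bucketing below is an assumption for illustration only.

```python
def humour_quotient(laughter_duration: float, clip_duration: float,
                    scale_max: int = 4) -> int:
    """Map normalized laughter duration to a five-point (0-4) humour score.

    Hypothetical sketch: assumes uniform buckets over the normalized
    duration, since the paper does not specify its binning.
    """
    if clip_duration <= 0:
        raise ValueError("clip_duration must be positive")
    ratio = laughter_duration / clip_duration  # normalized duration in [0, 1]
    # Split [0, 1] into scale_max + 1 equal buckets; clamp the top edge.
    return min(scale_max, int(ratio * (scale_max + 1)))

# A clip where half the runtime is laughter lands mid-scale.
score = humour_quotient(laughter_duration=5.0, clip_duration=10.0)  # → 2
```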
arXiv Detail & Related papers (2021-10-25T09:46:46Z) - "Judge me by my size (noun), do you?'' YodaLib: A Demographic-Aware
Humor Generation Framework [31.115389392654492]
We propose an automatic humor generation framework for Mad Libs stories, while accounting for the demographic backgrounds of the desired audience.
We build upon the BERT platform to predict location-biased word fillings in incomplete sentences, and we fine-tune BERT to classify location-specific humor in a sentence.
We leverage these components to produce YodaLib, a fully-automated Mad Libs style humor generation framework, which selects and ranks appropriate candidate words and sentences.
arXiv Detail & Related papers (2020-05-31T18:11:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.