What Makes Programmers Laugh? Exploring the Subreddit r/ProgrammerHumor
- URL: http://arxiv.org/abs/2410.07020v1
- Date: Wed, 9 Oct 2024 16:04:12 GMT
- Title: What Makes Programmers Laugh? Exploring the Subreddit r/ProgrammerHumor
- Authors: Miikka Kuutila, Leevi Rantala, Junhao Li, Simo Hosio, Mika Mäntylä
- Abstract summary: This study aims to investigate programming-related humor in a large social media community.
We collected 139,718 submissions from the Reddit subreddit r/ProgrammerHumor.
Our results indicate that predicting the humor of software developers is difficult.
- Score: 6.590885070238399
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: Humor is a fundamental part of human communication, with prior work linking positive humor in the workplace to positive outcomes, such as improved performance and job satisfaction. Aims: This study aims to investigate programming-related humor in a large social media community. Methodology: We collected 139,718 submissions from the Reddit subreddit r/ProgrammerHumor. Both textual and image-based (memes) submissions were considered. The image data was processed with OCR to extract text from images for NLP analysis. Multiple regression models were built to investigate what makes submissions humorous. Additionally, a random sample of 800 submissions was labeled by human annotators regarding their relation to theories of humor, suitability for the workplace, the need for programming knowledge to understand the submission, and whether images in image-based submissions added context to the submission. Results: Our results indicate that predicting the humor of software developers is difficult. Our best regression model was able to explain only 10% of the variance. However, statistically significant differences were observed between topics, submission times, and associated humor theories. Our analysis reveals that the highest submission scores are achieved by image-based submissions that are created during the winter months in the northern hemisphere, between 2-3pm UTC on weekends, which are distinctly related to the superiority and incongruity theories of humor, and are about the topic of "Learning". Conclusions: Predicting humor with natural language processing methods is challenging. We discuss the benefits and inherent difficulties in assessing the perceived humor of submissions, as well as possible avenues for future work. Additionally, our replication package should help future studies and can act as a joke repository for the software industry and education.
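The abstract describes a concrete pipeline: OCR the meme images to recover their text, build NLP features over the combined submission text, and fit regression models against submission scores, with the best model explaining only about 10% of the variance. The sketch below is a minimal, hypothetical illustration of such a pipeline, not the authors' replication package; the choice of pytesseract and scikit-learn, the CSV file name, and the `text`/`score` column names are all assumptions made for illustration.

```python
# Minimal sketch of the pipeline described in the abstract (not the authors'
# replication package): OCR meme images to text, build TF-IDF features over
# submission text, and fit a regression predicting the Reddit score.
# pytesseract, scikit-learn, the CSV file name, and the column names are
# illustrative assumptions.
import pandas as pd
import pytesseract
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split


def extract_text(image_path: str) -> str:
    """OCR an image-based submission (meme) into plain text for NLP analysis."""
    return pytesseract.image_to_string(Image.open(image_path))


# Hypothetical table of submissions: 'text' holds the title plus any
# OCR-extracted image text, 'score' is the submission's Reddit score.
df = pd.read_csv("programmerhumor_submissions.csv")

features = TfidfVectorizer(max_features=5000, stop_words="english").fit_transform(df["text"])
scores = df["score"]

X_train, X_test, y_train, y_test = train_test_split(
    features, scores, test_size=0.2, random_state=42
)
model = Ridge(alpha=1.0).fit(X_train, y_train)

# The paper reports that even its best model explains only ~10% of the variance,
# so a similarly low R^2 here would be in line with that finding.
print(f"Held-out R^2: {r2_score(y_test, model.predict(X_test)):.2f}")
```

With bag-of-words style features like these, a low held-out R^2 (on the order of the roughly 10% of variance the paper reports) would be the expected outcome, underscoring how hard developer humor is to predict from text alone.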
Related papers
- THInC: A Theory-Driven Framework for Computational Humor Detection [2.0960189135529212]
There is still no agreement on a single, comprehensive humor theory.
Most computational approaches to detecting humor are not based on existing humor theories.
This paper contributes to bridging this long-standing gap by creating an interpretable framework for humor classification.
arXiv Detail & Related papers (2024-09-02T13:09:26Z)
- Can Pre-trained Language Models Understand Chinese Humor? [74.96509580592004]
This paper is the first work that systematically investigates the humor understanding ability of pre-trained language models (PLMs).
We construct a comprehensive Chinese humor dataset, which can fully meet all the data requirements of the proposed evaluation framework.
Our empirical study on the Chinese humor dataset yields several valuable observations that can guide future optimization of PLMs for humor understanding and generation.
arXiv Detail & Related papers (2024-07-04T18:13:38Z)
- Is AI fun? HumorDB: a curated dataset and benchmark to investigate graphical humor [8.75275650545552]
HumorDB is an image-only dataset specifically designed to advance visual humor understanding.
The dataset enables evaluation through binary classification, range regression, and pairwise comparison tasks.
HumorDB shows potential as a valuable benchmark for powerful large multimodal models.
arXiv Detail & Related papers (2024-06-19T13:51:40Z)
- With Great Humor Comes Great Developer Engagement [11.367562045401554]
The more engaged developers are, the more value they impart to the software they create.
In this paper, we dive deep into an original vector of engagement - humor - and study how it fuels developer engagement.
We collect data about the humorous elements present within three significant, real-world software projects.
We receive unique insights from 125 developers, who share their real-life experiences with humor in software.
arXiv Detail & Related papers (2023-12-04T07:06:02Z)
- The Naughtyformer: A Transformer Understands Offensive Humor [63.05016513788047]
We introduce a novel jokes dataset filtered from Reddit and solve the subtype classification task using a finetuned Transformer dubbed the Naughtyformer.
We show that our model is significantly better at detecting offensiveness in jokes compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-25T20:37:58Z)
- How to Describe Images in a More Funny Way? Towards a Modular Approach to Cross-Modal Sarcasm Generation [62.89586083449108]
We study a new problem of cross-modal sarcasm generation (CMSG), i.e., generating a sarcastic description for a given image.
CMSG is challenging as models need to satisfy the characteristics of sarcasm, as well as the correlation between different modalities.
We propose an Extraction-Generation-Ranking based Modular method (EGRM) for cross-modal sarcasm generation.
arXiv Detail & Related papers (2022-11-20T14:38:24Z)
- Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results [84.37263300062597]
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z)
- Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest [70.40189243067857]
Large neural networks can now generate jokes, but do they really "understand" humor?
We challenge AI models with three tasks derived from the New Yorker Cartoon Caption Contest.
We find that both types of models struggle at all three tasks.
arXiv Detail & Related papers (2022-09-13T20:54:00Z)
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation, content reconstruction, and coarse-to-fine-grained adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
- DeHumor: Visual Analytics for Decomposing Humor [36.300283476950796]
We develop DeHumor, a visual system for analyzing humorous behaviors in public speaking.
To intuitively reveal the building blocks of each concrete example, DeHumor decomposes each humorous video into multimodal features.
We show that DeHumor is able to highlight various building blocks of humor examples.
arXiv Detail & Related papers (2021-07-18T04:01:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.