Don't Take it Personally: Analyzing Gender and Age Differences in
Ratings of Online Humor
- URL: http://arxiv.org/abs/2208.10898v1
- Date: Tue, 23 Aug 2022 12:04:36 GMT
- Title: Don't Take it Personally: Analyzing Gender and Age Differences in
Ratings of Online Humor
- Authors: J. A. Meaney, Steven R. Wilson, Luis Chiruzzo, Walid Magdy
- Abstract summary: We analyze a dataset of humor and offense ratings by male and female annotators of different age groups.
We find that women link humor and offense more strongly than men, and they tend to give lower humor ratings and higher offense scores.
Although there were no gender or age differences in humor detection, women and older annotators signalled that they did not understand joke texts more often than men.
- Score: 12.253859107637727
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational humor detection systems rarely model the subjectivity of humor
responses, or consider alternative reactions to humor - namely offense. We
analyzed a large dataset of humor and offense ratings by male and female
annotators of different age groups. We find that women link these two concepts
more strongly than men, and they tend to give lower humor ratings and higher
offense scores. We also find that the correlation between humor and offense
increases with age. Although there were no gender or age differences in humor
detection, women and older annotators signalled that they did not understand
joke texts more often than men. We discuss implications for computational humor
detection and downstream tasks.
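The per-group correlation analysis described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration rather than the authors' code; the file name and column names (gender, age_group, humor_rating, offense_rating) are hypothetical placeholders for whatever schema the dataset actually uses.

```python
# Minimal sketch of a per-group humor/offense correlation analysis.
# File name and column names are hypothetical; adapt to the real dataset.
import pandas as pd
from scipy.stats import spearmanr

ratings = pd.read_csv("humor_offense_ratings.csv")

# Spearman correlation between humor and offense ratings, computed
# separately for each (gender, age group) annotator cell.
for (gender, age_group), cell in ratings.groupby(["gender", "age_group"]):
    rho, p = spearmanr(cell["humor_rating"], cell["offense_rating"])
    print(f"{gender}, {age_group}: rho={rho:.2f} (p={p:.3f}, n={len(cell)})")
```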
Related papers
- Getting Serious about Humor: Crafting Humor Datasets with Unfunny Large Language Models [27.936545041302377]
Large language models (LLMs) can generate synthetic data for humor detection via editing texts.
We benchmark LLMs on an existing human dataset and show that current LLMs display an impressive ability to 'unfun' jokes.
We extend our approach to a code-mixed English-Hindi humor dataset, where we find that GPT-4's synthetic data is highly rated by bilingual annotators.
arXiv Detail & Related papers (2024-02-23T02:58:12Z) - ChatGPT is fun, but it is not funny! Humor is still challenging Large
Language Models [19.399535453449488]
OpenAI's ChatGPT model almost seems to communicate on a human level and can even tell jokes.
In a series of exploratory experiments around jokes, i.e., generation, explanation, and detection, we seek to understand ChatGPT's capability to grasp and reproduce human humor.
Our empirical evidence indicates that the jokes are not hard-coded, yet for the most part they are not newly generated by the model either.
arXiv Detail & Related papers (2023-06-07T16:10:21Z) - The Naughtyformer: A Transformer Understands Offensive Humor [63.05016513788047]
We introduce a novel jokes dataset filtered from Reddit and solve the subtype classification task using a finetuned Transformer dubbed the Naughtyformer.
We show that our model is significantly better at detecting offensiveness in jokes compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-25T20:37:58Z) - ExPUNations: Augmenting Puns with Keywords and Explanations [88.58174386894913]
- ExPUNations: Augmenting Puns with Keywords and Explanations [88.58174386894913]
We augment an existing dataset of puns with detailed crowdsourced annotations of keywords.
This is the first humor dataset with such extensive and fine-grained annotations specifically for puns.
We propose two tasks: explanation generation to aid with pun classification and keyword-conditioned pun generation.
arXiv Detail & Related papers (2022-10-24T18:12:02Z) - Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results [84.37263300062597]
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z) - Towards Understanding Gender-Seniority Compound Bias in Natural Language
Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z) - Uncertainty and Surprisal Jointly Deliver the Punchline: Exploiting
Incongruity-Based Features for Humor Recognition [0.6445605125467573]
We break down any joke into two distinct components: the set-up and the punchline.
Inspired by the incongruity theory of humor, we model the set-up as the part developing semantic uncertainty.
With increasingly powerful language models, we were able to feed the set-up along with the punchline into the GPT-2 language model.
arXiv Detail & Related papers (2020-12-22T13:48:09Z) - Federated Learning with Diversified Preference for Humor Recognition [40.89453484353102]
- Federated Learning with Diversified Preference for Humor Recognition [40.89453484353102]
We propose the FedHumor approach to recognize humorous text content in a personalized manner through federated learning (FL).
Experiments demonstrate significant advantages of FedHumor in accurately recognizing humorous content for people with diverse humor preferences, compared to nine state-of-the-art humor recognition approaches.
arXiv Detail & Related papers (2020-12-03T03:24:24Z) - "The Boating Store Had Its Best Sail Ever": Pronunciation-attentive
Contextualized Pun Recognition [80.59427655743092]
We propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor.
PCPR derives contextualized representation for each word in a sentence by capturing the association between the surrounding context and its corresponding phonetic symbols.
Results demonstrate that the proposed approach significantly outperforms the state-of-the-art methods in pun detection and location tasks.
arXiv Detail & Related papers (2020-04-29T20:12:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.