DeHumor: Visual Analytics for Decomposing Humor
- URL: http://arxiv.org/abs/2107.08356v1
- Date: Sun, 18 Jul 2021 04:01:07 GMT
- Authors: Xingbo Wang, Yao Ming, Tongshuang Wu, Haipeng Zeng, Yong Wang, Huamin Qu
- Abstract summary: We develop DeHumor, a visual system for analyzing humorous behaviors in public speaking.
To intuitively reveal the building blocks of each concrete example, DeHumor decomposes each humorous video into multimodal features.
We show that DeHumor is able to highlight various building blocks of humor examples.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite being a critical communication skill, grasping humor is challenging
-- a successful use of humor requires a mixture of both engaging content
build-up and an appropriate vocal delivery (e.g., pauses). Prior studies on
computational humor emphasize the textual and audio features immediately
surrounding the punchline, yet overlook the longer-term context setup. Moreover,
existing theories are usually too abstract to explain each concrete humor
snippet. To fill this gap, we develop DeHumor, a visual analytics system for
analyzing humorous behaviors in public speaking. To intuitively reveal the
building blocks of each concrete example, DeHumor decomposes each humorous
video into multimodal features and provides inline annotations of them on the
video script. In particular, to better capture the build-ups, we introduce
content repetition as a complement to features introduced in theories of
computational humor and visualize them in a context linking graph. To help
users locate punchlines that exhibit the features they wish to learn, we
summarize the content (with keywords) and humor feature statistics on an
augmented time matrix. With case studies on stand-up comedy shows and TED
talks, we show that DeHumor is able to highlight various building blocks of
humor examples. In addition, expert interviews with communication coaches and
humor researchers demonstrate the effectiveness of DeHumor for multimodal humor
analysis of speech content and vocal delivery.
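The abstract singles out content repetition as the build-up signal that links a punchline back to its setup. This summary does not specify how DeHumor detects such repetition; a minimal sketch of one plausible approach, surface word overlap between earlier context sentences and the punchline, is shown below (the stopword list and all function names are my own assumptions, not the paper's):

```python
import re

# Small illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it",
             "that", "i", "my", "me", "now"}

def content_words(sentence: str) -> set[str]:
    """Lowercase, tokenize, and drop stopwords."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    return {t for t in tokens if t not in STOPWORDS}

def repetition_links(context: list[str], punchline: str) -> list[tuple[int, set[str]]]:
    """Return (sentence index, shared words) pairs that tie the punchline
    back to earlier context sentences -- the kind of edge a context
    linking graph could draw."""
    punch = content_words(punchline)
    links = []
    for i, sentence in enumerate(context):
        shared = content_words(sentence) & punch
        if shared:
            links.append((i, shared))
    return links

context = [
    "My dog signed me up for a marathon.",
    "I have never run a day in my life.",
]
print(repetition_links(context, "Now the dog runs the marathon and I watch."))
# → [(0, {'dog', 'marathon'})]
```

Note that exact-match overlap misses inflectional variants ("run" vs. "runs" above); a practical implementation would lemmatize tokens or compare embeddings before intersecting.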
Related papers
- THInC: A Theory-Driven Framework for Computational Humor Detection [2.0960189135529212]
There is still no agreement on a single, comprehensive humor theory.
Most computational approaches to detecting humor are not based on existing humor theories.
This paper contributes to bridging this long-standing gap by creating an interpretable framework for humor classification.
arXiv Detail & Related papers (2024-09-02T13:09:26Z)
- Can Pre-trained Language Models Understand Chinese Humor? [74.96509580592004]
This paper is the first work that systematically investigates the humor understanding ability of pre-trained language models (PLMs).
We construct a comprehensive Chinese humor dataset, which can fully meet all the data requirements of the proposed evaluation framework.
Our empirical study on the Chinese humor dataset yields some valuable observations, which are of great guiding value for future optimization of PLMs in humor understanding and generation.
arXiv Detail & Related papers (2024-07-04T18:13:38Z)
- Can Language Models Laugh at YouTube Short-form Videos? [40.47384055149102]
We curate a user-generated dataset of 10K multimodal funny videos from YouTube, called ExFunTube.
Using a video filtering pipeline with GPT-3.5, we verify both verbal and visual elements contributing to humor.
After filtering, we annotate each video with timestamps and text explanations for funny moments.
arXiv Detail & Related papers (2023-10-22T03:01:38Z)
- OxfordTVG-HIC: Can Machine Make Humorous Captions from Images? [27.899718595182172]
We present OxfordTVG-HIC (Humorous Image Captions), a large-scale dataset for humour generation and understanding.
OxfordTVG-HIC features a wide range of emotional and semantic diversity resulting in out-of-context examples.
We show how OxfordTVG-HIC can be leveraged for evaluating the humour of a generated text.
arXiv Detail & Related papers (2023-07-21T14:58:44Z)
- ExPUNations: Augmenting Puns with Keywords and Explanations [88.58174386894913]
We augment an existing dataset of puns with detailed crowdsourced annotations of keywords.
This is the first humor dataset with such extensive and fine-grained annotations specifically for puns.
We propose two tasks: explanation generation to aid with pun classification and keyword-conditioned pun generation.
arXiv Detail & Related papers (2022-10-24T18:12:02Z)
- Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results [84.37263300062597]
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z)
- M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations [72.81164101048181]
We propose a dataset for Multimodal Multiparty Hindi Humor (M2H2) recognition in conversations, containing 6,191 utterances from 13 episodes of the very popular TV series "Shrimaan Shrimati Phir Se".
Each utterance is annotated with humor/non-humor labels and encompasses acoustic, visual, and textual modalities.
The empirical results on M2H2 dataset demonstrate that multimodal information complements unimodal information for humor recognition.
arXiv Detail & Related papers (2021-08-03T02:54:09Z)
- Federated Learning with Diversified Preference for Humor Recognition [40.89453484353102]
We propose the FedHumor approach to recognize humorous text content in a personalized manner through federated learning (FL).
Experiments demonstrate significant advantages of FedHumor in accurately recognizing humorous content for people with diverse humor preferences, compared to 9 state-of-the-art humor recognition approaches.
arXiv Detail & Related papers (2020-12-03T03:24:24Z)
- "The Boating Store Had Its Best Sail Ever": Pronunciation-attentive Contextualized Pun Recognition [80.59427655743092]
We propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor.
PCPR derives contextualized representation for each word in a sentence by capturing the association between the surrounding context and its corresponding phonetic symbols.
Results demonstrate that the proposed approach significantly outperforms the state-of-the-art methods in pun detection and location tasks.
arXiv Detail & Related papers (2020-04-29T20:12:20Z)
- Let's be Humorous: Knowledge Enhanced Humor Generation [26.886255899651893]
We explore how to generate a punchline given the set-up with the relevant knowledge.
To our knowledge, this is the first attempt to generate punchlines with a knowledge-enhanced model.
The experimental results demonstrate that our method can make use of knowledge to generate fluent, funny punchlines.
arXiv Detail & Related papers (2020-04-28T06:06:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.