SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor!
- URL: http://arxiv.org/abs/2008.03781v1
- Date: Sun, 9 Aug 2020 18:17:33 GMT
- Title: SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor!
- Authors: Chhavi Sharma, Deepesh Bhageria, William Scott, Srinivas PYKL,
Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Björn Gambäck
- Abstract summary: The objective of this proposal is to draw the research community's attention to the automatic processing of Internet memes.
The Memotion Analysis task released approximately 10K annotated memes with human-annotated labels, namely sentiment (positive, negative, neutral), type of emotion (sarcastic, funny, offensive, motivational), and the corresponding intensity.
The challenge consisted of three subtasks: sentiment (positive, negative, and neutral) analysis of memes, overall emotion (humour, sarcasm, offensive, and motivational) classification of memes, and classifying the intensity of meme emotion.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Information on social media comprises various modalities, such as
textual, visual, and audio. The NLP and Computer Vision communities often
leverage only one prominent modality in isolation to study social media.
However, the computational processing of Internet memes needs a hybrid
approach. The growing ubiquity of Internet memes on social media platforms
such as Facebook, Instagram, and Twitter further suggests that we cannot
ignore such multimodal content anymore. To the best of our knowledge, little
attention has been paid to meme emotion analysis. The objective of this
proposal is to draw the research community's attention to the automatic
processing of Internet memes. The Memotion Analysis task released
approximately 10K annotated memes with human-annotated labels, namely
sentiment (positive, negative, neutral), type of emotion (sarcastic, funny,
offensive, motivational), and their corresponding intensity. The challenge
consisted of three subtasks: sentiment (positive, negative, and neutral)
analysis of memes, overall emotion (humour, sarcasm, offensive, and
motivational) classification of memes, and classifying the intensity of meme
emotion. The best performances achieved were macro-averaged F1 scores of
0.35, 0.51, and 0.32, respectively, for the three subtasks.
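All three subtasks were scored with macro-averaged F1; a minimal sketch of the metric follows, with hypothetical gold and predicted labels and scikit-learn assumed (this is not the official scorer):

```python
# Minimal sketch: macro-averaged F1 as used to score the Memotion subtasks.
# The gold/predicted labels below are hypothetical, not task data.
from sklearn.metrics import f1_score

gold = ["positive", "negative", "neutral", "positive", "neutral"]
pred = ["positive", "neutral", "neutral", "negative", "neutral"]

# Macro averaging computes F1 per class, then takes the unweighted mean,
# so rare classes weigh as much as frequent ones.
print(f1_score(gold, pred, average="macro"))  # ~0.489
```

Macro averaging is the natural choice here because the Memotion label distribution is imbalanced; a weighted or micro average would let the majority class dominate the score.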
Related papers
- XMeCap: Meme Caption Generation with Sub-Image Adaptability
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation of code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme (a hedged usage sketch follows this entry).
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
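The CLIP mention above can be illustrated with a small, hedged sketch using an off-the-shelf checkpoint via Hugging Face transformers; the model name, file name, and candidate texts are illustrative assumptions, not the paper's actual setup:

```python
# Hedged sketch: scoring a meme image against candidate texts with an
# off-the-shelf CLIP checkpoint. Model name, "meme.png", and the candidate
# texts are illustrative assumptions, not the authors' configuration.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("meme.png")  # hypothetical meme image
texts = ["a bullying meme", "a harmless joke"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # image-text similarity scores
print(logits.softmax(dim=-1))              # probabilities over the texts
```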
- Why Do You Feel This Way? Summarizing Triggers of Emotions in Social Media Posts
We introduce CovidET (Emotions and their Triggers during Covid-19), a dataset of 1,900 English Reddit posts related to COVID-19.
We develop strong baselines to jointly detect emotions and summarize emotion triggers.
Our analyses show that CovidET presents new challenges in emotion-specific summarization, as well as multi-emotion detection in long social media posts.
arXiv Detail & Related papers (2022-10-22T19:10:26Z)
- Detecting and Understanding Harmful Memes: A Survey
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes have hardly been studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Multimodal Analysis of memes for sentiment extraction
The study is based on the Memotion dataset, which involves categorising memes based on irony, comedy, motivation, and overall sentiment.
The best algorithm achieved a macro F1 score of 0.633 for humour classification, 0.55 for motivation classification, 0.61 for sarcasm classification, and 0.575 for the overall sentiment of the meme.
arXiv Detail & Related papers (2021-12-22T12:57:05Z)
- Detecting Harmful Memes and Their Targets
We present HarMeme, the first benchmark dataset, containing 3,544 memes related to COVID-19.
In the first stage, we labeled a meme as very harmful, partially harmful, or harmless; in the second stage, we further annotated the type of target(s) that each harmful meme points to.
The evaluation results using ten unimodal and multimodal models highlight the importance of using multimodal signals for both tasks.
arXiv Detail & Related papers (2021-09-24T17:11:42Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR (a minimal sketch follows this entry), and 2) memes in the wild are more diverse than 'traditional' memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
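A minimal sketch of the OCR caption-extraction step mentioned above, assuming pytesseract (the paper does not name its OCR engine here) and a hypothetical meme.png:

```python
# Minimal sketch of OCR caption extraction from a meme image.
# pytesseract is an assumption; "meme.png" is a hypothetical file.
from PIL import Image
import pytesseract

image = Image.open("meme.png")
caption = pytesseract.image_to_string(image)  # raw OCR text of the overlay
print(caption.strip())
```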
- Entropy and complexity unveil the landscape of memes evolution
We study the evolution of 2 million visual memes from Reddit over ten years, from 2011 to 2020.
We find support for the hypothesis that memes are part of an emerging form of internet metalanguage.
arXiv Detail & Related papers (2021-05-26T07:41:09Z)
- Exercise? I thought you said 'Extra Fries': Leveraging Sentence Demarcations and Multi-hop Attention for Meme Affect Analysis
We propose a multi-hop attention-based deep neural network framework, called MHA-MEME.
Its prime objective is to leverage the spatial-domain correspondence between the visual modality (an image) and various textual segments to extract fine-grained feature representations for classification.
We evaluate MHA-MEME on the 'Memotion Analysis' dataset for all three sub-tasks: sentiment classification, affect classification, and affect class quantification (a toy sketch of the multi-hop attention pattern follows this entry).
arXiv Detail & Related papers (2021-03-23T08:21:37Z)
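A toy sketch of the multi-hop attention pattern MHA-MEME builds on: a text-derived query repeatedly attends over image-region features and is refined at each hop. Shapes and hop count are assumptions, not the authors' implementation:

```python
# Toy sketch of multi-hop attention over image regions, in the spirit of
# MHA-MEME (not the authors' code). All shapes and the hop count are assumed.
import torch
import torch.nn.functional as F

regions = torch.randn(49, 512)  # hypothetical image-region features
query = torch.randn(512)        # representation of one text segment

for hop in range(2):                          # each hop refines the query
    scores = regions @ query / 512 ** 0.5     # scaled dot-product scores
    weights = F.softmax(scores, dim=0)        # attention over regions
    attended = weights @ regions              # weighted region summary
    query = query + attended                  # refined query for next hop

print(query.shape)  # torch.Size([512]) - fused multimodal feature
```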
- Dissecting the Meme Magic: Understanding Indicators of Virality in Image Memes
We find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions.
On the other hand, image memes that do not present a clear subject on which the viewer can focus, or that include long text, are unlikely to be re-shared by users.
We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared (a toy classifier sketch follows this entry).
arXiv Detail & Related papers (2021-01-16T22:36:51Z)
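A toy sketch of the kind of virality classifier the paper describes, turning the reported indicators (close-up scale, characters, emotion, long text) into binary features; the data values and the model choice are invented for illustration:

```python
# Toy virality classifier over hand-crafted meme features, loosely in the
# spirit of the paper (not its actual model or data). Values are invented.
from sklearn.linear_model import LogisticRegression

# Features: [close-up scale, contains characters, emotional tone, long text]
X = [[1, 1, 1, 0],   # close-up, characters, emotional, short text
     [0, 0, 0, 1],   # no clear subject, long text
     [1, 0, 1, 0],
     [0, 1, 0, 1]]
y = [1, 0, 1, 0]     # 1 = widely re-shared, 0 = not re-shared

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1, 1, 0, 0]]))  # prediction for an unseen meme
```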
- gundapusunil at SemEval-2020 Task 8: Multimodal Memotion Analysis
We present a multi-modal sentiment analysis system using deep neural networks combining Computer Vision and Natural Language Processing.
Our aim differs from the usual sentiment analysis goal of predicting whether a text expresses positive or negative sentiment.
Our system was developed using a CNN and an LSTM and outperformed the baseline score (a minimal fusion sketch follows this entry).
arXiv Detail & Related papers (2020-10-09T09:53:14Z)
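A minimal sketch of a CNN-plus-LSTM late-fusion classifier in the spirit of the system described above; all layer sizes, vocabulary, and inputs are assumptions, not the authors' architecture:

```python
# Minimal CNN + LSTM multimodal classifier sketch (not the authors' code).
# All dimensions, the vocabulary size, and the fusion scheme are assumed.
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    def __init__(self, vocab=5000, classes=3):
        super().__init__()
        self.cnn = nn.Sequential(                      # tiny image encoder
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.embed = nn.Embedding(vocab, 64)
        self.lstm = nn.LSTM(64, 32, batch_first=True)  # text encoder
        self.head = nn.Linear(16 + 32, classes)        # late fusion

    def forward(self, image, tokens):
        img = self.cnn(image)                          # (B, 16)
        _, (h, _) = self.lstm(self.embed(tokens))      # h: (1, B, 32)
        return self.head(torch.cat([img, h[-1]], dim=1))

model = MemeClassifier()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 5000, (2, 12)))
print(logits.shape)  # torch.Size([2, 3]) - sentiment class logits
```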
This list is automatically generated from the titles and abstracts of the papers on this site.