gundapusunil at SemEval-2020 Task 8: Multimodal Memotion Analysis
- URL: http://arxiv.org/abs/2010.04470v1
- Date: Fri, 9 Oct 2020 09:53:14 GMT
- Title: gundapusunil at SemEval-2020 Task 8: Multimodal Memotion Analysis
- Authors: Sunil Gundapu, Radhika Mamidi
- Abstract summary: We present a multi-modal sentiment analysis system using deep neural networks combining Computer Vision and Natural Language Processing.
Our aim differs from the usual sentiment analysis goal of predicting whether a text expresses positive or negative sentiment.
Our system was developed using a CNN and an LSTM and outperformed the baseline score.
- Score: 7.538482310185133
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent technological advances in Internet and social media usage have resulted in faster and more efficient platforms of communication. These platforms include visual, textual, and speech mediums and have given rise to a unique social phenomenon called Internet memes. Internet memes take the form of images with witty, catchy, or sarcastic text descriptions. In this paper, we present a multi-modal sentiment analysis system using deep neural networks that combines Computer Vision and Natural Language Processing. Our aim differs from the usual sentiment analysis goal of predicting whether a text expresses positive or negative sentiment; instead, we aim to classify an Internet meme as positive, negative, or neutral, identify the type of humor expressed, and quantify the extent to which a particular effect is expressed. Our system was developed using a CNN and an LSTM and outperformed the baseline score.
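As a concrete illustration of the architecture the abstract describes, below is a minimal sketch of a CNN + LSTM multimodal classifier with separate heads for the three Memotion subtasks (sentiment, humor type, intensity). This is not the authors' released code: the class name, layer sizes, vocabulary size, and head dimensions are all illustrative assumptions.

```python
# Hypothetical sketch of a CNN (image) + LSTM (text) multimodal classifier.
# All dimensions and names are assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn

class MemotionNet(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, lstm_dim=128,
                 n_sentiment=3, n_humor=4, n_intensity=4):
        super().__init__()
        # Image channel: a small CNN over 3x224x224 meme images.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Text channel: embedding + LSTM over tokenized meme captions.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, lstm_dim, batch_first=True)
        # Fused image + text features feed one head per subtask.
        fused = 32 + lstm_dim
        self.sentiment_head = nn.Linear(fused, n_sentiment)
        self.humor_head = nn.Linear(fused, n_humor)
        self.intensity_head = nn.Linear(fused, n_intensity)

    def forward(self, image, tokens):
        img_feat = self.cnn(image)                    # (batch, 32)
        _, (h_n, _) = self.lstm(self.embed(tokens))   # final hidden state
        fused = torch.cat([img_feat, h_n[-1]], dim=1)
        return (self.sentiment_head(fused),
                self.humor_head(fused),
                self.intensity_head(fused))

# Usage: random inputs just to check shapes.
model = MemotionNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(1, 10000, (2, 20)))
print([t.shape for t in logits])  # [(2, 3), (2, 4), (2, 4)]
```

In a model of this style, the two channels are trained jointly, and each head can be optimized with its own cross-entropy loss over the shared fused representation.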
Related papers
- XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z)
- SoMeLVLM: A Large Vision Language Model for Social Media Processing [78.47310657638567]
We introduce a Large Vision Language Model for Social Media Processing (SoMeLVLM).
SoMeLVLM is a cognitive framework equipped with five key capabilities including knowledge & comprehension, application, analysis, evaluation, and creation.
Our experiments demonstrate that SoMeLVLM achieves state-of-the-art performance in multiple social media tasks.
arXiv Detail & Related papers (2024-02-20T14:02:45Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation from code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
- Attention-based Interactive Disentangling Network for Instance-level Emotional Voice Conversion [81.1492897350032]
Emotional Voice Conversion aims to manipulate speech according to a given emotion while preserving the non-emotional components.
We propose an Attention-based Interactive diseNtangling Network (AINN) that leverages instance-wise emotional knowledge for voice conversion.
arXiv Detail & Related papers (2023-12-29T08:06:45Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content moderation evasion.
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Emotion Analysis using Multi-Layered Networks for Graphical Representation of Tweets [0.10499611180329801]
The paper proposes a novel algorithm that graphically models social media text using multi-layered networks (MLNs) in order to better encode relationships across independent sets of tweets.
State-of-the-art Graph Neural Networks (GNNs) are used to extract information from the Tweet-MLN and make predictions based on the extracted graph features.
Results show that the MLTA not only predicts from a larger set of possible emotions, delivering more accurate sentiment than the standard positive, negative, or neutral labels, but also allows for accurate group-level predictions of Twitter data.
arXiv Detail & Related papers (2022-07-02T20:26:55Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes have not really been studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Do Images really do the Talking? Analysing the significance of Images in Tamil Troll meme classification [0.16863755729554888]
We explore the significance of visual features of images in classifying memes.
We classify memes as troll or non-troll based on the images and the text on them.
arXiv Detail & Related papers (2021-08-09T09:04:42Z)
- SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor! [20.55903557920223]
The objective of this proposal is to draw the research community's attention to the automatic processing of Internet memes.
The Memotion Analysis task released approximately 10K annotated memes with human-annotated labels, namely sentiment (positive, negative, neutral), type of emotion (sarcastic, funny, offensive, motivational), and the corresponding intensity.
The challenge consisted of three subtasks: sentiment (positive, negative, and neutral) analysis of memes, overall emotion (humour, sarcasm, offensive, and motivational) classification of memes, and classifying intensity of meme emotion.
arXiv Detail & Related papers (2020-08-09T18:17:33Z)
- YNU-HPCC at SemEval-2020 Task 8: Using a Parallel-Channel Model for Memotion Analysis [11.801902984731129]
This paper proposes a parallel-channel model to process the textual and visual information in memes.
In the shared task of identifying and categorizing memes, we preprocess the dataset according to the language behaviors on social media.
We then adapt and fine-tune Bidirectional Encoder Representations from Transformers (BERT) for the text, while two types of convolutional neural network (CNN) models are used to extract features from the pictures (a minimal illustrative sketch of this parallel-channel design appears after this list).
arXiv Detail & Related papers (2020-07-28T03:20:31Z)
- IITK at SemEval-2020 Task 8: Unimodal and Bimodal Sentiment Analysis of Internet Memes [2.2385755093672044]
We present our approaches for the Memotion Analysis problem as posed in SemEval-2020 Task 8.
The goal of this task is to classify memes based on their emotional content and sentiment.
Our results show that a text-only approach, a simple Feed-Forward Neural Network (FFNN) with Word2vec embeddings as input, outperforms all the others.
arXiv Detail & Related papers (2020-07-21T14:06:26Z)
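To make the parallel-channel idea from the YNU-HPCC entry above concrete, here is a minimal sketch: BERT encodes the meme text while a small CNN encodes the image, and the concatenated features drive a classifier. This is a hypothetical reconstruction, not the authors' released code; the class name, CNN layer sizes, fusion-by-concatenation, and the choice of bert-base-uncased are all assumptions.

```python
# Hypothetical parallel-channel sketch: BERT text channel + CNN image channel.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class ParallelChannelNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        # Text channel: pretrained BERT; the [CLS] vector summarizes the caption.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Image channel: a toy CNN standing in for the paper's image models.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        self.classifier = nn.Linear(self.bert.config.hidden_size + 32, n_classes)

    def forward(self, input_ids, attention_mask, image):
        text_feat = self.bert(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state[:, 0]
        img_feat = self.cnn(image)
        # Fuse the two channels by simple concatenation.
        return self.classifier(torch.cat([text_feat, img_feat], dim=1))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["when the code finally compiles"], return_tensors="pt",
                  padding=True, truncation=True)
model = ParallelChannelNet()
logits = model(batch["input_ids"], batch["attention_mask"],
               torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 3])
```

Keeping the channels separate until a late fusion step lets each modality be swapped or fine-tuned independently, which is the main appeal of the parallel-channel design.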
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.