MEMEX: Detecting Explanatory Evidence for Memes via Knowledge-Enriched
Contextualization
- URL: http://arxiv.org/abs/2305.15913v2
- Date: Sat, 27 May 2023 13:09:46 GMT
- Title: MEMEX: Detecting Explanatory Evidence for Memes via Knowledge-Enriched
Contextualization
- Authors: Shivam Sharma, Ramaneswaran S, Udit Arora, Md. Shad Akhtar and Tanmoy
Chakraborty
- Abstract summary: We propose a novel task, MEMEX: given a meme and a related document, the aim is to mine the context that succinctly explains the background of the meme.
To benchmark MCC (Meme Context Corpus), we propose MIME, a multimodal neural framework that uses commonsense-enriched meme representations and a layered approach to capture the cross-modal semantic dependencies between the meme and the context.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Memes are a powerful tool for communication over social media. Their affinity
for evolving across politics, history, and sociocultural phenomena makes them
an ideal communication vehicle. To comprehend the subtle message conveyed
within a meme, one must understand the background that facilitates its holistic
assimilation. Aside from the digital archiving of memes and their metadata by a
few websites such as knowyourmeme.com, there is currently no efficient way to
deduce a meme's context dynamically. In this work, we propose a novel task,
MEMEX: given a meme and a related document, the aim is to mine the context that
succinctly explains the background of the meme. First, we develop MCC (Meme
Context Corpus), a novel dataset for MEMEX. Further, to benchmark MCC, we
propose MIME (MultImodal Meme Explainer), a multimodal neural framework that
uses commonsense-enriched meme representations and a layered approach to
capture the cross-modal semantic dependencies between the meme and the context.
MIME surpasses several unimodal and multimodal systems, yielding an absolute
improvement of ~4% F1 score over the best baseline. Lastly, we conduct detailed
analyses of MIME's performance, highlighting the aspects that could lead to
optimal modeling of cross-modal contextual associations.
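
For intuition only, below is a minimal sketch of the kind of layered cross-modal scoring the abstract describes: a fused meme representation interacts with per-sentence encodings of the related document through stacked attention layers, and each sentence is scored as explanatory evidence. This is not MIME's actual implementation; the module structure, dimensions, layer count, and selection threshold are all assumptions made for the sketch (PyTorch).

```python
# Hypothetical sketch of layered cross-modal evidence scoring; all names,
# shapes, and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class LayeredCrossModalScorer(nn.Module):
    def __init__(self, dim: int = 256, num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        # One cross-attention block per layer: context sentences query the
        # meme representation, followed by a feed-forward refinement.
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_layers)
        )
        self.ffns = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_layers)
        )
        # Per-sentence binary head: is this sentence explanatory evidence?
        self.score = nn.Linear(dim, 1)

    def forward(self, meme: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # meme:    (batch, 1, dim)           fused image+text (+ commonsense) vector
        # context: (batch, n_sentences, dim) one vector per document sentence
        h = context
        for attn, ffn in zip(self.attns, self.ffns):
            # Sentences attend to the meme and are refined layer by layer.
            upd, _ = attn(h, meme, meme)
            h = h + upd
            h = h + ffn(h)
        return self.score(h).squeeze(-1)  # (batch, n_sentences) evidence logits


# Toy usage: one meme, a 12-sentence related document, 256-d features.
scorer = LayeredCrossModalScorer()
logits = scorer(torch.randn(1, 1, 256), torch.randn(1, 12, 256))
evidence = logits.sigmoid() > 0.5  # hypothetical threshold for evidence selection
print(evidence.shape)  # torch.Size([1, 12])
```

The stacked attention-plus-feed-forward layering stands in for the paper's "layered approach" to cross-modal dependencies; in practice the meme and sentence encoders, the fusion order, and the evidence decision would follow the MCC task setup rather than this toy threshold.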