"Subverting the Jewtocracy": Online Antisemitism Detection Using
Multimodal Deep Learning
- URL: http://arxiv.org/abs/2104.05947v1
- Date: Tue, 13 Apr 2021 05:22:55 GMT
- Title: "Subverting the Jewtocracy": Online Antisemitism Detection Using
Multimodal Deep Learning
- Authors: Mohit Chandra, Dheeraj Pailla, Himanshu Bhatia, Aadilmehdi Sanchawala,
Manish Gupta, Manish Shrivastava, Ponnurangam Kumaraguru
- Abstract summary: We present the first work in the direction of automated multimodal detection of online antisemitism.
We label two datasets with 3,102 and 3,509 social media posts from Twitter and Gab respectively.
We present a multimodal deep learning system that detects the presence of antisemitic content and its specific antisemitism category using text and images from posts.
- Score: 23.048101866010445
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The exponential rise of online social media has enabled the creation,
distribution, and consumption of information at an unprecedented rate. However,
it has also led to the burgeoning of various forms of online abuse. Increasing
cases of online antisemitism have become one of the major concerns because of
its socio-political consequences. Unlike other major forms of online abuse like
racism, sexism, etc., online antisemitism has not been studied much from a
machine learning perspective. To the best of our knowledge, we present the
first work in the direction of automated multimodal detection of online
antisemitism. The task poses multiple challenges that include extracting
signals across multiple modalities, contextual references, and handling
multiple aspects of antisemitism. Unfortunately, there does not exist any
publicly available benchmark corpus for this critical task. Hence, we collect
and label two datasets with 3,102 and 3,509 social media posts from Twitter and
Gab respectively. Further, we present a multimodal deep learning system that
detects the presence of antisemitic content and its specific antisemitism
category using text and images from posts. We perform an extensive set of
experiments on the two datasets to evaluate the efficacy of the proposed
system. Finally, we also present a qualitative analysis of our study.
Related papers
- Monitoring the evolution of antisemitic discourse on extremist social media using BERT [3.3037858066178662]
Racism and intolerance on social media contribute to a toxic online environment which may spill offline to foster hatred.
Tracking antisemitic themes and their associated terminology over time in online discussions could help monitor the sentiments of their participants.
arXiv Detail & Related papers (2024-02-06T20:34:49Z)
- SADAS: A Dialogue Assistant System Towards Remediating Norm Violations in Bilingual Socio-Cultural Conversations [56.31816995795216]
Socially-Aware Dialogue Assistant System (SADAS) is designed to ensure that conversations unfold with respect and understanding.
Our system's novel architecture includes: (1) identifying the categories of norms present in the dialogue, (2) detecting potential norm violations, (3) evaluating the severity of these violations, and (4) implementing targeted remedies to rectify the breaches.
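The four-stage pipeline listed above can be sketched as a chain of functions. The toy rules, category names, and severity scale below are hypothetical stand-ins; in SADAS each stage is a learned component, not a heuristic.

```python
def identify_norm_categories(utterance):
    # Stage 1: which norm categories does this turn touch? (toy keyword rule)
    return ["greeting"] if "hello" in utterance.lower() else ["politeness"]

def detect_violation(utterance, categories):
    # Stage 2: flag a potential norm violation (toy rule: all-caps = shouting)
    return utterance.isupper()

def rate_severity(utterance):
    # Stage 3: score severity on a 0..1 scale (toy heuristic on punctuation)
    return min(1.0, sum(c == "!" for c in utterance) / 3)

def remediate(utterance, severity):
    # Stage 4: propose a targeted remedy for the breach
    return "Consider rephrasing more politely." if severity > 0.5 else None

turn = "GIVE IT TO ME NOW!!!"
cats = identify_norm_categories(turn)
violated = detect_violation(turn, cats)
severity = rate_severity(turn)
print(violated, severity, remediate(turn, severity))  # True 1.0 Consider rephrasing more politely.
```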
arXiv Detail & Related papers (2024-01-29T08:54:21Z)
- How toxic is antisemitism? Potentials and limitations of automated toxicity scoring for antisemitic online content [0.0]
Perspective API is a text toxicity assessment service by Google and Jigsaw.
We show how toxic antisemitic texts are rated and how toxicity scores differ across subforms of antisemitism.
We show that, on a basic level, Perspective API recognizes antisemitic content as toxic, but shows critical weaknesses with respect to non-explicit forms of antisemitism.
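A study like this scores texts via Perspective API's `comments:analyze` endpoint. The sketch below builds a request body and parses a response in the documented shape, without making a network call; the example response value is fabricated for illustration.

```python
import json

# Public Perspective API endpoint (an API key is required in practice).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text, attributes=("TOXICITY",), languages=("en",)):
    """Build the JSON body for an AnalyzeComment call."""
    return {
        "comment": {"text": text},
        "languages": list(languages),
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def summary_score(response, attribute="TOXICITY"):
    """Extract the 0..1 summary score for one requested attribute."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]

body = build_request("example post text")
print(json.dumps(body, indent=2))

# Made-up response fragment in the documented response shape:
fake_response = {"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.83}}}}
print(summary_score(fake_response))  # 0.83
```

Comparing such scores across explicit and coded antisemitic texts is what exposes the weaknesses on non-explicit forms noted above.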
arXiv Detail & Related papers (2023-10-05T15:23:04Z)
- Codes, Patterns and Shapes of Contemporary Online Antisemitism and Conspiracy Narratives -- an Annotation Guide and Labeled German-Language Dataset in the Context of COVID-19 [0.0]
The volume of antisemitic and conspiracy theory content on the Internet makes data-driven algorithmic approaches essential.
We develop an annotation guide for antisemitic and conspiracy theory online content in the context of the COVID-19 pandemic.
We provide working definitions, including specific forms of antisemitism such as encoded and post-Holocaust antisemitism.
arXiv Detail & Related papers (2022-10-13T10:32:39Z)
- DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation [72.18912216025029]
We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
arXiv Detail & Related papers (2022-05-25T09:54:59Z)
- DISARM: Detecting the Victims Targeted by Harmful Memes [49.12165815990115]
DISARM is a framework that uses named entity recognition and person identification to detect harmful memes.
We show that DISARM significantly outperforms ten unimodal and multimodal systems.
It reduces the error rate for harmful target identification by up to 9 absolute points over several strong multimodal rivals.
arXiv Detail & Related papers (2022-05-11T19:14:26Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes have not really been studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Online antisemitism across platforms [0.0]
This work applies explainable AI to identify English and German antisemitic expressions of dehumanization, verbal aggression, and conspiracies in social media messages across platforms.
arXiv Detail & Related papers (2021-12-14T23:06:21Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.