It's a Thin Line Between Love and Hate: Using the Echo in Modeling
Dynamics of Racist Online Communities
- URL: http://arxiv.org/abs/2012.01133v1
- Date: Mon, 16 Nov 2020 20:47:54 GMT
- Title: It's a Thin Line Between Love and Hate: Using the Echo in Modeling
Dynamics of Racist Online Communities
- Authors: Eyal Arviv, Simo Hanouna, Oren Tsur
- Abstract summary: The (((echo))) symbol made it to mainstream social networks in early 2016, with the intensification of the U.S. Presidential race.
It was used by members of the alt-right, white supremacists and internet trolls to tag people of Jewish heritage.
Tracking this trending meme, its meaning, and its function has proved elusive for its semantic ambiguity.
- Score: 0.8164433158925593
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The (((echo))) symbol -- triple parentheses surrounding a name -- made it to
mainstream social networks in early 2016, with the intensification of the U.S.
Presidential race. It was used by members of the alt-right, white supremacists
and internet trolls to tag people of Jewish heritage -- a modern incarnation of
the infamous yellow badge (Judenstern) used in Nazi-Germany. Tracking this
trending meme, its meaning, and its function has proved elusive for its
semantic ambiguity (e.g., a symbol for a virtual hug).
In this paper we report on the construction of an appropriate dataset
allowing the reconstruction of networks of racist communities and the way they
are embedded in the broader community. We combine natural language processing
and structural network analysis to study communities promoting hate. In order
to overcome dog-whistling and linguistic ambiguity, we propose a multi-modal
neural architecture based on a BERT transformer and a BiLSTM network on the
tweet level, while also taking into account the user's ego-network and meta
features. Our multi-modal neural architecture outperforms a set of strong
baselines. We further show how the use of language and network structure in
tandem allows the detection of the leaders of the hate communities. We further
study the "intersectionality" of hate and show that the antisemitic echo
correlates with hate speech that targets other minority and protected groups.
Finally, we analyze the role IRA trolls assumed in this network as part of the
Russian interference campaign. Our findings allow a better understanding of
recent manifestations of racism and the dynamics that facilitate it.
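The fusion described above (a contextual text encoder feeding a BiLSTM, concatenated with user-level network and meta features before classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the BERT encoder is stubbed with a plain embedding layer, and all dimensions, feature sizes, and the mean-pooling step are hypothetical choices.

```python
import torch
import torch.nn as nn

class MultiModalHateClassifier(nn.Module):
    """Sketch of a tweet-level classifier fusing text with user features.

    The embedding layer below is a stand-in for contextual BERT token
    representations; dimensions are illustrative, not from the paper.
    """
    def __init__(self, vocab_size=30522, token_dim=768,
                 lstm_dim=128, user_dim=32, n_classes=2):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, token_dim)  # BERT stand-in
        self.bilstm = nn.LSTM(token_dim, lstm_dim,
                              batch_first=True, bidirectional=True)
        # Classifier sees pooled BiLSTM states + ego-network/meta features.
        self.classifier = nn.Sequential(
            nn.Linear(2 * lstm_dim + user_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, token_ids, user_features):
        h, _ = self.bilstm(self.token_emb(token_ids))
        pooled = h.mean(dim=1)  # average over the token sequence
        fused = torch.cat([pooled, user_features], dim=-1)
        return self.classifier(fused)

model = MultiModalHateClassifier()
logits = model(torch.randint(0, 30522, (4, 16)),  # batch of 4 tweets, 16 tokens
               torch.randn(4, 32))                # 32 user/meta features each
print(logits.shape)  # torch.Size([4, 2])
```

In practice the embedding stub would be replaced by a pretrained transformer, and the user features by ego-network statistics and account metadata; the fusion-by-concatenation pattern stays the same.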
Related papers
- Silent Signals, Loud Impact: LLMs for Word-Sense Disambiguation of Coded Dog Whistles [47.61526125774749]
A dog whistle is a form of coded communication that carries a secondary meaning to specific audiences and is often weaponized for racial and socioeconomic discrimination.
We present an approach for word-sense disambiguation of dog whistles from standard speech using Large Language Models (LLMs)
We leverage this technique to create a dataset of 16,550 high-confidence coded examples of dog whistles used in formal and informal communication.
arXiv Detail & Related papers (2024-06-10T23:09:19Z) - Monitoring the evolution of antisemitic discourse on extremist social media using BERT [3.3037858066178662]
Racism and intolerance on social media contribute to a toxic online environment which may spill offline to foster hatred.
Tracking antisemitic themes and their associated terminology over time in online discussions could help monitor the sentiments of their participants.
arXiv Detail & Related papers (2024-02-06T20:34:49Z) - From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language
Models [73.25963871034858]
We present the first large-scale computational investigation of dogwhistles.
We develop a typology of dogwhistles, curate the largest-to-date glossary of over 300 dogwhistles, and analyze their usage in historical U.S. politicians' speeches.
We show that harmful content containing dogwhistles avoids toxicity detection, highlighting online risks of such coded language.
arXiv Detail & Related papers (2023-05-26T18:00:57Z) - CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a
Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z) - Hatemongers ride on echo chambers to escalate hate speech diffusion [23.714548893849393]
We analyze more than 32 million posts from over 6.8 million users across three popular online social networks.
We find that hatemongers play a more crucial role in governing the spread of information compared to singled-out hateful content.
arXiv Detail & Related papers (2023-02-05T20:30:48Z) - Improved two-stage hate speech classification for twitter based on Deep
Neural Networks [0.0]
Hate speech is a form of online harassment that involves the use of abusive language.
The model we propose in this work is an extension of an existing approach based on LSTM neural network architectures.
Our study includes a performance comparison of several proposed alternative methods for the second stage evaluated on a public corpus of 16k tweets.
arXiv Detail & Related papers (2022-06-08T20:57:41Z) - DISARM: Detecting the Victims Targeted by Harmful Memes [49.12165815990115]
DISARM is a framework that uses named entity recognition and person identification to detect harmful memes.
We show that DISARM significantly outperforms ten unimodal and multimodal systems.
It can reduce the relative error rate for harmful target identification by up to 9 points absolute over several strong multimodal rivals.
arXiv Detail & Related papers (2022-05-11T19:14:26Z) - Detecting White Supremacist Hate Speech using Domain Specific Word
Embedding with Deep Learning and BERT [0.0]
White supremacist hate speech is one of the most recently observed harmful content on social media.
This research investigates the viability of automatically detecting white supremacist hate speech on Twitter by using deep learning and natural language processing techniques.
arXiv Detail & Related papers (2020-10-01T12:44:24Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media
during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z) - Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of contents produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.