On the rise of fear speech in online social media
- URL: http://arxiv.org/abs/2303.10311v1
- Date: Sat, 18 Mar 2023 02:46:49 GMT
- Title: On the rise of fear speech in online social media
- Authors: Punyajoy Saha, Kiran Garimella, Narla Komal Kalyan, Saurabh Kumar
Pandey, Pauras Mangesh Meher, Binny Mathew, and Animesh Mukherjee
- Abstract summary: Fear speech, as the name suggests, attempts to incite fear about a target community.
This article presents a large-scale study of 400K fear speech and over 700K hate speech posts collected from Gab.com to understand the prevalence of fear speech.
- Score: 7.090807766284268
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, social media platforms have been heavily moderated to prevent
the spread of online hate speech, which is usually rife with toxic words and is
directed toward an individual or a community. Owing to such heavy moderation, newer and
more subtle techniques are being deployed. One of the most striking among these
is fear speech. Fear speech, as the name suggests, attempts to incite fear
about a target community. Although subtle, it might be highly effective, often
pushing communities toward physical conflict. Therefore, understanding its
prevalence in social media is of paramount importance. This article presents a
large-scale study of the prevalence of fear speech, drawing on 400K fear speech
and over 700K hate speech posts collected from Gab.com. Remarkably, users posting a
large number of fear speech accrue more followers and occupy more central
positions in social networks than users posting a large number of hate speech.
They can also reach out to benign users more effectively than hate speech users
through replies, reposts, and mentions. This connects to the fact that, unlike
hate speech, fear speech has almost zero toxic content, making it look
plausible. Moreover, while fear speech topics mostly portray a community as a
perpetrator using a (fake) chain of argumentation, hate speech topics hurl
direct multitarget insults, thus pointing to why general users could be more
susceptible to fear speech. Our findings extend even to other platforms
(Twitter and Facebook) and thus necessitate sophisticated moderation
policies and mass awareness to combat fear speech.
Related papers
- Hostile Counterspeech Drives Users From Hate Subreddits [1.5035331281822]
We analyze the effect of counterspeech on newcomers within hate subreddits on Reddit.
Non-hostile counterspeech is ineffective at keeping users from fully disengaging from these hate subreddits.
A single hostile counterspeech comment substantially reduces a newcomer's future likelihood of engagement.
arXiv Detail & Related papers (2024-05-28T17:12:41Z) - Analyzing User Characteristics of Hate Speech Spreaders on Social Media [20.57872238271025]
We analyze the role of user characteristics in hate speech resharing across different types of hate speech.
We find that users with little social influence tend to share more hate speech.
Political anti-Trump and anti-right-wing hate is reshared by users with larger social influence.
arXiv Detail & Related papers (2023-10-24T12:17:48Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a
Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z) - Quantifying How Hateful Communities Radicalize Online Users [2.378428291297535]
We measure the impact of joining fringe hateful communities in terms of hate speech propagated to the rest of the social network.
We use data from Reddit to assess the effect of joining one type of echo chamber: a digital community of like-minded users exhibiting hateful behavior.
We show that the harmful speech does not remain contained within the community.
arXiv Detail & Related papers (2022-09-19T01:13:29Z) - Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable
Topics for the Russian Language [76.58220021791955]
We present two text collections labelled according to a binary notion of inappropriateness and a multinomial notion of sensitive topics.
To objectivise the notion of inappropriateness, we define it in a data-driven way through crowdsourcing.
arXiv Detail & Related papers (2022-03-04T15:59:06Z) - Nipping in the Bud: Detection, Diffusion and Mitigation of Hate Speech
on Social Media [21.47216483704825]
This article presents methodological challenges that hinder building automated hate mitigation systems.
We discuss a series of our proposed solutions to limit the spread of hate speech on social media.
arXiv Detail & Related papers (2022-01-04T03:44:46Z) - Comparing the Language of QAnon-related content on Parler, Gab, and
Twitter [68.8204255655161]
Parler, a "free speech" platform popular with conservatives, was taken offline in January 2021 due to the lack of moderation of hateful and QAnon- and other conspiracy-related content.
We compare posts with the hashtag #QAnon on Parler over a month-long period with posts on Twitter and Gab.
Gab has the highest proportion of #QAnon posts with hate terms, and Parler and Twitter are similar in this respect.
On all three platforms, posts mentioning female political figures, Democrats, or Donald Trump have more anti-social language than posts mentioning male politicians, Republicans, or
arXiv Detail & Related papers (2021-11-22T11:19:15Z) - "Short is the Road that Leads from Fear to Hate": Fear Speech in Indian
WhatsApp Groups [8.682669903229165]
We perform the first large-scale study of fear speech across thousands of public WhatsApp groups discussing politics in India.
We build models to classify fear speech and observe that current state-of-the-art NLP models do not perform well at this task.
arXiv Detail & Related papers (2021-02-07T18:14:16Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media
during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z) - Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis of 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, such as Facebook, may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.