Reading In-Between the Lines: An Analysis of Dissenter
- URL: http://arxiv.org/abs/2009.01772v2
- Date: Sat, 26 Sep 2020 15:16:59 GMT
- Title: Reading In-Between the Lines: An Analysis of Dissenter
- Authors: Erik Rye and Jeremy Blackburn and Robert Beverly
- Abstract summary: We study Dissenter, a browser and web application that provides a conversational overlay for any web page.
In this work, we obtain a history of Dissenter comments, users, and the websites being discussed.
Our corpus consists of approximately 1.68M comments made by 101k users commenting on 588k distinct URLs.
- Score: 2.2881898195409884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efforts by content creators and social networks to enforce legal and
policy-based norms, e.g., blocking hate speech and users, have driven the rise of
unrestricted communication platforms. One such recent effort is Dissenter, a
browser and web application that provides a conversational overlay for any web
page. These conversations hide in plain sight - users of Dissenter can see and
participate in these conversations, whereas visitors using other browsers are
oblivious to their existence. Further, the website and content owners have no
power over the conversation as it resides in an overlay outside their control.
In this work, we obtain a history of Dissenter comments, users, and the
websites being discussed, from the initial release of Dissenter in Feb. 2019
through Apr. 2020 (14 months). Our corpus consists of approximately 1.68M
comments made by 101k users commenting on 588k distinct URLs. We first analyze
macro characteristics of the network, including the user-base, comment
distribution, and growth. We then use toxicity dictionaries, Perspective API,
and a Natural Language Processing model to understand the nature of the
comments and measure the propensity of particular websites and content to
elicit hateful and offensive Dissenter comments. Using curated rankings of
media bias, we examine the conditional probability of hateful comments given
left and right-leaning content. Finally, we study Dissenter as a social
network, and identify a core group of users with high comment toxicity.
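As a concrete illustration of the measurement pipeline the abstract describes, the sketch below scores individual comments with the Perspective API and then estimates the conditional probability of a toxic comment given the media-bias leaning of the commented URL. This is not the authors' code: the request and response shape follow Perspective's public REST documentation, while the input file `comments.csv`, the `leaning` column, and the 0.8 toxicity cutoff are assumptions made purely for illustration.

```python
"""
Illustrative sketch only (not the Dissenter paper's pipeline): score comments
with the Perspective API, then estimate P(toxic comment | media leaning).
`comments.csv`, the `leaning` column, and TOXIC_THRESHOLD are assumptions.
"""
import requests
import pandas as pd

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # assumed: a valid Perspective API key
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={API_KEY}"
)
TOXIC_THRESHOLD = 0.8  # hypothetical cutoff; the paper's choice may differ


def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY probability for a single comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(ANALYZE_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def conditional_toxicity(df: pd.DataFrame) -> pd.Series:
    """Fraction of comments above the cutoff, grouped by media leaning."""
    toxic = df["toxicity"] >= TOXIC_THRESHOLD
    return toxic.groupby(df["leaning"]).mean()


if __name__ == "__main__":
    # Hypothetical input: one row per comment, with the URL's bias label.
    df = pd.read_csv("comments.csv")  # columns: comment_text, leaning
    df["toxicity"] = df["comment_text"].apply(toxicity_score)
    print(conditional_toxicity(df))
```

Here P(toxic | leaning) is simply the fraction of comments above the cutoff within each leaning bucket; the paper may use different thresholds, attributes, or toxicity dictionaries.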
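The abstract also mentions identifying a core group of users with high comment toxicity when studying Dissenter as a social network, but it does not specify the graph construction. The sketch below shows one plausible reading under stated assumptions: users are linked when they comment on the same URL, and a k-core decomposition stands in for the notion of a network "core". The edge definition, the value of k, and the input file are all assumptions, not the authors' method.

```python
"""
Hedged sketch: compare comment toxicity of a k-core of users against all
users. The co-commenting graph, k value, and input file are assumptions.
"""
import itertools

import networkx as nx
import pandas as pd

K = 10  # hypothetical core order

# Hypothetical input: one row per comment with user, url, toxicity columns.
df = pd.read_csv("comments.csv")

G = nx.Graph()
for _, group in df.groupby("url"):
    users = group["user"].unique()
    # Link every pair of users that commented on the same URL.
    G.add_edges_from(itertools.combinations(users, 2))

core_users = set(nx.k_core(G, k=K).nodes)

mean_core = df[df["user"].isin(core_users)]["toxicity"].mean()
mean_all = df["toxicity"].mean()
print(f"mean toxicity, core users: {mean_core:.3f} vs. all users: {mean_all:.3f}")
```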
Related papers
- Analyzing Toxicity in Deep Conversations: A Reddit Case Study [0.0]
This work employs a tree-based approach to understand how users behave concerning toxicity in public conversation settings.
We collect both the posts and the comment sections of the top 100 posts from 8 Reddit communities that allow profanity, totaling over 1 million responses.
We find that toxic comments increase the likelihood of subsequent toxic comments being produced in online conversations.
arXiv Detail & Related papers (2024-04-11T16:10:44Z)
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- Classification of social media Toxic comments using Machine learning models [0.0]
This work addresses the problem of toxic comments on social media platforms, where individuals use disrespectful, abusive, and unreasonable language.
Such anti-social behavior occurs during online debates, comment threads, and fights.
The comments containing explicit language can be classified into various categories, such as toxic, severe toxic, obscene, threat, insult, and identity hate.
To protect users from offensive language, companies have started flagging comments and blocking users.
arXiv Detail & Related papers (2023-04-14T05:40:11Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Going Extreme: Comparative Analysis of Hate Speech in Parler and Gab [2.487445341407889]
We provide the first large-scale analysis of hate speech on Parler.
In order to improve classification accuracy, we annotated 10K Parler posts.
We find that hate mongers make up 16.1% of Parler's active users.
arXiv Detail & Related papers (2022-01-27T19:29:17Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Online Hate: Behavioural Dynamics and Relationship with Misinformation [0.0]
We perform hate speech detection on a corpus of more than one million comments on YouTube videos.
Our results show that, consistent with Godwin's law, online debates tend to degenerate towards increasingly toxic exchanges of views.
arXiv Detail & Related papers (2021-05-28T17:30:51Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)