TROLLMAGNIFIER: Detecting State-Sponsored Troll Accounts on Reddit
- URL: http://arxiv.org/abs/2112.00443v1
- Date: Wed, 1 Dec 2021 12:10:24 GMT
- Title: TROLLMAGNIFIER: Detecting State-Sponsored Troll Accounts on Reddit
- Authors: Mohammad Hammas Saeed and Shiza Ali and Jeremy Blackburn and Emiliano
De Cristofaro and Savvas Zannettou and Gianluca Stringhini
- Abstract summary: We present TROLLMAGNIFIER, a detection system for troll accounts.
TROLLMAGNIFIER learns the typical behavior of known troll accounts and identifies additional accounts that behave similarly.
We show that using TROLLMAGNIFIER, one can grow the initial knowledge of potential trolls by over 300%.
- Score: 11.319938541673578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Growing evidence points to recurring influence campaigns on social media,
often sponsored by state actors aiming to manipulate public opinion on
sensitive political topics. Typically, campaigns are performed through
instrumented accounts, known as troll accounts; despite their prominence,
however, little work has been done to detect these accounts in the wild. In
this paper, we present TROLLMAGNIFIER, a detection system for troll accounts.
Our key observation, based on analysis of known Russian-sponsored troll
accounts identified by Reddit, is that they show loose coordination, often
interacting with each other to further specific narratives. Therefore, troll
accounts controlled by the same actor often show similarities that can be
leveraged for detection. TROLLMAGNIFIER learns the typical behavior of known troll accounts and identifies additional accounts that behave similarly. We train
TROLLMAGNIFIER on a set of 335 known troll accounts and run it on a large
dataset of Reddit accounts. Our system identifies 1,248 potential troll
accounts; we then provide a multi-faceted analysis to corroborate the
correctness of our classification. In particular, 66% of the detected accounts
show signs of being instrumented by malicious actors (e.g., they were created
on the same day as a known troll, they have since been suspended by Reddit, etc.). They also discuss topics similar to those of the known troll accounts and
exhibit temporal synchronization in their activity. Overall, we show that using
TROLLMAGNIFIER, one can grow the initial knowledge of potential trolls provided
by Reddit by over 300%.
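The paper's implementation is not reproduced here, but the core idea (learn the behavioral profile of the 335 known troll accounts, then flag accounts that look alike) can be sketched as a supervised classifier over per-account behavioral features. The account structure and feature set below are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of TROLLMAGNIFIER-style detection: train on known trolls,
# flag lookalike accounts. Features are illustrative assumptions.
from dataclasses import dataclass
from typing import List
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Account:
    n_submissions: int      # posts authored
    n_comments: int         # comments authored
    replies_to_known: int   # hypothetical: interactions with known trolls
    mean_gap_hours: float   # mean time between consecutive actions
    account_age_days: int

def features(a: Account) -> List[float]:
    total = a.n_submissions + a.n_comments
    return [
        a.n_submissions,
        a.n_comments,
        a.replies_to_known / max(total, 1),  # loose-coordination signal
        a.mean_gap_hours,
        a.account_age_days,
    ]

# Toy training data: known trolls (label 1) vs. presumed-benign accounts (0).
trolls = [Account(40, 300, 25, 2.0, 90), Account(55, 410, 31, 1.5, 85)]
benign = [Account(5, 120, 0, 14.0, 900), Account(2, 60, 1, 30.0, 1500)]

X = [features(a) for a in trolls + benign]
y = [1] * len(trolls) + [0] * len(benign)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

candidate = Account(38, 290, 19, 2.2, 88)
print("troll probability:", clf.predict_proba([features(candidate)])[0][1])
```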
Related papers
- Coordinated Reply Attacks in Influence Operations: Characterization and Detection [43.98568073610101]
We characterize coordinated reply attacks in the context of influence operations on Twitter.
Our analysis reveals that the primary targets of these attacks are influential people such as journalists, news media, state officials, and politicians.
We propose two supervised machine-learning models: one classifies tweets to determine whether they are targeted by a reply attack, and one classifies the accounts replying to a targeted tweet to determine whether they are part of a coordinated attack.
arXiv Detail & Related papers (2024-10-25T02:57:08Z)
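A minimal sketch of the two-stage setup described above, assuming toy features: one classifier flags tweets targeted by a reply attack, a second classifies the accounts replying to a flagged tweet.

```python
# Hedged two-stage sketch; all features and data are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Stage 1: tweet-level features, e.g.,
# [reply_count, replies_in_first_10min, fraction_of_replies_from_new_accounts]
tweet_X = [[850, 400, 0.7], [12, 1, 0.1], [640, 350, 0.6], [30, 3, 0.05]]
tweet_y = [1, 0, 1, 0]  # 1 = targeted by a reply attack
tweet_clf = LogisticRegression().fit(tweet_X, tweet_y)

# Stage 2: account-level features for repliers to a targeted tweet, e.g.,
# [account_age_days, replies_per_day, overlap_with_other_repliers]
acct_X = [[15, 120.0, 0.8], [2000, 3.0, 0.05], [10, 90.0, 0.7], [900, 5.0, 0.1]]
acct_y = [1, 0, 1, 0]  # 1 = part of the coordinated attack
acct_clf = LogisticRegression().fit(acct_X, acct_y)

tweet = [700, 380, 0.65]
if tweet_clf.predict([tweet])[0] == 1:          # tweet looks targeted
    for replier in ([12, 100.0, 0.75], [1500, 2.0, 0.02]):
        label = "coordinated" if acct_clf.predict([replier])[0] else "organic"
        print(replier, "->", label)
```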
- Russo-Ukrainian War: Prediction and explanation of Twitter suspension [47.61306219245444]
This study focuses on Twitter's suspension mechanism and analyzes the shared content and account features that may lead to suspension.
Using the Twitter API, we obtained a dataset of 107.7M tweets from 9.8 million users.
Our results reveal scam campaigns exploiting trending topics around the Russo-Ukrainian conflict for Bitcoin fraud, spam, and advertisement campaigns.
arXiv Detail & Related papers (2023-06-06T08:41:02Z)
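A hedged sketch of the prediction-plus-explanation idea, assuming hypothetical account features; feature importances stand in for the paper's explanation component.

```python
# Illustrative suspension-prediction sketch with a simple explanation step.
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["account_age_days", "tweets_per_day", "url_ratio", "hashtag_ratio"]
X = [
    [10, 200.0, 0.9, 0.8],    # young, hyperactive, URL/hashtag heavy
    [2500, 4.0, 0.1, 0.1],
    [20, 150.0, 0.8, 0.7],
    [1200, 6.0, 0.2, 0.15],
]
y = [1, 0, 1, 0]  # 1 = later suspended (toy labels)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
for name, imp in sorted(zip(FEATURES, clf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.2f}")  # crude 'explanation' of what drives predictions
```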
- Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications in inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z)
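A toy, rule-based illustration of the attack surface: subtly editing the claim-salient snippet of an evidence passage. The paper's attacks are model-driven; the polarity-flip rule below only shows the kind of perturbation involved.

```python
# Toy evidence-manipulation sketch: flip polarity cues in the sentence
# that mentions the claim's entity. Purely illustrative, not the paper's method.
import re

def perturb_salient_snippet(evidence: str, claim_entity: str) -> str:
    """Flip polarity words in the sentence mentioning the claim's entity."""
    flips = {"confirmed": "denied", "increased": "decreased", "won": "lost"}
    sentences = re.split(r"(?<=[.!?])\s+", evidence)
    out = []
    for s in sentences:
        if claim_entity.lower() in s.lower():   # the claim-salient snippet
            for src, dst in flips.items():
                s = re.sub(rf"\b{src}\b", dst, s)
        out.append(s)
    return " ".join(out)

evidence = "Officials confirmed the report. Unrelated context follows."
print(perturb_salient_snippet(evidence, "officials"))
# -> "Officials denied the report. Unrelated context follows."
```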
- DISARM: Detecting the Victims Targeted by Harmful Memes [49.12165815990115]
DISARM is a framework that uses named entity recognition and person identification to detect the victims targeted by harmful memes.
We show that DISARM significantly outperforms ten unimodal and multimodal systems.
It reduces the error rate for harmful target identification by up to 9 absolute points over several strong multimodal rivals.
arXiv Detail & Related papers (2022-05-11T19:14:26Z)
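A minimal sketch of the NER-plus-person-identification step, using spaCy as an assumed toolchain (the paper's exact models may differ). The watchlist is hypothetical.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

KNOWN_TARGETS = {"angela merkel", "joe biden"}  # hypothetical watchlist

def candidate_victims(meme_text: str) -> list:
    """Extract PERSON entities (e.g., from OCR'd meme text) and keep those
    matching a known-target list."""
    doc = nlp(meme_text)
    people = {ent.text.lower() for ent in doc.ents if ent.label_ == "PERSON"}
    return sorted(people & KNOWN_TARGETS)

print(candidate_victims("Joe Biden can't even read a teleprompter, says meme"))
```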
- Characterizing, Detecting, and Predicting Online Ban Evasion [9.949354222717773]
Malicious users can easily create a new account to evade online bans.
We conduct the first data-driven study of ban evasion, i.e., the act of circumventing bans on an online platform.
We find that evasion child accounts resemble their banned parent accounts along several behavioral axes.
arXiv Detail & Related papers (2022-02-10T18:58:19Z)
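A sketch of the parent/child similarity signal: cosine similarity between behavioral feature vectors, assuming features pre-normalized to [0, 1] and an illustrative threshold.

```python
# Toy 'child resembles banned parent' signal; features and cutoff are assumptions.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# e.g., [posting rate, avg post length, share in community X, share in Y],
# each scaled to [0, 1]
banned_parent = [0.9, 0.7, 0.8, 0.1]
new_account   = [0.85, 0.72, 0.75, 0.12]
unrelated     = [0.1, 0.2, 0.05, 0.9]

SIM_THRESHOLD = 0.95  # illustrative cutoff
for name, acct in [("candidate", new_account), ("control", unrelated)]:
    sim = cosine(banned_parent, acct)
    flag = " -> possible evasion account" if sim > SIM_THRESHOLD else ""
    print(f"{name}: similarity={sim:.3f}{flag}")
```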
- Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and Conspiracy Movements [67.39353554498636]
We perform a large-scale analysis of Telegram by collecting 35,382 different channels and over 130,000,000 messages.
We find that some infamous activities of privacy-preserving Dark Web services, such as carding, are also present on Telegram.
We propose a machine learning model that is able to identify fake channels with an accuracy of 86%.
arXiv Detail & Related papers (2021-11-26T14:53:31Z)
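The paper's fake-channel detector is a learned model; the heuristic below only illustrates plausible signals, such as a channel name imitating a verified handle. Handles and weights are hypothetical.

```python
# Illustrative fake/clone-channel scoring; not the paper's trained model.
from difflib import SequenceMatcher

VERIFIED = ["bbcnews", "cnnbrk"]  # hypothetical verified channel handles

def name_similarity(handle: str) -> float:
    """Highest string similarity to any verified handle (0..1)."""
    return max(SequenceMatcher(None, handle, v).ratio() for v in VERIFIED)

def fake_score(handle: str, is_verified: bool, forward_ratio: float) -> float:
    """Higher = more suspicious. Weights are illustrative, not learned."""
    if is_verified:
        return 0.0
    return 0.7 * name_similarity(handle) + 0.3 * forward_ratio

for handle, verified, fwd in [("bbcnevvs", False, 0.9),
                              ("bbcnews", True, 0.2),
                              ("catpics", False, 0.1)]:
    print(handle, "->", round(fake_score(handle, verified, fwd), 2))
```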
- Exposing Paid Opinion Manipulation Trolls [19.834000431578737]
We show how to find paid trolls on the Web using machine learning.
In this paper, we assume that a user who is called a troll by several different people is likely to be one.
We compare the profiles of (i) paid trolls, (ii) "mentioned" trolls, and (iii) non-trolls, and we further show that a classifier trained to distinguish (ii) from (iii) also does quite well at telling apart (i) from (iii).
arXiv Detail & Related papers (2021-09-26T11:40:14Z)
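A small sketch of the transfer result: train on (ii) vs. (iii), then apply the same classifier to (i) vs. (iii). All feature values are toy assumptions.

```python
# Transfer sketch: a (ii)-vs-(iii) classifier reused for (i)-vs-(iii).
# Toy features: [comments/day, night-posting ratio, share of political threads]
from sklearn.linear_model import LogisticRegression

mentioned_trolls = [[40.0, 0.7, 0.9], [35.0, 0.6, 0.8]]      # class (ii)
non_trolls       = [[5.0, 0.2, 0.2], [8.0, 0.3, 0.1]]        # class (iii)
paid_trolls      = [[45.0, 0.8, 0.95], [38.0, 0.65, 0.85]]   # class (i)

clf = LogisticRegression().fit(mentioned_trolls + non_trolls,
                               [1] * len(mentioned_trolls) + [0] * len(non_trolls))

# The (ii)-vs-(iii) decision boundary also separates (i) from (iii):
print("paid trolls ->", clf.predict(paid_trolls))   # expected: [1 1]
print("non-trolls  ->", clf.predict(non_trolls))    # expected: [0 0]
```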
- Sentiment Analysis for Troll Detection on Weibo [6.961253535504979]
In China, the micro-blogging service provider Sina Weibo is the most popular such service.
To influence public opinion, Weibo trolls can be hired to post deceptive comments.
In this paper, we focus on troll detection via sentiment analysis and other user activity data.
arXiv Detail & Related papers (2021-03-07T14:59:12Z)
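A toy illustration of sentiment-derived troll features; a real system would use a trained Chinese sentiment model rather than the tiny English lexicon assumed here.

```python
# Toy sentiment features per user; lexicon and threshold are assumptions.
POS = {"great", "love", "support"}
NEG = {"terrible", "hate", "fake", "shill"}

def sentiment(comment: str) -> int:
    words = comment.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

def user_features(comments):
    scores = [sentiment(c) for c in comments]
    return {
        "mean_sentiment": sum(scores) / len(scores),
        # hired trolls often post uniformly slanted comments:
        "one_sidedness": max(scores.count(s) for s in set(scores)) / len(scores),
    }

suspect = ["great product love it", "love it great support", "great great"]
print(user_features(suspect))
```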
- TrollHunter [Evader]: Automated Detection [Evasion] of Twitter Trolls During the COVID-19 Pandemic [1.5469452301122175]
TrollHunter is an automated reasoning mechanism used to hunt for trolls on Twitter during the COVID-19 pandemic in 2020.
To counter the COVID-19 infodemic, TrollHunter leverages a linguistic analysis of a multi-dimensional set of Twitter content features.
TrollHunter achieved 98.5% accuracy, 75.4% precision and 69.8% recall over a dataset of 1.3 million tweets.
arXiv Detail & Related papers (2020-12-04T13:46:42Z)
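A hedged sketch in the spirit of TrollHunter's linguistic analysis: TF-IDF n-gram features with a linear classifier over toy tweets and labels.

```python
# Linguistic-feature sketch; tweets, labels, and setup are toy assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "COVID is a hoax wake up sheeple share before deleted",
    "new preprint on vaccine efficacy looks promising",
    "they are hiding the truth about 5G and the virus share now",
    "local clinic extends testing hours this weekend",
]
labels = [1, 0, 1, 0]  # 1 = troll-like content (toy labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # word + bigram features
    LogisticRegression(),
)
model.fit(tweets, labels)
print(model.predict(["share now before they delete the truth"]))
```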
- Russian trolls speaking Russian: Regional Twitter operations and MH17 [68.8204255655161]
In 2018, Twitter released data on accounts identified as Russian trolls.
We analyze the Russian-language operations of these trolls.
We find that trolls' information campaign on the MH17 crash was the largest in terms of tweet count.
arXiv Detail & Related papers (2020-05-13T19:48:12Z)
- Detecting Troll Behavior via Inverse Reinforcement Learning: A Case Study of Russian Trolls in the 2016 US Election [8.332032237125897]
We propose an approach based on Inverse Reinforcement Learning (IRL) to capture troll behavior and identify troll accounts.
As a case study, we consider the troll accounts identified by the US Congress during the investigation of Russian meddling in the 2016 US Presidential election.
We report promising results: the IRL-based approach is able to accurately detect troll accounts.
arXiv Detail & Related papers (2020-01-28T19:50:19Z)
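A heavily simplified sketch of the IRL intuition: summarize each account's action sequence by discounted empirical feature expectations (a standard quantity in apprenticeship learning) and separate trolls from regular users in that space. The paper's actual IRL formulation is richer; the actions, discount, and data below are assumptions.

```python
# Simplified IRL-flavored sketch; not the paper's full formulation.
from sklearn.linear_model import LogisticRegression

ACTIONS = ["tweet", "retweet", "reply", "mention"]
GAMMA = 0.9  # discount factor (assumption)

def feature_expectations(trajectory):
    """Discounted counts of each action over one account's history."""
    mu = [0.0] * len(ACTIONS)
    for t, action in enumerate(trajectory):
        mu[ACTIONS.index(action)] += GAMMA ** t
    return mu

troll_trajs = [["retweet", "retweet", "mention", "retweet", "reply"],
               ["retweet", "mention", "retweet", "retweet", "retweet"]]
user_trajs = [["tweet", "reply", "tweet", "tweet", "reply"],
              ["tweet", "tweet", "reply", "tweet", "tweet"]]

X = [feature_expectations(t) for t in troll_trajs + user_trajs]
y = [1] * len(troll_trajs) + [0] * len(user_trajs)
clf = LogisticRegression().fit(X, y)

probe = ["retweet", "mention", "retweet", "reply", "retweet"]
print("troll probability:",
      clf.predict_proba([feature_expectations(probe)])[0][1])
```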
This list is automatically generated from the titles and abstracts of the papers on this site.