Exposing Paid Opinion Manipulation Trolls
- URL: http://arxiv.org/abs/2109.13726v1
- Date: Sun, 26 Sep 2021 11:40:14 GMT
- Title: Exposing Paid Opinion Manipulation Trolls
- Authors: Todor Mihaylov, Ivan Koychev, Georgi Georgiev, Preslav Nakov
- Abstract summary: We show how to find paid trolls on the Web using machine learning.
In this paper, we assume that a user who is called a troll by several different people is likely to be such.
We compare the profiles of (i) paid trolls vs. (ii) "mentioned" trolls vs. (iii) non-trolls, and we further show that a classifier trained to distinguish (ii) from (iii) does quite well also at telling apart (i) from (iii).
- Score: 19.834000431578737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Web forums have been invaded by opinion manipulation trolls. Some
trolls try to influence other users driven by their own convictions, while in
other cases they can be organized and paid, e.g., by a political party or a PR
agency that gives them specific instructions about what to write. Finding paid
trolls automatically using machine learning is a hard task, as there is not
enough training data to train a classifier; yet some test data can be obtained,
as these trolls are sometimes caught and widely exposed. In this paper,
we solve the training data problem by assuming that a user who is called a
troll by several different people is likely to be such, and one who has never
been called a troll is unlikely to be such. We compare the profiles of (i) paid
trolls vs. (ii) "mentioned" trolls vs. (iii) non-trolls, and we further show
that a classifier trained to distinguish (ii) from (iii) does quite well also
at telling apart (i) from (iii).
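The core of the method is a distant-supervision labeling step followed by a transfer test. Below is a minimal sketch of that pipeline, assuming hypothetical (author, target, text) comment records, a TF-IDF bag-of-words profile per user, a threshold of three distinct accusers, and a logistic-regression classifier; the paper builds richer user profiles, so all of these choices are illustrative assumptions rather than the authors' exact setup.

```python
# Sketch only, not the paper's implementation: weakly label "mentioned" trolls
# from accusations, train (ii)-vs-(iii), then test on (i) paid trolls vs (iii).
from collections import defaultdict

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

MIN_ACCUSERS = 3  # "called a troll by several different people"; exact threshold is an assumption


def label_users(comments):
    """comments: iterable of (author, target, text) triples from a forum dump.

    Returns {user: "mentioned_troll" | "non_troll"}. Users accused by fewer than
    MIN_ACCUSERS distinct people but at least once get no label at all."""
    accusers = defaultdict(set)
    users = set()
    for author, target, text in comments:
        users.update((author, target))
        if "troll" in text.lower():
            accusers[target].add(author)
    labels = {}
    for user in users:
        n = len(accusers.get(user, ()))
        if n >= MIN_ACCUSERS:
            labels[user] = "mentioned_troll"   # class (ii)
        elif n == 0:
            labels[user] = "non_troll"         # class (iii)
    return labels


def train_and_transfer(user_text, weak_labels, paid_trolls, clean_users):
    """Train (ii) vs (iii) on the weakly labelled users, then evaluate the same
    model on held-out (i) paid trolls vs (iii) non-trolls it has never seen."""
    train_users = [u for u in weak_labels if u in user_text]
    y_train = [1 if weak_labels[u] == "mentioned_troll" else 0 for u in train_users]

    vec = TfidfVectorizer()
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(user_text[u] for u in train_users), y_train)

    test_users = list(paid_trolls) + list(clean_users)
    y_test = [1] * len(paid_trolls) + [0] * len(clean_users)
    y_pred = clf.predict(vec.transform(user_text[u] for u in test_users))
    return accuracy_score(y_test, y_pred)
```

The point of the transfer step is that the exposed paid trolls (i) are never used for training; if the (ii)-vs-(iii) classifier still separates (i) from (iii), the cheap accusation-based labels are a usable substitute for scarce ground-truth training data.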
Related papers
- Towards Effective Counter-Responses: Aligning Human Preferences with Strategies to Combat Online Trolling [9.598920004159696]
This paper investigates whether humans have preferred strategies tailored to different types of trolling behaviors.
We introduce a methodology for generating counter-responses to trolls by recommending appropriate response strategies (RSs).
The experimental results demonstrate that our proposed approach guides constructive discussion and reduces the negative effects of trolls.
arXiv Detail & Related papers (2024-10-05T14:01:52Z) - QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking [68.06355980166053]
We propose the Question-guided Multi-hop Fact-Checking (QACHECK) system.
It guides the model's reasoning process by asking a series of questions critical for verifying a claim.
It provides the source of evidence supporting each question, fostering a transparent, explainable, and user-friendly fact-checking process.
arXiv Detail & Related papers (2023-10-11T15:51:53Z) - Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications for inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z) - Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62-7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z) - TROLLMAGNIFIER: Detecting State-Sponsored Troll Accounts on Reddit [11.319938541673578]
We present TROLLMAGNIFIER, a detection system for troll accounts.
TROLLMAGNIFIER learns the typical behavior of known troll accounts and identifies more accounts that behave similarly.
We show that using TROLLMAGNIFIER, one can grow the initial knowledge of potential trolls by over 300%.
arXiv Detail & Related papers (2021-12-01T12:10:24Z) - Sentiment Analysis for Troll Detection on Weibo [6.961253535504979]
In China, the micro-blogging service provider, Sina Weibo, is the most popular such service.
To influence public opinion, Weibo trolls can be hired to post deceptive comments.
In this paper, we focus on troll detection via sentiment analysis and other user activity data.
arXiv Detail & Related papers (2021-03-07T14:59:12Z) - TrollHunter [Evader]: Automated Detection [Evasion] of Twitter Trolls During the COVID-19 Pandemic [1.5469452301122175]
TrollHunter is an automated reasoning mechanism used to hunt for trolls on Twitter during the COVID-19 pandemic in 2020.
To counter the COVID-19 infodemic, the TrollHunter leverages a unique linguistic analysis of a multi-dimensional set of Twitter content features.
TrollHunter achieved 98.5% accuracy, 75.4% precision and 69.8% recall over a dataset of 1.3 million tweets.
arXiv Detail & Related papers (2020-12-04T13:46:42Z) - Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject the hidden backdoor for infecting speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z) - Trawling for Trolling: A Dataset [56.1778095945542]
We present a dataset that models trolling as a subcategory of offensive content.
The dataset has 12,490 samples, split across 5 classes: Normal, Profanity, Trolling, Derogatory, and Hate Speech.
arXiv Detail & Related papers (2020-08-02T17:23:55Z) - Russian trolls speaking Russian: Regional Twitter operations and MH17 [68.8204255655161]
In 2018, Twitter released data on accounts identified as Russian trolls.
We analyze the Russian-language operations of these trolls.
We find that trolls' information campaign on the MH17 crash was the largest in terms of tweet count.
arXiv Detail & Related papers (2020-05-13T19:48:12Z) - Detecting Troll Behavior via Inverse Reinforcement Learning: A Case Study of Russian Trolls in the 2016 US Election [8.332032237125897]
We propose an approach based on Inverse Reinforcement Learning (IRL) to capture troll behavior and identify troll accounts.
As a case study, we consider the troll accounts identified by the US Congress during the investigation of Russian meddling in the 2016 US Presidential election.
We report promising results: the IRL-based approach is able to accurately detect troll accounts.
arXiv Detail & Related papers (2020-01-28T19:50:19Z)