Detecting Propagators of Disinformation on Twitter Using Quantitative
Discursive Analysis
- URL: http://arxiv.org/abs/2210.05760v1
- Date: Tue, 11 Oct 2022 20:11:50 GMT
- Title: Detecting Propagators of Disinformation on Twitter Using Quantitative
Discursive Analysis
- Authors: Mark M. Bailey
- Abstract summary: This study presents a method of identifying Russian disinformation bots on Twitter using centering resonance analysis and Clauset-Newman-Moore community detection.
The data reflect a significant degree of discursive dissimilarity between known Russian disinformation bots and a control set of Twitter users during the timeframe of the 2016 U.S. Presidential Election.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Efforts by foreign actors to influence public opinion have gained
considerable attention because of their potential to impact democratic
elections. Thus, the ability to identify and counter sources of disinformation
is increasingly becoming a top priority for government entities in order to
protect the integrity of democratic processes. This study presents a method of
identifying Russian disinformation bots on Twitter using centering resonance
analysis and Clauset-Newman-Moore community detection. The data reflect a
significant degree of discursive dissimilarity between known Russian
disinformation bots and a control set of Twitter users during the timeframe of
the 2016 U.S. Presidential Election. The data also demonstrate statistically
significant classification capabilities (MCC = 0.9070) based on community
clustering. The prediction algorithm is highly effective at identifying true
positives (bots) but cannot resolve true negatives (non-bots) because of the
lack of discursive similarity among control users. This leads to a
highly sensitive means of identifying propagators of disinformation with a high
degree of discursive similarity on Twitter, with implications for limiting the
spread of disinformation that could impact democratic processes.
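The reported Matthews correlation coefficient (MCC = 0.9070) summarizes classification quality from all four confusion-matrix counts. A minimal sketch of how the metric is computed, using hypothetical counts rather than the paper's data:

```python
from math import sqrt

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from confusion-matrix counts.

    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
    ranging from -1 (total disagreement) to +1 (perfect prediction).
    """
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts for a bot-vs-control classifier (illustrative only).
mcc = matthews_corrcoef(tp=90, tn=95, fp=5, fn=10)
print(round(mcc, 4))
```

Unlike accuracy, MCC stays informative under the class imbalance typical of bot-detection data, which is presumably why it is the headline figure here.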
Related papers
- On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z)
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Detecting Political Opinions in Tweets through Bipartite Graph Analysis: A Skip Aggregation Graph Convolution Approach [9.350629400940493]
We focus on the 2020 US presidential election and create a large-scale dataset from Twitter.
To detect political opinions in tweets, we build a user-tweet bipartite graph based on users' posting and retweeting behaviors.
We introduce a novel skip aggregation mechanism that makes tweet nodes aggregate information from second-order neighbors.
arXiv Detail & Related papers (2023-04-22T10:38:35Z)
- Design and analysis of tweet-based election models for the 2021 Mexican legislative election [55.41644538483948]
We use a dataset of 15 million election-related tweets in the six months preceding election day.
We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods.
arXiv Detail & Related papers (2023-01-02T12:40:05Z)
- Machine Learning-based Automatic Annotation and Detection of COVID-19 Fake News [8.020736472947581]
COVID-19 impacted every part of the world, and misinformation about the outbreak traveled faster than the virus itself.
Existing work neglects the presence of bots that act as a catalyst in the spread.
We propose an automated approach for labeling data using verified fact-checked statements on a Twitter dataset.
arXiv Detail & Related papers (2022-09-07T13:55:59Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Defending Democracy: Using Deep Learning to Identify and Prevent Misinformation [0.0]
This study classifies and visualizes the spread of misinformation on a social media network using publicly available Twitter data.
The study further demonstrates the suitability of BERT for providing a scalable model for false information detection.
arXiv Detail & Related papers (2021-06-03T16:34:54Z)
- Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy [61.60099467888073]
We show how linking administrative data can enable auditing mobility data for bias.
We show that older and non-white voters are less likely to be captured by mobility data.
We show that allocating public health resources based on such mobility data could disproportionately harm high-risk elderly and minority groups.
arXiv Detail & Related papers (2020-11-14T02:04:14Z)
- How Twitter Data Sampling Biases U.S. Voter Behavior Characterizations [6.364128212193265]
Recent studies reveal the existence of inauthentic actors such as malicious social bots and trolls.
In this paper, we aim to close this gap using Twitter data from the 2018 U.S. midterm elections.
We show that hyperactive accounts are more likely to exhibit various suspicious behaviors and share low-credibility information.
arXiv Detail & Related papers (2020-06-02T08:33:30Z)
- Automatic Detection of Influential Actors in Disinformation Networks [0.0]
This paper presents an end-to-end framework to automate detection of disinformation narratives, networks, and influential actors.
The system detects IO accounts with 96% precision, 79% recall, and 96% area under the precision-recall curve.
Results are corroborated with independent sources of known IO accounts from U.S. Congressional reports, investigative journalism, and IO datasets provided by Twitter.
arXiv Detail & Related papers (2020-05-21T20:15:51Z)
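Precision/recall figures like those quoted for the last entry follow directly from confusion-matrix counts. A minimal sketch using hypothetical counts chosen for illustration (not the paper's actual data):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical detection outcome: 96 true IO accounts flagged, 4 false
# alarms, 25 IO accounts missed (illustrative only).
p, r = precision_recall(tp=96, fp=4, fn=25)
print(f"precision={p:.2f} recall={r:.2f}")
```

Precision answers "of the accounts we flagged, how many were truly IO?"; recall answers "of all IO accounts, how many did we catch?" — the trade-off the PR curve summarizes.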
This list is automatically generated from the titles and abstracts of the papers in this site.