Carthago Delenda Est: Co-opetitive Indirect Information Diffusion Model
for Influence Operations on Online Social Media
- URL: http://arxiv.org/abs/2402.01905v2
- Date: Tue, 6 Feb 2024 16:40:00 GMT
- Title: Carthago Delenda Est: Co-opetitive Indirect Information Diffusion Model
for Influence Operations on Online Social Media
- Authors: Jwen Fai Low, Benjamin C. M. Fung, Farkhund Iqbal, and Claude Fachkha
- Abstract summary: We introduce Diluvsion, an agent-based model for contested information propagation efforts on Twitter-like social media.
We account for the influence of engagement metrics on stance adoption, the spread of information over non-social ties, neutrality as a stance that can itself be spread, and themes that are analogous to the media framing effect and symbiotic with stance propagation.
- Score: 6.236019068888737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For a state or non-state actor whose credibility is bankrupt, relying on bots
to conduct non-attributable, non-accountable, and
seemingly-grassroots-but-decentralized-in-actuality influence/information
operations (info ops) on social media can help circumvent the issue of trust
deficit while advancing its interests. Planning and/or defending against
decentralized info ops can be aided by computational simulations in lieu of
ethically-fraught live experiments on social media. In this study, we introduce
Diluvsion, an agent-based model for contested information propagation efforts
on Twitter-like social media. The model emphasizes a user's belief in an
opinion (stance) being impacted by the perception of potentially illusory
popular support from constant incoming floods of indirect information, floods
that can be cooperatively engineered in an uncoordinated manner by bots as they
compete to spread their stances. Our model, which has been validated against
real-world data, is an advancement over previous models because we account for
engagement metrics in influencing stance adoption, non-social tie spreading of
information, neutrality as a stance that can be spread, and themes that are
analogous to media's framing effect and are symbiotic with respect to stance
propagation. The strengths of the Diluvsion model are demonstrated in
simulations of orthodox info ops, e.g., maximizing adoption of one stance;
creating echo chambers; inducing polarization; and unorthodox info ops, e.g.,
simultaneous support of multiple stances as a Trojan horse tactic for the
dissemination of a theme.
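To make the stance-adoption mechanism described above concrete, the following is a minimal, hypothetical agent-based sketch in Python. It is not the authors' Diluvsion implementation; the adoption threshold, the engagement weighting, the random message fanout, and all other parameters are assumptions introduced only to illustrate engagement-weighted perception of popular support, spreading over non-social ties, and neutrality as a spreadable stance. Themes and framing effects are omitted for brevity.

```python
# Minimal, hypothetical sketch of engagement-weighted stance adoption under a
# flood of indirect information. This is NOT the authors' Diluvsion code; the
# adoption threshold, engagement weighting, and random fanout are assumptions.
import random
from collections import Counter

STANCES = ["pro", "anti", "neutral"]   # neutrality is treated as a spreadable stance
ADOPT_THRESHOLD = 0.6                  # assumed share of perceived support needed to switch

class Agent:
    def __init__(self, stance, is_bot=False):
        self.stance = stance
        self.is_bot = is_bot           # bots never change stance; they only broadcast it
        self.inbox = []                # incoming flood of (stance, engagement) pairs

    def post(self):
        # Placeholder engagement metric (e.g., likes/retweets) attached to each message.
        return (self.stance, random.randint(0, 100))

    def step(self):
        inbox, self.inbox = self.inbox, []
        if self.is_bot or not inbox:
            return
        # Perceived support: engagement-weighted share of each stance in the inbox.
        # Popularity can be illusory if bots inflate message volume or engagement.
        support = Counter()
        for stance, engagement in inbox:
            support[stance] += 1 + engagement
        top_stance, top_weight = support.most_common(1)[0]
        if top_weight / sum(support.values()) >= ADOPT_THRESHOLD:
            self.stance = top_stance

def simulate(n_humans=200, n_bots=20, bot_stance="pro", rounds=50, fanout=10):
    agents = [Agent(random.choice(STANCES)) for _ in range(n_humans)]
    agents += [Agent(bot_stance, is_bot=True) for _ in range(n_bots)]
    for _ in range(rounds):
        # Non-social-tie spreading: each message lands on random timelines,
        # not only those of followers.
        for sender in agents:
            message = sender.post()
            for receiver in random.sample(agents, fanout):
                receiver.inbox.append(message)
        for agent in agents:
            agent.step()
    return Counter(a.stance for a in agents if not a.is_bot)

if __name__ == "__main__":
    print(simulate())  # final stance counts among human agents
```

Under these assumptions, competing info ops could be sketched by assigning different bot_stance values to separate bot populations, which would then cooperatively (if uncoordinatedly) flood timelines while competing for stance adoption.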
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- Cascade-based Echo Chamber Detection [16.35164446890934]
Echo chambers in social media have been under considerable scrutiny.
We propose a probabilistic generative model that explains social media footprints.
We show how our model can improve accuracy in auxiliary predictive tasks, such as stance detection and prediction of future propagations.
arXiv Detail & Related papers (2022-08-09T09:30:38Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that misinformation proliferates because the social media environment enables adherence to it.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward understanding coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
- Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)