Generative Propaganda
- URL: http://arxiv.org/abs/2509.19147v1
- Date: Tue, 23 Sep 2025 15:27:00 GMT
- Title: Generative Propaganda
- Authors: Madeleine I. G. Daepp, Alejandro Cuevas, Robert Osazuwa Ness, Vickie Yu-Ping Wang, Bharat Kumar Nayak, Dibyendu Mishra, Ti-Chung Cheng, Shaily Desai, Joyojeet Pal
- Abstract summary: Generative propaganda is the use of generative artificial intelligence to shape public opinion. The term "deepfakes" exerts outsized discursive power in shaping defenders' expectations of misuse. Deception was neither the main driver nor the main impact vector of AI's use.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative propaganda is the use of generative artificial intelligence (AI) to shape public opinion. To characterize its use in real-world settings, we conducted interviews with defenders (e.g., factcheckers, journalists, officials) in Taiwan and creators (e.g., influencers, political consultants, advertisers) as well as defenders in India, centering two places characterized by high levels of online propaganda. The term "deepfakes", we find, exerts outsized discursive power in shaping defenders' expectations of misuse and, in turn, the interventions that are prioritized. To better characterize the space of generative propaganda, we develop a taxonomy that distinguishes between obvious versus hidden and promotional versus derogatory use. Deception was neither the main driver nor the main impact vector of AI's use; instead, Indian creators sought to persuade rather than to deceive, often making AI's use obvious in order to reduce legal and reputational risks, while Taiwan's defenders saw deception as a subset of broader efforts to distort the prevalence of strategic narratives online. AI was useful and used, however, in producing efficiency gains in communicating across languages and modes, and in evading human and algorithmic detection. Security researchers should reconsider threat models to clearly differentiate deepfakes from promotional and obvious uses, to complement and bolster the social factors that constrain misuse by internal actors, and to counter efficiency gains globally.
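The abstract's two-axis taxonomy (obvious vs. hidden use, promotional vs. derogatory intent) can be sketched as a small data structure. This is an illustrative reading, not code from the paper: the class names and the mapping of "deepfakes" to the hidden/derogatory quadrant are assumptions drawn from the abstract's contrast between deepfakes and "promotional and obvious uses".

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    OBVIOUS = "obvious"  # AI use is disclosed or apparent to audiences
    HIDDEN = "hidden"    # AI use is concealed

class Valence(Enum):
    PROMOTIONAL = "promotional"  # boosts a candidate, product, or narrative
    DEROGATORY = "derogatory"    # attacks or discredits a target

@dataclass(frozen=True)
class GenerativePropagandaUse:
    """One cell of the paper's 2x2 taxonomy of generative propaganda."""
    visibility: Visibility
    valence: Valence

    def is_deepfake_like(self) -> bool:
        # Assumed reading: the "deepfake" frame that dominates defenders'
        # expectations corresponds to the hidden, derogatory quadrant,
        # which the abstract contrasts with promotional and obvious uses.
        return (self.visibility is Visibility.HIDDEN
                and self.valence is Valence.DEROGATORY)

# Example from the abstract: Indian creators often made AI's use obvious
# to reduce legal and reputational risk, placing them outside that quadrant.
use = GenerativePropagandaUse(Visibility.OBVIOUS, Valence.PROMOTIONAL)
print(use.is_deepfake_like())  # False
```

The point of the 2x2 framing is that only one of the four cells matches the threat model the term "deepfakes" evokes, so interventions tuned to that cell miss the other three.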
Related papers
- How cyborg propaganda reshapes collective action [7.802095759784913]
A distinct threat to democracy is emerging via partisan coordination apps and artificial intelligence, what we term "cyborg propaganda". This architecture combines verified humans with adaptive algorithmic automation, enabling a closed-loop system. We argue that cyborg propaganda fundamentally alters the digital public square, shifting political discourse from a democratic contest of individual ideas to a battle of algorithmic campaigns.
arXiv Detail & Related papers (2026-02-13T16:49:26Z) - Can Media Act as a Soft Regulator of Safe AI Development? A Game Theoretical Analysis [57.68073583427415]
We study whether media coverage has the potential to push AI creators into the production of safe products. Our results reveal that media is indeed able to foster cooperation between creators and users, but not always. By shaping public perception and holding developers accountable, media emerges as a powerful soft regulator.
arXiv Detail & Related papers (2025-09-02T12:13:34Z) - How Malicious AI Swarms Can Threaten Democracy: The Fusion of Agentic AI and LLMs Marks a New Frontier in Information Warfare [40.42844888224356]
Public opinion manipulation has entered a new phase, amplifying its roots in rhetoric and propaganda. Advances in large language models (LLMs) and autonomous agents now let influence campaigns reach unprecedented scale and precision. Researchers warn AI could foster mass manipulation.
arXiv Detail & Related papers (2025-05-18T13:33:37Z) - PropaInsight: Toward Deeper Understanding of Propaganda in Terms of Techniques, Appeals, and Intent [71.20471076045916]
Propaganda plays a critical role in shaping public opinion and fueling disinformation. PropaInsight systematically dissects propaganda into techniques, arousal appeals, and underlying intent. Propagaze combines human-annotated data with high-quality synthetic data.
arXiv Detail & Related papers (2024-09-19T06:28:18Z) - Charting the Landscape of Nefarious Uses of Generative Artificial Intelligence for Online Election Interference [11.323961700172175]
This paper explores the nefarious applications of GenAI, highlighting their potential to disrupt democratic processes. Malicious actors exploit these technologies to try to influence voter behavior, spread disinformation, and undermine public trust in electoral systems.
arXiv Detail & Related papers (2024-06-04T00:26:12Z) - Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) can generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z) - BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models [54.19289900203071]
The rise in popularity of text-to-image generative artificial intelligence has attracted widespread public interest.
We demonstrate that this technology can be attacked to generate content that subtly manipulates its users.
We propose a Backdoor Attack on text-to-image Generative Models (BAGM).
Our attack is the first to target three popular text-to-image generative models across three stages of the generative process.
arXiv Detail & Related papers (2023-07-31T08:34:24Z) - Analyzing the Strategy of Propaganda using Inverse Reinforcement Learning: Evidence from the 2022 Russian Invasion of Ukraine [21.563820572163337]
The 2022 Russian invasion of Ukraine was accompanied by a large-scale, pro-Russian propaganda campaign on social media.
Here, we analyze the strategy of the Twitter community using an inverse reinforcement learning approach.
We show that bots respond predominantly to pro-invasion messages, while messages indicating opposition primarily elicit responses from humans.
arXiv Detail & Related papers (2023-07-24T13:35:18Z) - The Manipulation Problem: Conversational AI as a Threat to Epistemic Agency [0.0]
The technology of Conversational AI has made significant advancements over the last eighteen months.
Conversational agents are likely to be deployed in the near future that are designed to pursue targeted influence objectives.
Sometimes referred to as the "AI Manipulation Problem," the emerging risk is that consumers will unwittingly engage in real-time dialog with predatory AI agents.
arXiv Detail & Related papers (2023-06-19T04:09:16Z) - Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews improve detection of human-written disinformation by 3.62 to 7.69 F1 points on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create the NeuralNews dataset, composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.