Charting the Landscape of Nefarious Uses of Generative Artificial Intelligence for Online Election Interference
- URL: http://arxiv.org/abs/2406.01862v4
- Date: Wed, 30 Oct 2024 05:29:34 GMT
- Title: Charting the Landscape of Nefarious Uses of Generative Artificial Intelligence for Online Election Interference
- Authors: Emilio Ferrara
- Abstract summary: This paper explores the nefarious applications of GenAI, highlighting their potential to disrupt democratic processes.
Malicious actors exploit these technologies in attempts to influence voter behavior, spread disinformation, and undermine public trust in electoral systems.
- Score: 11.323961700172175
- License:
- Abstract: Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) pose significant risks, particularly in the realm of online election interference. This paper explores the nefarious applications of GenAI, highlighting their potential to disrupt democratic processes through deepfakes, botnets, targeted misinformation campaigns, and synthetic identities. By examining recent case studies and public incidents, we illustrate how malicious actors exploit these technologies in attempts to influence voter behavior, spread disinformation, and undermine public trust in electoral systems. The paper also discusses the societal implications of these threats, emphasizing the urgent need for robust mitigation strategies and international cooperation to safeguard democratic integrity.
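The abstract names botnets and synthetic identities among the interference vectors. As a purely illustrative sketch (not taken from the paper), the Python snippet below shows one simple heuristic often discussed in this context: flagging message texts that many distinct accounts push within a short time window, a rough signal of coordinated, bot-like amplification. The function name, data shape, and thresholds here are hypothetical.

```python
# Illustrative only: a toy heuristic for spotting coordinated, bot-like
# amplification of near-identical messages. Data and thresholds are
# hypothetical and not taken from the paper.
from collections import defaultdict
from datetime import datetime, timedelta


def find_coordinated_posts(posts, window_seconds=60, min_accounts=5):
    """Group posts by normalized text and flag texts pushed by many
    distinct accounts within a short time window.

    posts: iterable of (account_id, text, datetime) tuples.
    Returns a list of (normalized_text, account_count) pairs.
    """
    by_text = defaultdict(list)
    for account, text, ts in posts:
        # Normalize whitespace and case so trivially edited copies collide.
        by_text[" ".join(text.lower().split())].append((account, ts))

    flagged = []
    window = timedelta(seconds=window_seconds)
    for text, items in by_text.items():
        items.sort(key=lambda item: item[1])  # sort by timestamp
        for i, (_, start) in enumerate(items):
            # Count distinct accounts posting this text within the window.
            accounts = {a for a, t in items[i:] if t - start <= window}
            if len(accounts) >= min_accounts:
                flagged.append((text, len(accounts)))
                break
    return flagged


if __name__ == "__main__":
    now = datetime(2024, 1, 1, 12, 0, 0)
    sample = [(f"acct_{i}", "Vote early, the polls are RIGGED!", now + timedelta(seconds=i))
              for i in range(6)]
    sample.append(("acct_x", "Lovely weather today", now))
    print(find_coordinated_posts(sample))
```

In practice, detection systems combine many such signals (posting cadence, content similarity, account metadata) rather than relying on a single threshold like this one.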
Related papers
- Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI [0.0]
As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency.
Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny.
Specifically, we examine Gab AI, a platform that centers around unrestricted communication and privacy, allowing users to interact freely without censorship.
arXiv Detail & Related papers (2024-12-19T17:40:58Z)
- Digital Democracy in the Age of Artificial Intelligence [0.16385815610837165]
This chapter explores the influence of Artificial Intelligence (AI) on digital democracy.
It focuses on four main areas: citizenship, participation, representation, and the public sphere.
arXiv Detail & Related papers (2024-11-26T10:20:53Z)
- Cyber Threats to Canadian Federal Election: Emerging Threats, Assessment, and Mitigation Strategies [2.04903126350824]
Recent foreign interference in elections around the globe highlights the increasing sophistication of adversaries in exploiting technical and human vulnerabilities.
To mitigate these vulnerabilities, a threat assessment is crucial to identify emerging threats, develop incident response capabilities, and build public trust and resilience against cyber threats.
The research identifies three major threats: misinformation, disinformation, and malinformation (MDM) campaigns; attacks on critical infrastructure and election support systems; and espionage by malicious actors.
arXiv Detail & Related papers (2024-10-07T23:40:40Z)
- From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis [48.14390493099495]
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z)
- Deceptive uses of Artificial Intelligence in elections strengthen support for AI ban [44.99833362998488]
We propose a framework for assessing AI's impact on elections.
We group AI-enabled campaigning uses into three categories -- campaign operations, voter outreach, and deception.
We provide the first systematic evidence from a preregistered representative survey.
arXiv Detail & Related papers (2024-08-08T12:58:20Z)
- Mapping the individual, social, and biospheric impacts of Foundation Models [0.39843531413098965]
This paper offers a critical framework to account for the social, political, and environmental dimensions of foundation models and generative AI.
We identify 14 categories of risks and harms and map them according to their individual, social, and biospheric impacts.
arXiv Detail & Related papers (2024-07-24T10:05:40Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models [7.835719708227145]
Deepfakes and the spread of mis- and disinformation have emerged as formidable threats to the integrity of information ecosystems worldwide.
We highlight the mechanisms through which generative AI based on large models (LM-based GenAI) crafts seemingly convincing yet fabricated content.
We introduce an integrated framework that combines advanced detection algorithms, cross-platform collaboration, and policy-driven initiatives.
arXiv Detail & Related papers (2023-11-29T06:47:58Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) can generate false, erroneous, or misleading content.
LLMs can also be exploited for malicious applications.
This poses a significant challenge to society, given the potential for deceiving users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)