Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications
- URL: http://arxiv.org/abs/2408.12613v2
- Date: Mon, 19 May 2025 16:05:15 GMT
- Title: Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications
- Authors: Andreas Jungherr, Adrian Rauchfleisch, Alexander Wuttke
- Abstract summary: We identify three categories of AI use -- campaign operations, voter outreach, and deception. While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations. Deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As political parties around the world experiment with Artificial Intelligence (AI) in election campaigns, concerns about deception and manipulation are rising. This article examines how the public reacts to different uses of AI in elections and the potential consequences for party evaluations and regulatory preferences. Across three preregistered studies with over 7,600 American respondents, we identify three categories of AI use -- campaign operations, voter outreach, and deception. While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations. However, parties engaging in AI-enabled deception face no significant drop in favorability, neither with supporters nor opponents. Instead, deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development. These findings reveal a misalignment between public disapproval of deceptive AI and the political incentives of parties, underscoring the need for targeted regulatory oversight. Rather than banning AI in elections altogether, regulation should distinguish between harmful and beneficial applications to avoid stifling democratic innovation.
Related papers
- What do people expect from Artificial Intelligence? Public opinion on alignment in AI moderation from Germany and the United States [0.0]
We present evidence from two surveys of public preferences for key functional features of AI-enabled systems in Germany and the United States.
We examine support for four types of alignment in AI moderation: accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries.
In both countries, accuracy and safety enjoy the strongest support, while more normatively charged goals -- like fairness and aspirational imaginaries -- receive more cautious backing.
arXiv Detail & Related papers (2025-04-16T20:27:03Z) - Artificial intelligence and democracy: Towards digital authoritarianism or a democratic upgrade? [0.0]
The impact of Artificial Intelligence on democracy is a complex issue that requires thorough research and careful regulation.
New types of online campaigns, driven by AI applications, are replacing traditional ones.
The potential for manipulating voters and indirectly influencing the electoral outcome should not be underestimated.
arXiv Detail & Related papers (2025-03-30T06:43:54Z) - Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing [55.2480439325792]
This study systematically evaluates twelve state-of-the-art AI-text detectors using our AI-Polished-Text Evaluation dataset. Our findings reveal that detectors frequently flag even minimally polished text as AI-generated, struggle to differentiate between degrees of AI involvement, and exhibit biases against older and smaller models.
arXiv Detail & Related papers (2025-02-21T18:45:37Z) - Digital Democracy in the Age of Artificial Intelligence [0.16385815610837165]
This chapter explores the influence of Artificial Intelligence (AI) on digital democracy. It focuses on four main areas: citizenship, participation, representation, and the public sphere.
arXiv Detail & Related papers (2024-11-26T10:20:53Z) - How will advanced AI systems impact democracy? [16.944248678780614]
We discuss the impacts that generative artificial intelligence may have on democratic processes.
We ask how AI might be used to destabilise or support democratic mechanisms like elections.
Finally, we discuss whether AI will strengthen or weaken democratic principles.
arXiv Detail & Related papers (2024-08-27T12:05:59Z) - Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z) - AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance [18.290959557311552]
Public sector use of AI has been on the rise for the past decade, but only recently have these efforts entered the cultural zeitgeist.
While simple to articulate, promoting ethical and effective rollouts of AI systems in government is a notoriously elusive task.
arXiv Detail & Related papers (2024-04-23T01:45:38Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - AI Deception: A Survey of Examples, Risks, and Potential Solutions [20.84424818447696]
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.
arXiv Detail & Related papers (2023-08-28T17:59:35Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - AI and Core Electoral Processes: Mapping the Horizons [3.420467786581458]
We consider five representative avenues within the core electoral process which have potential for AI usage.
These five avenues are: voter list maintenance, determining polling booth locations, polling booth protection processes, voter authentication and video monitoring of elections.
arXiv Detail & Related papers (2023-02-07T22:06:24Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.