Lost in Translation: Policymakers are not really listening to Citizen Concerns about AI
- URL: http://arxiv.org/abs/2510.20568v1
- Date: Thu, 23 Oct 2025 13:57:02 GMT
- Title: Lost in Translation: Policymakers are not really listening to Citizen Concerns about AI
- Authors: Susan Ariel Aaronson, Michael Moreno
- Abstract summary: Governments are inviting public comment on AI, but as they translate input into policy, much of what citizens say is lost. This paper compares three countries, Australia, Colombia, and the United States, that invited citizens to comment on AI risks and policies. In each nation, fewer than one percent of the population participated. Officials showed limited responsiveness to the feedback they received, failing to create an effective feedback loop.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The world's people have strong opinions about artificial intelligence (AI), and they want policymakers to listen. Governments are inviting public comment on AI, but as they translate input into policy, much of what citizens say is lost. Policymakers are missing a critical opportunity to build trust in AI and its governance. This paper compares three countries, Australia, Colombia, and the United States, that invited citizens to comment on AI risks and policies. Using a landscape analysis, the authors examined how each government solicited feedback and whether that input shaped governance. In none of the three cases did citizens and policymakers establish a meaningful dialogue. Governments did little to attract diverse voices or publicize calls for comment, leaving most citizens unaware or unprepared to respond. In each nation, fewer than one percent of the population participated. Moreover, officials showed limited responsiveness to the feedback they received, failing to create an effective feedback loop. The study finds a persistent gap between the promise and practice of participatory AI governance. The authors conclude that current approaches are unlikely to build trust or legitimacy in AI because policymakers are not adequately listening or responding to public concerns. They offer eight recommendations: promote AI literacy; monitor public feedback; broaden outreach; hold regular online forums; use innovative engagement methods; include underrepresented groups; respond publicly to input; and make participation easier.
Related papers
- "We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe [56.1653658714305]
We interviewed 25 AI developers based in Europe to understand which privacy threats they believe pose the greatest risk to users, developers, and businesses. We find that there is little consensus among AI developers on the relative ranking of privacy risks. While AI developers are aware of proposed mitigation strategies for addressing these risks, they reported minimal real-world adoption.
arXiv Detail & Related papers (2025-10-01T13:51:33Z)
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem, often by either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z)
- What do people expect from Artificial Intelligence? Public opinion on alignment in AI moderation from Germany and the United States [0.0]
We present evidence from two surveys of public preferences for key functional features of AI-enabled systems in Germany and the United States. We examine support for four types of alignment in AI moderation: accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries. In both countries, accuracy and safety enjoy the strongest support, while more normatively charged goals -- like fairness and aspirational imaginaries -- receive more cautious backing.
arXiv Detail & Related papers (2025-04-16T20:27:03Z)
- Artificial Intelligence in Deliberation: The AI Penalty and the Emergence of a New Deliberative Divide [0.0]
Digital deliberation has expanded democratic participation, yet challenges remain. Recent advances in artificial intelligence (AI) offer potential solutions, but public perceptions of AI's role in deliberation remain underexplored. If AI is integrated into deliberation, public trust, acceptance, and willingness to participate may be affected.
arXiv Detail & Related papers (2025-03-10T16:33:15Z)
- Aligning AI with Public Values: Deliberation and Decision-Making for Governing Multimodal LLMs in Political Video Analysis [48.14390493099495]
How AI models should deal with political topics has been discussed, but it remains challenging and requires better governance. This paper examines the governance of large language models through individual and collective deliberation, focusing on politically sensitive videos.
arXiv Detail & Related papers (2024-09-15T03:17:38Z)
- How will advanced AI systems impact democracy? [16.944248678780614]
We discuss the impacts that generative artificial intelligence may have on democratic processes.
We ask how AI might be used to destabilise or support democratic mechanisms like elections.
Finally, we discuss whether AI will strengthen or weaken democratic principles.
arXiv Detail & Related papers (2024-08-27T12:05:59Z)
- Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications [44.99833362998488]
We identify three categories of AI use -- campaign operations, voter outreach, and deception. While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations. Deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development.
arXiv Detail & Related papers (2024-08-08T12:58:20Z)
- AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance [18.290959557311552]
Public sector use of AI has been on the rise for the past decade, but only recently have these efforts entered the cultural zeitgeist. While simple to articulate, promoting ethical and effective rollouts of AI systems in government is a notoriously elusive task.
arXiv Detail & Related papers (2024-04-23T01:45:38Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.