Democratic AI is Possible. The Democracy Levels Framework Shows How It Might Work
- URL: http://arxiv.org/abs/2411.09222v3
- Date: Wed, 18 Jun 2025 18:16:01 GMT
- Title: Democratic AI is Possible. The Democracy Levels Framework Shows How It Might Work
- Authors: Aviv Ovadya, Kyle Redman, Luke Thorburn, Quan Ze Chen, Oliver Smith, Flynn Devine, Andrew Konya, Smitha Milli, Manon Revel, K. J. Kevin Feng, Amy X. Zhang, Bilva Chandra, Michiel A. Bakker, Atoosa Kasirzadeh
- Abstract summary: This position paper argues that effectively "democratizing AI" requires democratic governance and alignment of AI. We provide a "Democracy Levels" framework and associated tools to explore what increasingly democratic AI might look like.
- Score: 10.45161883458636
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This position paper argues that effectively "democratizing AI" requires democratic governance and alignment of AI, and that this is particularly valuable for decisions with systemic societal impacts. Initial steps -- such as Meta's Community Forums and Anthropic's Collective Constitutional AI -- have illustrated a promising direction, where democratic processes could be used to meaningfully improve public involvement and trust in critical decisions. To more concretely explore what increasingly democratic AI might look like, we provide a "Democracy Levels" framework and associated tools that: (i) define milestones toward meaningfully democratic AI, which is also crucial for substantively pluralistic, human-centered, participatory, and public-interest AI, (ii) can help guide organizations seeking to increase the legitimacy of their decisions on difficult AI governance and alignment questions, and (iii) support the evaluation of such efforts.
Related papers
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem, often by either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z)
- Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks [26.916552909766118]
This paper introduces a dual taxonomy to evaluate AI's complex relationship with democracy. The AIRD taxonomy identifies how AI can undermine core democratic principles such as autonomy, fairness, and trust. The AIPD taxonomy highlights AI's potential to enhance transparency, participation, efficiency, and evidence-based policymaking.
arXiv Detail & Related papers (2025-05-19T10:51:08Z)
- Artificial intelligence and democracy: Towards digital authoritarianism or a democratic upgrade? [0.0]
The impact of Artificial Intelligence on democracy is a complex issue that requires thorough research and careful regulation. New types of online campaigns, driven by AI applications, are replacing traditional ones. The potential for manipulating voters and indirectly influencing the electoral outcome should not be underestimated.
arXiv Detail & Related papers (2025-03-30T06:43:54Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Democratizing AI Governance: Balancing Expertise and Public Participation [1.0878040851638]
The development and deployment of artificial intelligence (AI) systems, with their profound societal impacts, raise critical challenges for governance.
This article explores the tension between expert-led oversight and democratic participation, analyzing models of participatory and deliberative democracy.
Recommendations are provided for integrating these approaches into a balanced governance model tailored to the European Union.
arXiv Detail & Related papers (2025-01-16T17:47:33Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis [48.14390493099495]
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z)
- How will advanced AI systems impact democracy? [16.944248678780614]
We discuss the impacts that generative artificial intelligence may have on democratic processes.
We ask how AI might be used to destabilise or support democratic mechanisms like elections.
Finally, we discuss whether AI will strengthen or weaken democratic principles.
arXiv Detail & Related papers (2024-08-27T12:05:59Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Public Constitutional AI [0.0]
We are increasingly subjected to the power of AI authorities.
How can we ensure AI systems have the legitimacy necessary for effective governance?
This essay argues that to secure AI legitimacy, we need methods that engage the public in designing and constraining AI systems.
arXiv Detail & Related papers (2024-06-24T15:00:01Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- A multilevel framework for AI governance [6.230751621285321]
We propose a multilevel governance approach that involves governments, corporations, and citizens.
The levels of governance combined with the dimensions of trust in AI provide practical insights that can be used to further enhance user experiences and inform public policy related to AI.
arXiv Detail & Related papers (2023-07-04T03:59:16Z)
- Democratising AI: Multiple Meanings, Goals, and Methods [0.0]
Numerous parties are calling for the democratisation of AI, but the phrase is used to refer to a variety of goals whose pursuit sometimes conflicts.
This paper identifies four kinds of AI democratisation that are commonly discussed.
The main takeaway is that AI democratisation is a multifarious and sometimes conflicting concept.
arXiv Detail & Related papers (2023-03-22T15:23:22Z)
- Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance [0.0]
We present an AI governance framework, which targets organizations that develop and use AI systems.
The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice.
arXiv Detail & Related papers (2022-06-01T08:55:27Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.