Justifications for Democratizing AI Alignment and Their Prospects
- URL: http://arxiv.org/abs/2507.19548v1
- Date: Thu, 24 Jul 2025 17:16:19 GMT
- Title: Justifications for Democratizing AI Alignment and Their Prospects
- Authors: André Steingrüber, Kevin Baum
- Abstract summary: We argue that normative and metanormative uncertainty create a justificatory gap that democratic approaches aim to fill through political rather than theoretical justification. We also identify significant challenges for democratic approaches, particularly regarding the prevention of illegitimate coercion through AI alignment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The AI alignment problem comprises both technical and normative dimensions. While technical solutions focus on implementing normative constraints in AI systems, the normative problem concerns determining what these constraints should be. This paper examines justifications for democratic approaches to the normative problem -- where affected stakeholders determine AI alignment -- as opposed to epistocratic approaches that defer to normative experts. We analyze both instrumental justifications (democratic approaches produce better outcomes) and non-instrumental justifications (democratic approaches prevent illegitimate authority or coercion). We argue that normative and metanormative uncertainty create a justificatory gap that democratic approaches aim to fill through political rather than theoretical justification. However, we identify significant challenges for democratic approaches, particularly regarding the prevention of illegitimate coercion through AI alignment. Our analysis suggests that neither purely epistocratic nor purely democratic approaches may be sufficient on their own, pointing toward hybrid frameworks that combine expert judgment with participatory input alongside institutional safeguards against AI monopolization.
Related papers
- Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks [26.916552909766118]
This paper introduces a dual taxonomy to evaluate AI's complex relationship with democracy. The AIRD taxonomy identifies how AI can undermine core democratic principles such as autonomy, fairness, and trust. The AIPD taxonomy highlights AI's potential to enhance transparency, participation, efficiency, and evidence-based policymaking.
arXiv Detail & Related papers (2025-05-19T10:51:08Z) - Achieving Socio-Economic Parity through the Lens of EU AI Act [11.550643687258738]
Unfair treatment and discrimination are critical ethical concerns in AI systems. The recent introduction of the EU AI Act establishes a unified legal framework to ensure legal certainty for AI innovation and investment. We propose a novel fairness notion, Socio-Economic Parity (SEP), which incorporates Socio-Economic Status (SES) and promotes positive actions for underprivileged groups.
arXiv Detail & Related papers (2025-03-29T12:27:27Z) - Democratizing AI Governance: Balancing Expertise and Public Participation [1.0878040851638]
The development and deployment of artificial intelligence (AI) systems, with their profound societal impacts, raise critical challenges for governance. This article explores the tension between expert-led oversight and democratic participation, analyzing models of participatory and deliberative democracy. Recommendations are provided for integrating these approaches into a balanced governance model tailored to the European Union.
arXiv Detail & Related papers (2025-01-16T17:47:33Z) - Democratic AI is Possible. The Democracy Levels Framework Shows How It Might Work [10.45161883458636]
This position paper argues that effectively "democratizing AI" requires democratic governance and alignment of AI. We provide a "Democracy Levels" framework and associated tools to explore what increasingly democratic AI might look like.
arXiv Detail & Related papers (2024-11-14T06:37:45Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
Applying these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Aligning AI with Public Values: Deliberation and Decision-Making for Governing Multimodal LLMs in Political Video Analysis [48.14390493099495]
How AI models should deal with political topics has been widely discussed, but the question remains challenging and calls for better governance. This paper examines the governance of large language models through individual and collective deliberation, focusing on politically sensitive videos.
arXiv Detail & Related papers (2024-09-15T03:17:38Z) - Open Problems in Technical AI Governance [102.19067750759471]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI. This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z) - Public Constitutional AI [0.0]
We are increasingly subjected to the power of AI authorities. How can we ensure AI systems have the legitimacy necessary for effective governance? This essay argues that to secure AI legitimacy, we need methods that engage the public in designing and constraining AI systems.
arXiv Detail & Related papers (2024-06-24T15:00:01Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Contestable Black Boxes [10.552465253379134]
This paper investigates the type of assurances that are needed in the contesting process when algorithmic black-boxes are involved.
We argue that specialised complementary methodologies need to be developed to evaluate automated decision-making when a particular decision is contested.
arXiv Detail & Related papers (2020-06-09T09:09:00Z)