Aligning AI with Public Values: Deliberation and Decision-Making for Governing Multimodal LLMs in Political Video Analysis
- URL: http://arxiv.org/abs/2410.01817v2
- Date: Tue, 22 Jul 2025 16:07:13 GMT
- Title: Aligning AI with Public Values: Deliberation and Decision-Making for Governing Multimodal LLMs in Political Video Analysis
- Authors: Tanusree Sharma, Yujin Potter, Zachary Kilhoffer, Yun Huang, Dawn Song, Yang Wang
- Abstract summary: How AI models should deal with political topics has been discussed, but it remains challenging and requires better governance. This paper examines the governance of large language models through individual and collective deliberation, focusing on politically sensitive videos.
- Score: 48.14390493099495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How AI models should deal with political topics has been discussed, but it remains challenging and requires better governance. This paper examines the governance of large language models through individual and collective deliberation, focusing on politically sensitive videos. We conducted a two-step study: interviews with 10 journalists established a baseline understanding of expert video interpretation; 114 individuals then participated in deliberation using InclusiveAI, a platform that facilitates democratic decision-making through decentralized autonomous organization (DAO) mechanisms. Our findings reveal distinct differences in interpretative priorities: while experts emphasized emotion and narrative, the general public prioritized factual clarity, objectivity, and emotional neutrality. Furthermore, we examined how different governance mechanisms - quadratic vs. weighted voting and equal vs. 20/80 voting power - shape users' decision-making regarding AI behavior. Results indicate that voting methods significantly influence outcomes, with quadratic voting reinforcing perceptions of liberal democracy and political equality. Our study underscores the necessity of selecting appropriate governance mechanisms to better capture user perspectives and suggests decentralized AI governance as a potential way to facilitate broader public engagement in AI development, ensuring that varied perspectives meaningfully inform design decisions.
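To make the mechanism comparison concrete, the sketch below tallies toy ballots under the two rules contrasted above: quadratic voting, where buying v votes on an option costs v^2 credits, and weighted voting, where each voter's single choice counts with a fixed voting power (an equal split or a 20/80 split). This is a minimal illustration; the function names, credit amounts, and weights are assumptions, not the InclusiveAI platform's implementation.

```python
# A minimal sketch (hypothetical data, not the InclusiveAI implementation).

def quadratic_tally(ballots):
    """ballots: {voter: {option: credits_spent}}.
    Under quadratic voting, spending c credits buys sqrt(c) votes,
    so each additional vote on the same option costs more than the last."""
    totals = {}
    for spent in ballots.values():
        for option, credits in spent.items():
            totals[option] = totals.get(option, 0.0) + credits ** 0.5
    return totals


def weighted_tally(choices, weights):
    """choices: {voter: option}; weights: {voter: voting power}.
    Each voter's single choice counts with their assigned power
    (e.g. an equal split vs. a 20/80 split)."""
    totals = {}
    for voter, option in choices.items():
        totals[option] = totals.get(option, 0.0) + weights[voter]
    return totals


# Toy comparison: one voter holds 80% of the power under weighted voting,
# while quadratic costs blunt the effect of piling credits onto one option.
print(quadratic_tally({"v1": {"A": 16}, "v2": {"B": 4}, "v3": {"B": 4}}))
# -> {'A': 4.0, 'B': 4.0}
print(weighted_tally({"v1": "A", "v2": "B", "v3": "B"},
                     {"v1": 0.8, "v2": 0.1, "v3": 0.1}))
# -> {'A': 0.8, 'B': 0.2}
```

In this toy run, one voter spends twice as many credits as the other two combined yet only ties under quadratic voting, which is the intuition behind the abstract's link between quadratic voting and perceptions of political equality.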
Related papers
- Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks [26.916552909766118]
This paper introduces a dual taxonomy to evaluate AI's complex relationship with democracy. The AIRD taxonomy identifies how AI can undermine core democratic principles such as autonomy, fairness, and trust. The AIPD taxonomy highlights AI's potential to enhance transparency, participation, efficiency, and evidence-based policymaking.
arXiv Detail & Related papers (2025-05-19T10:51:08Z) - Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z) - Measuring Political Preferences in AI Systems: An Integrative Approach [0.0]
This study employs a multi-method approach to assess political bias in leading AI systems. Results indicate a consistent left-leaning bias across most contemporary AI systems. The presence of systematic political bias in AI systems poses risks, including reduced viewpoint diversity, increased societal polarization, and the potential for public mistrust in AI technologies.
arXiv Detail & Related papers (2025-03-04T01:40:28Z) - AI and the Transformation of Accountability and Discretion in Urban Governance [1.9152655229960793]
The study synthesizes insights to propose guiding principles for responsible AI integration in decision-making processes. The analysis argues that AI does not simply restrict or enhance discretion but redistributes it across institutional levels. It may simultaneously strengthen managerial oversight, enhance decision-making consistency, and improve operational efficiency.
arXiv Detail & Related papers (2025-02-18T18:11:39Z) - Democratizing AI Governance: Balancing Expertise and Public Participation [1.0878040851638]
The development and deployment of artificial intelligence (AI) systems, with their profound societal impacts, raise critical challenges for governance. This article explores the tension between expert-led oversight and democratic participation, analyzing models of participatory and deliberative democracy. Recommendations are provided for integrating these approaches into a balanced governance model tailored to the European Union.
arXiv Detail & Related papers (2025-01-16T17:47:33Z) - Digital Democracy in the Age of Artificial Intelligence [0.16385815610837165]
This chapter explores the influence of Artificial Intelligence (AI) on digital democracy.
It focuses on four main areas: citizenship, participation, representation, and the public sphere.
arXiv Detail & Related papers (2024-11-26T10:20:53Z) - Toward Democracy Levels for AI [4.048639768405042]
We provide a "Democracy Levels" framework for evaluating the degree to which decisions in a given domain are made democratically.
The framework can be used (i) to define a roadmap for the democratic AI, pluralistic AI, and public AI ecosystems, (ii) to guide organizations that need to increase the legitimacy of their decisions on difficult AI governance questions, and (iii) as a rubric by those aiming to evaluate AI organizations and hold them accountable.
arXiv Detail & Related papers (2024-11-14T06:37:45Z) - Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z) - Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Generative AI Voting: Fair Collective Choice is Resilient to LLM Biases and Inconsistencies [21.444936180683147]
We show, for the first time in a real-world setting, proportional representation of voters in direct democracy.
We also show that fair ballot aggregation methods, such as equal shares, prove to be a win-win: fairer voting outcomes for humans with fairer AI representation (a sketch of equal shares follows this list).
arXiv Detail & Related papers (2024-05-31T01:41:48Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Key Factors Affecting European Reactions to AI in European Full and Flawed Democracies [1.104960878651584]
This study examines the key factors that affect European reactions to artificial intelligence (AI) in the context of full and flawed democracies in Europe.
It is observed that flawed democracies tend to exhibit higher levels of trust in government entities compared to their counterparts in full democracies.
Individuals residing in flawed democracies demonstrate a more positive attitude toward AI when compared to respondents from full democracies.
arXiv Detail & Related papers (2023-10-04T22:11:28Z) - Generative Social Choice [31.99162448662916]
We introduce generative social choice, a design methodology for open-ended democratic processes.
We prove that the process provides representation guarantees when given access to oracle queries.
We empirically validate that these queries can be approximately implemented using a large language model.
arXiv Detail & Related papers (2023-09-03T23:47:21Z)
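The "equal shares" rule cited in the Generative AI Voting entry above refers to the Method of Equal Shares for approval-based committee selection: every voter receives an equal budget, and a candidate is elected only if its supporters can jointly cover its cost at the lowest possible per-voter price. The sketch below is a minimal illustration on made-up ballots; the voter names, committee size, and unit candidate costs are assumptions, not data or code from the cited paper.

```python
# A minimal sketch of the Method of Equal Shares for approval-based
# committee selection (toy ballots; every candidate costs 1).

def equal_shares(approvals, candidates, k):
    """approvals: {voter: set of approved candidates}; k: committee size.
    Every voter starts with an equal budget of k / n, so the electorate's
    total budget exactly covers k unit-cost candidates."""
    n = len(approvals)
    budget = {v: k / n for v in approvals}
    committee = []

    while len(committee) < k:
        best, best_rho = None, None
        for c in candidates:
            if c in committee:
                continue
            supporters = sorted(
                (v for v, approved in approvals.items() if c in approved),
                key=lambda v: budget[v],
            )
            if sum(budget[v] for v in supporters) < 1:
                continue  # supporters cannot jointly afford this candidate
            # Smallest per-voter price rho at which supporters, each paying
            # at most their remaining budget, cover the unit cost.
            paid, remaining, rho = 0.0, len(supporters), None
            for v in supporters:
                need = (1 - paid) / remaining
                if budget[v] >= need:
                    rho = need
                    break
                paid += budget[v]
                remaining -= 1
            if best_rho is None or rho < best_rho:
                best, best_rho = c, rho
        if best is None:
            break  # nothing affordable; a completion rule would take over here
        for v, approved in approvals.items():
            if best in approved:
                budget[v] -= min(budget[v], best_rho)
        committee.append(best)
    return committee


# Toy example: 4 voters electing a committee of 2.
votes = {"v1": {"A", "B"}, "v2": {"A"}, "v3": {"B", "C"}, "v4": {"C"}}
print(equal_shares(votes, ["A", "B", "C"], k=2))
# -> ['A', 'C'] (ties broken by candidate order)
```

Because no voter can be charged more than their equal share of the total budget, cohesive groups of voters end up represented roughly in proportion to their size, which is the fairness property the entry alludes to.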