Levers of Power in the Field of AI
- URL: http://arxiv.org/abs/2511.03859v1
- Date: Wed, 05 Nov 2025 21:03:57 GMT
- Title: Levers of Power in the Field of AI
- Authors: Tammy Mackenzie, Sukriti Punj, Natalie Perez, Sreyoshi Bhaduri, Branislav Radeljic,
- Abstract summary: The study explores how individuals experience and exercise levers of power, which are presented as social mechanisms that shape institutional responses to technological change. The study reports on the responses to personalized questionnaires designed to gather insight into a decision maker's institutional purview.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper examines how decision makers in academia, government, business, and civil society navigate questions of power in implementations of artificial intelligence. The study explores how individuals experience and exercise levers of power, which are presented as social mechanisms that shape institutional responses to technological change. The study reports on the responses to personalized questionnaires designed to gather insight into a decision maker's institutional purview, based on an institutional governance framework developed from the work of Neo-institutionalists. Findings present the anonymized, real responses and circumstances of respondents in the form of twelve fictional personas of high-level decision makers from North America and Europe. These personas illustrate how personal agency, organizational logics, and institutional infrastructures may intersect in the governance of AI. The decision makers' responses to the questionnaires then inform a discussion of the field-level personal power of decision makers, methods of fostering institutional stability in times of change, and methods of influencing institutional change in the field of AI. The final section of the discussion presents a table of the dynamics of the levers of power in the field of AI for change makers, along with five testable hypotheses for institutional and social movement researchers. In summary, this study provides insight into the means for policymakers within institutions and their counterparts in civil society to personally engage with AI governance.
Related papers
- The Digital Gorilla: Rebalancing Power in the Age of AI [0.0]
The article offers a conceptual foundation for AI governance by treating such systems as a fourth societal actor. It develops a Four Societal Actors framework that maps how power flows among these actors across five power modalities. It advances a federalized, polycentric governance architecture and institutionalizes dynamic checks and balances.
arXiv Detail & Related papers (2026-02-23T17:46:54Z)
- Structural transparency of societal AI alignment through Institutional Logics [2.320417845168326]
We develop a framework of structural transparency for analyzing organizational and institutional decisions concerning AI alignment. We operationalize the framework through five analytical components, each with an accompanying "analyst recipe".
arXiv Detail & Related papers (2026-02-09T03:51:20Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- AI and the Transformation of Accountability and Discretion in Urban Governance [1.9152655229960793]
The study synthesizes insights to propose guiding principles for responsible AI integration in decision-making processes. The analysis argues that AI does not simply restrict or enhance discretion but redistributes it across institutional levels. It may simultaneously strengthen managerial oversight, enhance decision-making consistency, and improve operational efficiency.
arXiv Detail & Related papers (2025-02-18T18:11:39Z)
- Aligning AI with Public Values: Deliberation and Decision-Making for Governing Multimodal LLMs in Political Video Analysis [48.14390493099495]
How AI models should deal with political topics has been discussed, but it remains challenging and requires better governance. This paper examines the governance of large language models through individual and collective deliberation, focusing on politically sensitive videos.
arXiv Detail & Related papers (2024-09-15T03:17:38Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Studying Up Public Sector AI: How Networks of Power Relations Shape Agency Decisions Around AI Design and Use [29.52245155918532]
We study public sector AI by focusing on those who have the power and responsibility to make decisions about the role that AI tools will play in their agency.
Our findings shed light on how infrastructural, legal, and social factors create barriers and disincentives to the involvement of a broader range of stakeholders in decisions about AI design and adoption.
arXiv Detail & Related papers (2024-05-21T02:31:26Z)
- A University Framework for the Responsible use of Generative AI in Research [0.0]
Generative Artificial Intelligence (generative AI) poses both opportunities and risks for the integrity of research.
We propose a framework to help institutions promote and facilitate the responsible use of generative AI.
arXiv Detail & Related papers (2024-04-30T04:00:15Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Foundations for the Future: Institution building for the purpose of Artificial Intelligence governance [0.0]
Governance efforts for artificial intelligence (AI) are taking on increasingly more concrete forms.
New institutions will need to be established on a national and international level.
This paper sketches a blueprint of such institutions, and conducts in-depth investigations of three key components of any future AI governance institutions.
arXiv Detail & Related papers (2021-10-01T10:45:04Z)
- A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized [0.0]
This paper presents a conceptual framework to analyze and understand AI-induced field-change.
The introduction of novel AI-agents into new or existing fields creates a dynamic in which algorithms (re)shape organizations and institutions.
The institutional infrastructure surrounding AI-induced fields is generally underdeveloped, which could be an obstacle to the broader institutionalization of AI systems going forward.
arXiv Detail & Related papers (2021-08-18T14:06:08Z)
- "A cold, technical decision-maker": Can AI provide explainability, negotiability, and humanity? [47.36687555570123]
We present results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants.
We discuss participants' consideration of humanity in decision-making, and introduce the concept of 'negotiability,' the ability to go beyond formal criteria and work flexibly around the system.
arXiv Detail & Related papers (2020-12-01T22:36:54Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.