From Firms to Computation: AI Governance and the Evolution of Institutions
- URL: http://arxiv.org/abs/2507.13616v1
- Date: Fri, 18 Jul 2025 02:52:58 GMT
- Title: From Firms to Computation: AI Governance and the Evolution of Institutions
- Authors: Michael S. Harre
- Abstract summary: This article synthesizes three frameworks: multi-level selection theory, Aoki's view of firms as computational processes, and Ostrom's design principles for robust institutions. We develop a framework where selection operates concurrently across organizational levels, firms implement distributed inference via game-theoretic architectures, and Ostrom-style rules evolve as alignment mechanisms that address AI-related risks. We conclude by proposing a set of design principles that operationalize alignment between humans and AI across institutional layers, enabling scalable, adaptive, and inclusive governance of agential AI systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The integration of agential artificial intelligence into socioeconomic systems requires us to reexamine the evolutionary processes that describe changes in our economic institutions. This article synthesizes three frameworks: multi-level selection theory, Aoki's view of firms as computational processes, and Ostrom's design principles for robust institutions. We develop a framework where selection operates concurrently across organizational levels, firms implement distributed inference via game-theoretic architectures, and Ostrom-style rules evolve as alignment mechanisms that address AI-related risks. This synthesis yields a multi-level Price equation expressed over nested games, providing quantitative metrics for how selection and governance co-determine economic outcomes. We examine connections to Acemoglu's work on inclusive institutions, analyze how institutional structures shape AI deployment, and demonstrate the framework's explanatory power via case studies. We propose a set of design principles that operationalize alignment between humans and AI across institutional layers, enabling scalable, adaptive, and inclusive governance of agential AI systems. Finally, we offer practical policy recommendations and identify further research needed to extend these principles into real-world implementation.
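The "multi-level Price equation" named in the abstract is not reproduced on this page. For orientation, the standard two-level Price decomposition, which the paper presumably generalizes over nested games, can be written as follows (this is the textbook form under equal-sized groups or suitable weighting, not necessarily the paper's exact notation):

```latex
% Two-level Price equation: change in the population mean trait \bar{z}.
% w_{k}, z_{k}  = mean fitness and mean trait of group k;
% w_{ki}, z_{ki} = fitness and trait of individual i in group k;
% \bar{w}       = population mean fitness.
\bar{w}\,\Delta\bar{z}
  = \underbrace{\operatorname{Cov}_{k}\!\left(w_{k},\, z_{k}\right)}_{\text{between-group selection}}
  + \underbrace{\operatorname{E}_{k}\!\left[\operatorname{Cov}_{i}\!\left(w_{ki},\, z_{ki}\right)\right]}_{\text{within-group selection}}
  + \operatorname{E}\!\left[w\,\Delta z\right]
```

The first term captures selection among groups (here, firms or institutions), the second captures selection among individuals within groups, and the final transmission term accounts for within-lineage trait change between generations.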
Related papers
- Agentic AI in Product Management: A Co-Evolutionary Model
This study explores agentic AI's transformative role in product management. It proposes a conceptual co-evolutionary framework to guide its integration across the product lifecycle.
arXiv Detail & Related papers (2025-07-01T02:32:32Z)
- Distinguishing Autonomous AI Agents from Collaborative Agentic Systems: A Comprehensive Framework for Understanding Modern Intelligent Architectures
The emergence of large language models has catalyzed two distinct yet interconnected paradigms in artificial intelligence: standalone AI Agents and collaborative Agentic AI ecosystems. This study establishes a definitive framework for distinguishing these architectures through systematic analysis of their operational principles, structural compositions, and deployment methodologies.
arXiv Detail & Related papers (2025-06-02T08:52:23Z)
- Enterprise Architecture as a Dynamic Capability for Scalable and Sustainable Generative AI adoption: Bridging Innovation and Governance in Large Organisations
Generative Artificial Intelligence is a powerful new technology with the potential to boost innovation and reshape governance in many industries. However, organisations face major challenges in scaling GenAI, including technology complexity, governance gaps and resource misalignments. This study explores how Enterprise Architecture Management can meet the complex requirements of GenAI adoption within large enterprises.
arXiv Detail & Related papers (2025-05-09T07:41:33Z)
- KERAIA: An Adaptive and Explainable Framework for Dynamic Knowledge Representation and Reasoning
KERAIA is a novel framework and software platform for symbolic knowledge engineering. It addresses the persistent challenges of representing, reasoning with, and executing knowledge in dynamic, complex, and context-sensitive environments.
arXiv Detail & Related papers (2025-05-07T10:56:05Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis
This paper investigates the interplay among AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), the authors model the strategic interactions among these actors under different regulatory regimes.
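The evolutionary-game setup mentioned above can be sketched with replicator dynamics. The two strategies, payoff matrix, and parameter values below are illustrative assumptions for a developer population choosing whether to comply with a governance rule; none of them are taken from the paper.

```python
# Replicator dynamics for a 2-strategy game: dx_i/dt = x_i * (f_i - f_bar).
# Strategies: index 0 = "comply", index 1 = "defect".
# Illustrative payoff matrix (assumed, not from the paper):
# rows = focal strategy, columns = opponent strategy.
A = [[3.0, 1.0],
     [4.0, 2.0]]

def replicator_step(x, A, dt=0.01):
    """One forward-Euler step of the replicator equation."""
    f = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]  # strategy fitnesses
    f_bar = sum(x[i] * f[i] for i in range(2))                     # mean fitness
    return [x[i] + dt * x[i] * (f[i] - f_bar) for i in range(2)]

x = [0.5, 0.5]                       # initial population shares
for _ in range(5000):
    x = replicator_step(x, A)
    s = sum(x)
    x = [xi / s for xi in x]         # renormalize against numerical drift

print(x)  # "defect" strictly dominates here, so its share approaches 1
```

Because defection strictly dominates in this assumed payoff matrix, the population converges to all-defect; modeling a regulator or media actor, as the paper does, amounts to modifying the payoffs so that compliance can be sustained.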
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Towards Developing Ethical Reasoners: Integrating Probabilistic Reasoning and Decision-Making for Complex AI Systems
A computational ethics framework is essential for AI and autonomous systems operating in complex, real-world environments. Existing approaches often lack the adaptability needed to integrate ethical principles into dynamic and ambiguous contexts. We outline the necessary ingredients for building a holistic, meta-level framework that combines intermediate representations, probabilistic reasoning, and knowledge representation.
arXiv Detail & Related papers (2025-02-28T17:25:11Z)
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications
This paper introduces a framework ensuring that AI is ethical, controllable, viable, and desirable. Several case studies validate the framework by integrating AI in both academic and practical environments.
arXiv Detail & Related papers (2024-09-25T12:39:28Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
"Responsible AI" emphasizes the critical importance of addressing bias within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Levels of AGI for Operationalizing Progress on the Path to AGI
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors.
This framework introduces levels of AGI performance, generality, and autonomy, providing a common language to compare models, assess risks, and measure progress along the path to AGI.
arXiv Detail & Related papers (2023-11-04T17:44:58Z)
- A multidomain relational framework to guide institutional AI research and adoption
We argue that research efforts aimed at understanding the implications of adopting AI tend to prioritize only a handful of ideas.
We propose a simple policy and research design tool in the form of a conceptual framework to organize terms across fields.
arXiv Detail & Related papers (2023-03-17T16:33:01Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.