Lessons from complexity theory for AI governance
- URL: http://arxiv.org/abs/2502.00012v1
- Date: Tue, 07 Jan 2025 07:56:40 GMT
- Title: Lessons from complexity theory for AI governance
- Authors: Noam Kolt, Michal Shur-Ofry, Reuven Cohen
- Abstract summary: Complexity theory can help illuminate features of AI that pose central challenges for policymakers.
We examine how efforts to govern AI are marked by deep uncertainty.
We propose a set of complexity-compatible principles concerning the timing and structure of AI governance.
- Abstract: The study of complex adaptive systems, pioneered in physics, biology, and the social sciences, offers important lessons for AI governance. Contemporary AI systems and the environments in which they operate exhibit many of the properties characteristic of complex systems, including nonlinear growth patterns, emergent phenomena, and cascading effects that can lead to tail risks. Complexity theory can help illuminate the features of AI that pose central challenges for policymakers, such as feedback loops induced by training AI models on synthetic data and the interconnectedness between AI systems and critical infrastructure. Drawing on insights from other domains shaped by complex systems, including public health and climate change, we examine how efforts to govern AI are marked by deep uncertainty. To contend with this challenge, we propose a set of complexity-compatible principles concerning the timing and structure of AI governance, and the risk thresholds that should trigger regulatory intervention.
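The synthetic-data feedback loop the abstract mentions can be made concrete with a toy simulation. The Gaussian setup below is a hypothetical illustration of the dynamic, not a model taken from the paper.

```python
# Toy synthetic-data feedback loop ("model collapse"): each generation
# fits a Gaussian to samples drawn from the previous generation's fit,
# then samples from that fit. In expectation the variance contracts,
# so tail events gradually vanish. Illustrative only; not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
n = 20                 # a small sample per generation exaggerates the effect

for gen in range(30):
    data = rng.normal(mu, sigma, n)       # "train" on current (synthetic) data
    mu, sigma = data.mean(), data.std()   # refit the model to its own output
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")
# sigma tends to decay across generations: a feedback loop with
# compounding, hard-to-reverse loss of diversity in the training data.
```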
Related papers
- Position: Emergent Machina Sapiens Urge Rethinking Multi-Agent Paradigms
We argue that AI agents should be empowered to dynamically adjust their objectives.
We call for a shift toward the emergent, self-organizing, and context-aware nature of these systems.
arXiv Detail & Related papers (2025-02-05T22:20:15Z)
- Causal Responsibility Attribution for Human-AI Collaboration
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability across diverse human-AI collaboration scenarios; a toy sketch of the SCM idea appears after this entry.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
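The paper's SCMs are not reproduced here, but the core move, comparing the actual outcome with counterfactual outcomes under alternative actions, fits in a few lines. The variable names and the but-for test below are illustrative assumptions, not the paper's framework.

```python
# Toy structural causal model for responsibility attribution in a
# human-AI team. Names and the but-for rule are illustrative only.

def outcome(hazard: bool, ai_alerts: bool, human_acts: bool) -> bool:
    """Incident occurs iff a hazard is present and nobody intervenes.
    The human intervenes only by acting on an AI alert."""
    intervened = ai_alerts and human_acts
    return hazard and not intervened

# Observed scenario: hazard present, AI failed to alert, human idle.
observed = dict(hazard=True, ai_alerts=False, human_acts=False)
actual = outcome(**observed)   # -> True (an incident occurred)

def but_for(agent_var: str) -> bool:
    """Counterfactual test: would flipping this agent's action alone
    have prevented the incident, holding everything else fixed?"""
    cf = dict(observed, **{agent_var: not observed[agent_var]})
    return actual and not outcome(**cf)

for agent in ("ai_alerts", "human_acts"):
    print(agent, "is a but-for cause:", but_for(agent))
# Flipping ai_alerts alone still yields an incident (human stays idle);
# flipping human_acts alone still yields one (there was no alert).
# Neither single flip suffices: joint causation of exactly this kind is
# why richer attribution schemes than but-for are needed.
```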
- Imagining and building wise machines: The centrality of AI metacognition
We argue that AI's shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Navigating the sociotechnical labyrinth: Dynamic certification for responsible embodied AI
We argue that sociotechnical requirements shape the governance of artificially intelligent (AI) systems.
Our proposed transdisciplinary approach is designed to ensure the safe, ethical, and practical deployment of AI systems.
arXiv Detail & Related papers (2024-08-16T08:35:26Z)
- Beyond Accidents and Misuse: Decoding the Structural Risk Dynamics of Artificial Intelligence
This paper explores the concept of structural risks associated with the rapid integration of advanced AI systems across social, economic, and political systems.
By analyzing the interactions between technological advancements and social dynamics, this study isolates three primary categories of structural risk.
We present a comprehensive framework to understand the causal chains that drive these risks, highlighting the interdependence between structural forces and the more proximate risks of misuse and system failures.
arXiv Detail & Related papers (2024-06-21T05:44:50Z)
- Attributing Responsibility in AI-Induced Incidents: A Computational Reflective Equilibrium Framework for Accountability
The pervasive integration of Artificial Intelligence (AI) has introduced complex challenges for assigning responsibility and accountability when incidents involve AI-enabled systems.
This work proposes a coherent and ethically acceptable responsibility attribution framework for all stakeholders.
arXiv Detail & Related papers (2024-04-25T18:11:03Z)
- Quantifying AI Vulnerabilities: A Synthesis of Complexity, Dynamical Systems, and Game Theory
We propose a novel approach that introduces three metrics: System Complexity Index (SCI), Lyapunov Exponent for AI Stability (LEAIS), and Nash Equilibrium Robustness (NER).
SCI quantifies the inherent complexity of an AI system, LEAIS captures its stability and sensitivity to perturbations, and NER evaluates its strategic robustness against adversarial manipulation; a minimal illustration of the Lyapunov-exponent idea appears after this entry.
arXiv Detail & Related papers (2024-04-07T07:05:59Z)
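As a hedged illustration of what a Lyapunov-exponent-style stability metric measures (the paper's LEAIS definition is not reproduced here), the sketch below estimates the largest Lyapunov exponent of the logistic map; a positive value signals sensitive dependence on perturbations.

```python
# Estimate the largest Lyapunov exponent of the logistic map
# x_{t+1} = r * x_t * (1 - x_t) by averaging log|f'(x_t)| along an
# orbit. A positive exponent means nearby trajectories diverge
# exponentially -- the perturbation sensitivity a metric like LEAIS
# is meant to capture. Illustrative; not the paper's code.
import math

def lyapunov_logistic(r: float, x0: float = 0.3,
                      burn_in: int = 1000, steps: int = 100_000) -> float:
    x = x0
    for _ in range(burn_in):                 # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(steps):
        deriv = abs(r * (1 - 2 * x))         # |f'(x)|
        acc += math.log(max(deriv, 1e-300))  # floor avoids log(0)
        x = r * x * (1 - x)
    return acc / steps

print(lyapunov_logistic(3.2))  # negative: stable period-2 regime
print(lyapunov_logistic(4.0))  # approx ln 2 = 0.693: chaotic regime
```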
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Causal Reasoning: Charting a Revolutionary Course for Next-Generation AI-Native Wireless Networks
Next-generation wireless networks (e.g., 6G) will be artificial intelligence (AI)-native.
This article introduces a novel framework for building AI-native wireless networks, grounded in the emerging field of causal reasoning.
We highlight several wireless networking challenges that can be addressed by causal discovery and representation.
arXiv Detail & Related papers (2023-09-23T00:05:39Z)
- Human-AI Coevolution
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science; a toy feedback-loop simulation appears after this entry.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
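A minimal, hypothetical simulation of the coevolutionary loop (an assumed setup, not from the paper): a recommender keeps suggesting whatever looks most like the user's recent consumption, the user's taste drifts toward what is recommended, and the diversity of what the user sees contracts.

```python
# Toy human-AI coevolution loop: the recommender suggests the catalog
# item nearest its estimate of the user's taste; consuming the item
# pulls the user's true taste toward it; the recommender updates on
# what was consumed. Purely illustrative; not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
catalog = rng.normal(size=(50, 2))   # 50 items in a 2-D "taste space"
taste = rng.normal(size=2)           # user's true preference
estimate = np.zeros(2)               # recommender's belief about it

consumed = []
for step in range(200):
    item = catalog[np.argmin(np.linalg.norm(catalog - estimate, axis=1))]
    taste = 0.95 * taste + 0.05 * item              # user drifts toward exposure
    noisy_choice = taste + 0.1 * rng.normal(size=2)
    estimate = 0.8 * estimate + 0.2 * noisy_choice  # model drifts toward user
    consumed.append(item)

consumed = np.array(consumed)
early, late = consumed[:50], consumed[-50:]
print("early exposure spread:", early.std(axis=0).mean())
print("late  exposure spread:", late.std(axis=0).mean())
# The late spread is typically smaller: mutual adaptation narrows the
# set of items the user ever sees -- a simple emergent lock-in effect.
```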
- Distributed and Democratized Learning: Philosophy and Research Challenges
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design, inspired by various interdisciplinary fields, as a guideline for realizing future Dem-AI systems; a toy sketch of the hierarchical-grouping idea appears after this entry.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
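The Dem-AI reference design is far richer, but the core mechanic of self-organized hierarchical groups can be hinted at with hierarchical model averaging in the style of hierarchical federated averaging. The grouping rule and weights below are assumptions for illustration, not the paper's algorithm.

```python
# Sketch of hierarchical model averaging in the spirit of Dem-AI's
# self-organized groups: agents cluster by parameter similarity, each
# group averages its members, then a global model averages the groups
# (weighted by size). Grouping rule and weights are assumptions.
import numpy as np

rng = np.random.default_rng(2)

# 6 agents with 4-D parameter vectors; two natural "specialist" clusters.
agents = np.vstack([rng.normal(0.0, 0.1, (3, 4)),
                    rng.normal(1.0, 0.1, (3, 4))])

def group_by_similarity(params: np.ndarray, threshold: float = 1.0):
    """Greedy self-organization: join the first group whose mean is
    within `threshold`; otherwise found a new group."""
    groups: list[list[int]] = []
    for i, p in enumerate(params):
        for g in groups:
            if np.linalg.norm(params[g].mean(axis=0) - p) < threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

groups = group_by_similarity(agents)
group_models = [agents[g].mean(axis=0) for g in groups]           # specialized
weights = np.array([len(g) for g in groups], dtype=float)
global_model = np.average(group_models, axis=0, weights=weights)  # generalized

print("groups:", groups)   # expected: [[0, 1, 2], [3, 4, 5]]
print("global model:", np.round(global_model, 3))
```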