Computing Power and the Governance of Artificial Intelligence
- URL: http://arxiv.org/abs/2402.08797v1
- Date: Tue, 13 Feb 2024 21:10:21 GMT
- Authors: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles
Brundage, Julian Hazell, Cullen O'Keefe, Gillian K. Hadfield, Richard Ngo,
Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert
F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, Diane Coyle
- Abstract summary: Governments and companies have started to leverage compute as a means to govern AI. Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation. Naive or poorly scoped approaches to compute governance carry significant risks in areas such as privacy, economic impacts, and centralization of power.
- Score: 51.967584623262674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computing power, or "compute," is crucial for the development and deployment
of artificial intelligence (AI) capabilities. As a result, governments and
companies have started to leverage compute as a means to govern AI. For
example, governments are investing in domestic compute capacity, controlling
the flow of compute to competing countries, and subsidizing compute access to
certain sectors. However, these efforts only scratch the surface of how compute
can be used to govern AI development and deployment. Relative to other key
inputs to AI (data and algorithms), AI-relevant compute is a particularly
effective point of intervention: it is detectable, excludable, and
quantifiable, and is produced via an extremely concentrated supply chain. These
characteristics, alongside the singular importance of compute for cutting-edge
AI models, suggest that governing compute can contribute to achieving common
policy objectives, such as ensuring the safety and beneficial use of AI. More
precisely, policymakers could use compute to facilitate regulatory visibility
of AI, allocate resources to promote beneficial outcomes, and enforce
restrictions against irresponsible or malicious AI development and usage.
However, while compute-based policies and technologies have the potential to
assist in these areas, there is significant variation in their readiness for
implementation. Some ideas are currently being piloted, while others are
hindered by the need for fundamental research. Furthermore, naive or poorly
scoped approaches to compute governance carry significant risks in areas like
privacy, economic impacts, and centralization of power. We end by suggesting
guardrails to minimize these risks from compute governance.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Generative AI Needs Adaptive Governance [0.0]
Generative AI challenges notions of governance, trust, and human agency.
This paper argues that generative AI calls for adaptive governance.
We outline actors and roles, as well as both shared and actor-specific policy activities.
arXiv Detail & Related papers (2024-06-06T23:47:14Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize areas ranging from science and medicine to education.
The prospect of such seismic changes has triggered a lively debate about the technology's risks and has resulted in calls for tighter regulation.
Such regulation is likely to put the budding field of open-source generative AI at risk.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Increased Compute Efficiency and the Diffusion of AI Capabilities [1.1838866556981258]
Training advanced AI models requires large investments in computational resources, or compute.
As hardware innovation reduces the price of compute and algorithmic advances make its use more efficient, the cost of training an AI model to a given performance falls over time.
We find that while an access effect increases the number of actors who can train models to a given performance over time, a performance effect simultaneously increases the performance available to each actor.
arXiv Detail & Related papers (2023-11-26T18:36:28Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neuronal learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers [0.8547032097715571]
Know-Your-Customer (KYC) is a standard developed by the banking sector to identify and verify client identity.
KYC could provide a mechanism for greater public oversight of frontier AI development and close loopholes in existing export controls.
Unlike the strategy of limiting access to AI chip purchases, regulating digital access to compute offers more precise controls.
arXiv Detail & Related papers (2023-10-20T16:17:29Z)
- AI Assurance using Causal Inference: Application to Public Policy [0.0]
Most AI approaches can only be represented as "black boxes" and suffer from a lack of transparency.
It is crucial not only to develop effective and robust AI systems, but also to make sure their internal processes are explainable and fair.
arXiv Detail & Related papers (2021-12-01T16:03:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.