Oversight for Frontier AI through a Know-Your-Customer Scheme for
Compute Providers
- URL: http://arxiv.org/abs/2310.13625v1
- Date: Fri, 20 Oct 2023 16:17:29 GMT
- Title: Oversight for Frontier AI through a Know-Your-Customer Scheme for
Compute Providers
- Authors: Janet Egan and Lennart Heim
- Abstract summary: Know-Your-Customer (KYC) is a standard developed by the banking sector to identify and verify client identity.
KYC could provide a mechanism for greater public oversight of frontier AI development and close loopholes in existing export controls.
Unlike the strategy of limiting access to AI chip purchases, regulating digital access to compute offers more precise controls.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To address security and safety risks stemming from highly capable artificial
intelligence (AI) models, we propose that the US government should ensure
compute providers implement Know-Your-Customer (KYC) schemes. Compute - the
computational power and infrastructure required to train and run these AI
models - is emerging as a node for oversight. KYC, a standard developed by the
banking sector to identify and verify client identity, could provide a
mechanism for greater public oversight of frontier AI development and close
loopholes in existing export controls. Such a scheme has the potential to
identify and warn stakeholders of potentially problematic and/or sudden
advancements in AI capabilities, build government capacity for AI regulation,
and allow for the development and implementation of more nuanced and targeted
export controls. Unlike the strategy of limiting access to AI chip purchases,
regulating digital access to compute offers more precise controls, allowing
regulatory control over compute quantities, as well as the flexibility to
suspend access at any time. To enact a KYC scheme, the US government will need
to work closely with industry to (1) establish a dynamic threshold of compute
that effectively captures high-risk frontier model development, while
minimizing imposition on developers not engaged in frontier AI; (2) set
requirements and guidance for compute providers to keep records and report
high-risk entities; (3) establish government capacity that allows for
co-design, implementation, administration and enforcement of the scheme; and
(4) engage internationally to promote international alignment with the scheme
and support its long-term efficacy. While the scheme will not address all AI
risks, it complements proposed solutions by allowing for a more precise and
flexible approach to controlling the development of frontier AI models and
unwanted AI proliferation.
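The scheme's core mechanism - points (1) and (2) above - amounts to a compute provider keeping per-customer records and flagging accounts whose usage crosses a dynamic threshold. The Python sketch below illustrates that bookkeeping under stated assumptions: the threshold value, the FLOP-based usage metric, and all names (KYCRegistry, CustomerRecord, log_usage) are hypothetical choices for illustration, not details specified in the paper.

```python
from dataclasses import dataclass

# Illustrative sketch only. The threshold value, the FLOP-based metric, and all
# record fields are assumptions made for demonstration, not figures or
# requirements taken from the paper.
FRONTIER_THRESHOLD_FLOP = 1e26  # hypothetical "dynamic threshold" of training compute


@dataclass
class CustomerRecord:
    """KYC record a compute provider might keep for a verified customer."""
    customer_id: str
    verified_identity: str                  # outcome of identity verification
    cumulative_training_flop: float = 0.0   # running total of compute used
    flagged_for_report: bool = False        # has usage crossed the threshold?


class KYCRegistry:
    """Tracks per-customer compute usage against a (dynamic) threshold."""

    def __init__(self, threshold_flop: float = FRONTIER_THRESHOLD_FLOP):
        self.threshold_flop = threshold_flop
        self.records: dict[str, CustomerRecord] = {}

    def register(self, customer_id: str, verified_identity: str) -> None:
        """Identify and verify the client, then keep a record (KYC step)."""
        self.records[customer_id] = CustomerRecord(customer_id, verified_identity)

    def log_usage(self, customer_id: str, training_flop: float) -> None:
        """Record compute usage and flag customers who cross the threshold."""
        record = self.records[customer_id]
        record.cumulative_training_flop += training_flop
        if record.cumulative_training_flop >= self.threshold_flop:
            record.flagged_for_report = True

    def reportable(self) -> list[CustomerRecord]:
        """Customers a provider might report as potential frontier developers."""
        return [r for r in self.records.values() if r.flagged_for_report]


if __name__ == "__main__":
    registry = KYCRegistry()
    registry.register("acct-001", "Example Frontier Labs, Inc.")
    registry.log_usage("acct-001", 6e25)
    registry.log_usage("acct-001", 5e25)  # cumulative usage crosses the threshold
    for record in registry.reportable():
        print(f"report candidate: {record.customer_id} "
              f"({record.cumulative_training_flop:.2e} FLOP)")
```

In practice the paper envisions the threshold being co-designed with industry and adjusted over time, with reports going to government rather than printed locally, so a real system would differ substantially from this toy registry.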
Related papers
- How Could Generative AI Support Compliance with the EU AI Act? A Review for Safe Automated Driving Perception (2024-08-30)
  Deep Neural Networks (DNNs) have become central to the perception functions of autonomous vehicles. The European Union (EU) Artificial Intelligence (AI) Act aims to address these challenges by establishing stringent norms and standards for AI systems. This review summarizes the requirements the EU AI Act places on DNN-based perception systems and systematically categorizes existing generative AI applications in automated driving.
- Risks and Opportunities of Open-Source Generative AI (2024-05-14)
  Applications of generative AI are expected to revolutionize a number of areas, ranging from science and medicine to education. The potential for these seismic changes has triggered a lively debate about the risks of the technology and has resulted in calls for tighter regulation, which is likely to put the budding field of open-source generative AI at risk.
- Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation (2024-03-13)
  Argues that compute providers should have legal obligations and ethical responsibilities associated with AI development and deployment, and that they can play an essential role in a regulatory ecosystem via four key capacities.
- Computing Power and the Governance of Artificial Intelligence (2024-02-13)
  Governments and companies have started to leverage compute as a means to govern AI. Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation, and naive or poorly scoped approaches to compute governance carry significant risks for privacy, economic impacts, and centralization of power.
- The risks of risk-based AI regulation: taking liability seriously (2023-11-03)
  The development and regulation of AI appear to have reached a critical stage, with some experts calling for a moratorium on the training of AI systems more powerful than GPT-4. This paper analyses the most advanced legal proposal, the European Union's AI Act.
- Managing extreme AI risks amid rapid progress (2023-10-26)
  Describes risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems. There is a lack of consensus about how exactly such risks arise and how to manage them, and present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness while barely addressing autonomous systems.
- International Institutions for Advanced AI (2023-07-10)
  International institutions may have an important role to play in ensuring advanced AI systems benefit humanity. The paper identifies a set of governance functions that could be performed at the international level to address these challenges and groups them into four institutional models that exhibit internal synergies and have precedents in existing organizations.
- AI and the EU Digital Markets Act: Addressing the Risks of Bigness in Generative AI (2023-07-07)
  Argues for integrating certain AI software as core platform services and classifying certain developers as gatekeepers under the DMA. As the EU considers generative-AI-specific rules and possible DMA amendments, the paper offers insights toward diversity and openness in generative AI services.
- Frontier AI Regulation: Managing Emerging Risks to Public Safety (2023-07-06)
  "Frontier AI" models could possess dangerous capabilities sufficient to pose severe risks to public safety. Industry self-regulation is an important first step, and the authors propose an initial set of safety standards.
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (2020-04-15)
  AI developers need to make verifiable claims to which they can be held accountable. This report suggests steps that different stakeholders can take to improve the verifiability of claims made about AI systems, analyzing ten mechanisms for this purpose - spanning institutions, software, and hardware - and making recommendations aimed at implementing, exploring, or improving those mechanisms.