Towards Regulatable AI Systems: Technical Gaps and Policy Opportunities
- URL: http://arxiv.org/abs/2306.12609v2
- Date: Wed, 27 Mar 2024 07:11:30 GMT
- Title: Towards Regulatable AI Systems: Technical Gaps and Policy Opportunities
- Authors: Xudong Shen, Hannah Brown, Jiashu Tao, Martin Strobel, Yao Tong, Akshay Narayan, Harold Soh, Finale Doshi-Velez
- Abstract summary: We consider the technical half of the question: To what extent can AI experts vet an AI system for adherence to regulatory requirements?
We investigate this question through the lens of two public sector procurement checklists.
- Score: 26.50898051963262
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is increasing attention being given to how to regulate AI systems. As governing bodies grapple with what values to encapsulate into regulation, we consider the technical half of the question: To what extent can AI experts vet an AI system for adherence to regulatory requirements? We investigate this question through the lens of two public sector procurement checklists, identifying what we can do now, what should be possible with technical innovation, and what requirements need a more interdisciplinary approach.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Navigating the sociotechnical labyrinth: Dynamic certification for responsible embodied AI [19.959138971887395]
We argue that sociotechnical requirements shape the governance of artificially intelligent (AI) systems.
Our proposed transdisciplinary approach is designed to ensure the safe, ethical, and practical deployment of AI systems.
arXiv Detail & Related papers (2024-08-16T08:35:26Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- A Blueprint for Auditing Generative AI [0.9999629695552196]
Generative AI systems display emergent capabilities and are adaptable to a wide range of downstream tasks.
Existing auditing procedures fail to address the governance challenges posed by generative AI systems.
We propose a three-layered approach: governance audits of technology providers that design and disseminate generative AI systems, model audits of generative AI systems after pre-training but prior to their release, and application audits of applications built on top of generative AI systems.
arXiv Detail & Related papers (2024-07-07T11:56:54Z)
- Human Oversight of Artificial Intelligence and Technical Standardisation [0.0]
Within the global governance of AI, the requirement for human oversight is embodied in several regulatory formats.
The EU legislator is therefore going much further than in the past in "spelling out" the legal requirement for human oversight.
The question of the place of humans in the AI decision-making process should be given particular attention.
arXiv Detail & Related papers (2024-07-02T07:43:46Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles that AI regulation should take to make the AI Act a success in addressing AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- A Pragmatic Approach to Regulating Artificial Intelligence: A Technology Regulator's Perspective [1.614803913005309]
We present a pragmatic approach for providing a technology assurance regulatory framework.
It is proposed that such regulation should not be mandated for all AI-based systems.
arXiv Detail & Related papers (2021-04-15T16:49:29Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)