Foundations for the Future: Institution building for the purpose of
Artificial Intelligence governance
- URL: http://arxiv.org/abs/2110.09238v1
- Date: Fri, 1 Oct 2021 10:45:04 GMT
- Authors: Charlotte Stix
- Abstract summary: Governance efforts for artificial intelligence (AI) are taking on increasingly concrete forms.
New institutions will need to be established at the national and international levels.
This paper sketches a blueprint of such institutions and conducts in-depth investigations of three key components of any future AI governance institution.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Governance efforts for artificial intelligence (AI) are taking on
increasingly concrete forms, drawing on a variety of approaches and
instruments from hard regulation to standardisation efforts, aimed at
mitigating challenges from high-risk AI systems. To implement these and other
efforts, new institutions will need to be established on a national and
international level. This paper sketches a blueprint of such institutions, and
conducts in-depth investigations of three key components of any future AI
governance institution, exploring benefits and associated drawbacks: (1)
purpose, relating to the institution's overall goals and scope of work or
mandate; (2) geography, relating to questions of participation and the reach of
jurisdiction; and (3) capacity, the infrastructural and human make-up of the
institution. Subsequently, the paper highlights noteworthy aspects of various
institutional roles specifically around questions of institutional purpose, and
frames what these could look like in practice, by placing these debates in a
European context and proposing different iterations of a European AI Agency.
Finally, conclusions and future research directions are proposed.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
Applying these concepts to the EU AI Act uncovers potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - The potential functions of an international institution for AI safety. Insights from adjacent policy areas and recent trends [0.0]
The OECD, the G7, the G20, UNESCO, and the Council of Europe have already started developing frameworks for ethical and responsible AI governance.
This chapter reflects on what functions an international AI safety institute could perform.
arXiv Detail & Related papers (2024-08-31T10:04:53Z) - A University Framework for the Responsible use of Generative AI in Research [0.0]
Generative Artificial Intelligence (generative AI) poses both opportunities and risks for the integrity of research.
We propose a framework to help institutions promote and facilitate the responsible use of generative AI.
arXiv Detail & Related papers (2024-04-30T04:00:15Z) - Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw).
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by Generative AI for law.
arXiv Detail & Related papers (2023-11-11T04:13:37Z) - Automating the Analysis of Institutional Design in International Agreements [52.77024349608834]
The tool collects legal documents, annotates them with Institutional Grammar, and applies graph analysis to explore formal institutional design.
The system was tested on the 2003 UNESCO Convention for the Safeguarding of the Intangible Cultural Heritage.
arXiv Detail & Related papers (2023-05-26T08:57:11Z) - A multidomain relational framework to guide institutional AI research and adoption [0.0]
We argue that research efforts aimed at understanding the implications of adopting AI tend to prioritize only a handful of ideas.
We propose a simple policy and research design tool in the form of a conceptual framework to organize terms across fields.
arXiv Detail & Related papers (2023-03-17T16:33:01Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized [0.0]
This paper presents a conceptual framework to analyze and understand AI-induced field change.
The introduction of novel AI agents into new or existing fields creates a dynamic in which algorithms (re)shape organizations and institutions.
The institutional infrastructure surrounding AI-induced fields remains largely underdeveloped, which could obstruct the broader institutionalization of AI systems going forward.
arXiv Detail & Related papers (2021-08-18T14:06:08Z) - Progressing Towards Responsible AI [2.191505742658975]
The Observatory on Society and Artificial Intelligence (OSAI) grew out of the AI4EU project.
OSAI aims to stimulate reflection on a broad spectrum of AI issues (ethical, legal, social, economic, and cultural).
arXiv Detail & Related papers (2020-08-11T09:46:00Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z) - Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, at universities, of ethics committees or commissions specializing in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.