Operationalizing the Blueprint for an AI Bill of Rights: Recommendations for Practitioners, Researchers, and Policy Makers
- URL: http://arxiv.org/abs/2407.08689v1
- Date: Thu, 11 Jul 2024 17:28:07 GMT
- Title: Operationalizing the Blueprint for an AI Bill of Rights: Recommendations for Practitioners, Researchers, and Policy Makers
- Authors: Alex Oesterling, Usha Bhalla, Suresh Venkatasubramanian, Himabindu Lakkaraju
- Abstract summary: Several regulatory frameworks for AI have been introduced by countries worldwide.
Many of these frameworks emphasize the need for auditing and improving the trustworthiness of AI tools.
Although these regulatory frameworks highlight the necessity of enforcement, practitioners often lack detailed guidance on implementing them.
We provide easy-to-understand summaries of state-of-the-art literature and highlight various gaps that exist between regulatory guidelines and existing AI research.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As Artificial Intelligence (AI) tools are increasingly employed in diverse real-world applications, there has been significant interest in regulating these tools. To this end, several regulatory frameworks have been introduced by different countries worldwide. For example, the European Union recently passed the AI Act, the White House issued an Executive Order on safe, secure, and trustworthy AI, and the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights (AI BoR). Many of these frameworks emphasize the need for auditing and improving the trustworthiness of AI tools, underscoring the importance of safety, privacy, explainability, fairness, and human fallback options. Although these regulatory frameworks highlight the necessity of enforcement, practitioners often lack detailed guidance on implementing them. Furthermore, the extensive research on operationalizing each of these aspects is frequently buried in technical papers that are difficult for practitioners to parse. In this write-up, we address this shortcoming by providing an accessible overview of existing literature related to operationalizing regulatory principles. We provide easy-to-understand summaries of state-of-the-art literature and highlight various gaps that exist between regulatory guidelines and existing AI research, including the trade-offs that emerge during operationalization. We hope that this work not only serves as a starting point for practitioners interested in learning more about operationalizing the regulatory guidelines outlined in the Blueprint for an AI BoR but also provides researchers with a list of critical open problems and gaps between regulations and state-of-the-art AI research. Finally, we note that this is a working paper and we invite feedback in line with the purpose of this document as described in the introduction.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks (arXiv, 2024-10-10)
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
- Open Problems in Technical AI Governance (arXiv, 2024-07-20)
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
- Position Paper: Technical Research and Talent is Needed for Effective AI Governance (arXiv, 2024-06-11)
We survey policy documents published by public-sector institutions in the EU, US, and China.
We highlight specific areas of disconnect between the technical requirements necessary for enacting proposed policy actions, and the current technical state of the art.
Our analysis motivates a call for tighter integration of the AI/ML research community within AI governance.
- Responsible Artificial Intelligence: A Structured Literature Review (arXiv, 2024-03-11)
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, first unified definition of responsible AI.
- Responsible AI Governance: A Systematic Literature Review (arXiv, 2023-12-18)
This paper aims to examine the existing literature on AI Governance.
The focus of this study is to analyse the literature to answer key questions: WHO is accountable for AI systems' governance, WHAT elements are being governed, WHEN governance occurs within the AI development life cycle, and HOW it is executed through various mechanisms like frameworks, tools, standards, policies, or models.
The findings of this study provide a foundational basis for future research and development of comprehensive governance models that align with RAI principles.
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices (arXiv, 2023-11-18)
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
- Report of the 1st Workshop on Generative AI and Law (arXiv, 2023-11-11)
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw).
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
- The risks of risk-based AI regulation: taking liability seriously (arXiv, 2023-11-03)
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
- A Pragmatic Approach to Regulating Artificial Intelligence: A Technology Regulator's Perspective (arXiv, 2021-04-15)
We present a pragmatic approach for providing a technology assurance regulatory framework.
It is proposed that such regulation should not be mandated for all AI-based systems.
- AI Governance for Businesses (arXiv, 2020-11-20)
AI governance aims at leveraging AI through effective use of data and minimization of AI-related cost and risk.
This work views AI products as systems, where key functionality is delivered by machine learning (ML) models leveraging (training) data.
Our framework decomposes AI governance into governance of data, (ML) models and (AI) systems along four dimensions.
- How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence (arXiv, 2020-04-25)
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.