Explainability in AI Policies: A Critical Review of Communications,
Reports, Regulations, and Standards in the EU, US, and UK
- URL: http://arxiv.org/abs/2304.11218v1
- Date: Thu, 20 Apr 2023 07:53:07 GMT
- Title: Explainability in AI Policies: A Critical Review of Communications,
Reports, Regulations, and Standards in the EU, US, and UK
- Authors: Luca Nannini, Agathe Balayn, Adam Leon Smith
- Abstract summary: We perform the first thematic and gap analysis of policies and standards on explainability in the EU, US, and UK.
We find that policies are often informed by coarse notions and requirements for explanations.
We propose recommendations on how to address explainability in regulations for AI systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Public attention to the explainability of artificial intelligence (AI)
systems has risen in recent years, creating demand for methodologies that
support human oversight. This has translated into a proliferation of research
outputs, notably from Explainable AI, aimed at enhancing transparency and
control for system debugging and monitoring, and at making system processes
and outputs intelligible for user-facing services. Yet such outputs remain
difficult to adopt in practice, owing to the lack of a common regulatory
baseline and the contextual nature of explanations. Governmental policies are
now attempting to address this need; however, it remains unclear to what
extent published communications, regulations, and standards adopt an informed
perspective that supports research, industry, and civil interests. In this
study, we perform the first thematic and gap analysis of this body of policies
and standards on explainability in the EU, US, and UK. Through a rigorous
survey of policy documents, we first contribute an overview of governmental
regulatory trajectories within AI explainability and its sociotechnical
impacts. We find that policies are often informed by coarse notions of, and
requirements for, explanations. This may stem from a tendency to frame
explanations primarily as a risk-management tool for AI oversight, but also
from the lack of consensus on what constitutes a valid algorithmic explanation
and on how feasible it is to implement and deploy such explanations across an
organization's stakeholders. Informed by AI explainability research, we
conduct a gap analysis of existing policies, leading us to formulate a set of
recommendations on how to address explainability in regulations for AI
systems, in particular the definition, feasibility, and usability of
explanations, as well as the allocation of accountability to explanation
providers.
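To make concrete what the contested notion of a "valid algorithmic explanation" can look like in practice, below is a minimal, illustrative sketch (our own addition, not taken from the paper) of permutation feature importance, one widely used XAI technique: each input feature is scored by how much model accuracy drops when its values are shuffled. The model, dataset, and feature names are hypothetical stand-ins.

```python
# Minimal sketch of permutation feature importance, a common form of
# post-hoc "algorithmic explanation" (illustrative only; not the paper's method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical classification task and model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = model.score(X, y)  # accuracy with all features intact

for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffle column j to break its relationship with the target.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - model.score(X_perm, y)  # accuracy lost without feature j
    print(f"feature {j}: importance ~ {drop:.3f}")
```

Even this simple example surfaces the questions the surveyed policies leave open: whether such a feature-level score counts as an adequate explanation, for whom, and who is accountable for providing it.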
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- The Artificial Intelligence Act: critical overview [0.0]
This article provides a critical overview of the recently approved Artificial Intelligence Act.
It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689.
The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose.
arXiv Detail & Related papers (2024-08-30T21:38:02Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- False Sense of Security in Explainable Artificial Intelligence (XAI) [3.298597939573779]
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z)
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- 'Team-in-the-loop': Ostrom's IAD framework 'rules in use' to map and measure contextual impacts of AI [0.0]
This article explores how the 'rules in use' from Ostrom's Institutional Analysis and Development (IAD) framework can be developed as a context-analysis approach for AI.
arXiv Detail & Related papers (2023-03-24T14:01:00Z)
- Tackling problems, harvesting benefits -- A systematic review of the regulatory debate around AI [0.0]
How to integrate an emerging and all-pervasive technology such as AI into the structures and operations of our society is a question of contemporary politics, science, and public debate.
This article analyzes the academic debate around the regulation of artificial intelligence (AI).
The analysis concentrates on societal risks and harms, questions of regulatory responsibility, and possible adequate policy frameworks.
arXiv Detail & Related papers (2022-09-07T11:29:30Z)
- Accountability in AI: From Principles to Industry-specific Accreditation [4.033641609534416]
Recent AI-related scandals have shone a spotlight on accountability in AI.
This paper draws on literature from public policy and governance to make two contributions.
arXiv Detail & Related papers (2021-10-08T16:37:11Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)