A Survey on Methods and Metrics for the Assessment of Explainability
under the Proposed AI Act
- URL: http://arxiv.org/abs/2110.11168v1
- Date: Thu, 21 Oct 2021 14:27:24 GMT
- Authors: Francesco Sovrano, Salvatore Sapienza, Monica Palmirani, Fabio Vitali
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study discusses the interplay between the metrics used to
measure the explainability of AI systems and the proposed EU Artificial
Intelligence Act. A standardisation process is ongoing: several entities (e.g.
ISO) and scholars are discussing how to design systems that comply with the
forthcoming Act, and explainability metrics play a significant role in that
discussion. This study identifies the requirements that such a metric should
possess to ease compliance with the AI Act. It does so through an
interdisciplinary approach, i.e. by starting from the philosophical concept of
explainability and examining some metrics proposed by scholars and
standardisation entities through the lens of the explainability obligations set
by the proposed AI Act. Our analysis proposes that metrics measuring the kind
of explainability endorsed by the proposed AI Act shall be risk-focused,
model-agnostic, goal-aware, intelligible and accessible. Accordingly, we
discuss the extent to which these requirements are met by the metrics currently
under discussion.
Related papers
- Benchmarks as Microscopes: A Call for Model Metrology
Modern language models (LMs) pose a new challenge in capability assessment.
To be confident in our metrics, we need a new discipline of model metrology.
arXiv Detail & Related papers (2024-07-22T17:52:12Z)
- Responsible Artificial Intelligence: A Structured Literature Review
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- Towards a Responsible AI Metrics Catalogue: A Collection of Metrics for AI Accountability
This study bridges the accountability gap by introducing our effort towards a comprehensive metrics catalogue.
Our catalogue delineates process metrics that underpin procedural integrity, resource metrics that provide necessary tools and frameworks, and product metrics that reflect the outputs of AI systems.
arXiv Detail & Related papers (2023-11-22T04:43:16Z)
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
We perform the first thematic and gap analysis of policies and standards on explainability in the EU, US, and UK.
We find that policies are often informed by coarse notions and requirements for explanations.
We propose recommendations on how to address explainability in regulations for AI systems.
arXiv Detail & Related papers (2023-04-20T07:53:07Z)
- A multidomain relational framework to guide institutional AI research and adoption
We argue that research efforts aimed at understanding the implications of adopting AI tend to prioritize only a handful of ideas.
We propose a simple policy and research design tool in the form of a conceptual framework to organize terms across fields.
arXiv Detail & Related papers (2023-03-17T16:33:01Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability
We present a new model-agnostic metric to measure the Degree of eXplainability of correct information in an objective way.
We designed a few experiments and a user-study on two realistic AI-based systems for healthcare and finance.
arXiv Detail & Related papers (2021-09-11T17:44:13Z)
- Counterfactual Explanations as Interventions in Latent Space
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.