Flexible and Context-Specific AI Explainability: A Multidisciplinary
Approach
- URL: http://arxiv.org/abs/2003.07703v1
- Date: Fri, 13 Mar 2020 09:12:06 GMT
- Title: Flexible and Context-Specific AI Explainability: A Multidisciplinary
Approach
- Authors: Valérie Beaudouin (SES), Isabelle Bloch (IMAGES), David Bounie (IP Paris, ECOGE, SES), Stéphan Clémençon (LPMA), Florence d'Alché-Buc, James Eagan (DIVA), Winston Maxwell, Pavlo Mozharovskyi (IRMAR), Jayneel Parekh
- Abstract summary: Designers and operators of machine learning algorithms must be able to explain the inner workings, the results and the causes of failures to users, regulators, and citizens.
This paper proposes a framework for defining the "right" level of explainability in a given context.
We identify seven kinds of costs and emphasize that explanations are socially useful only when total social benefits exceed costs.
- Score: 0.8388908302793014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent enthusiasm for artificial intelligence (AI) is due principally to
advances in deep learning. Deep learning methods are remarkably accurate, but
also opaque, which limits their potential use in safety-critical applications.
To achieve trust and accountability, designers and operators of machine
learning algorithms must be able to explain the inner workings, the results and
the causes of failures of algorithms to users, regulators, and citizens. The
originality of this paper is to combine technical, legal and economic aspects
of explainability to develop a framework for defining the "right" level of
explainability in a given context. We propose three logical steps: First,
define the main contextual factors, such as who the audience of the explanation
is, the operational context, the level of harm that the system could cause, and
the legal/regulatory framework. This step will help characterize the
operational and legal needs for explanation, and the corresponding social
benefits. Second, examine the technical tools available, including post hoc
approaches (input perturbation, saliency maps...) and hybrid AI approaches.
Third, as a function of the first two steps, choose the right levels of global
and local explanation outputs, taking into account the costs involved. We
identify seven kinds of costs and emphasize that explanations are socially
useful only when total social benefits exceed costs.
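As a rough illustration of the two technical ingredients named in the abstract, the sketch below combines a model-agnostic input-perturbation importance estimate (one of the post hoc tools mentioned) with a toy version of the paper's criterion that an explanation is worth producing only when total social benefits exceed total costs. This is not the authors' implementation; the predict function, feature values, and benefit/cost figures are hypothetical placeholders chosen only to make the example runnable.

```python
import numpy as np

def perturbation_importance(predict, x, n_samples=200, noise=0.1, seed=0):
    """Post hoc, model-agnostic importance: perturb one feature at a time
    and measure the mean absolute change in the black-box model's output."""
    rng = np.random.default_rng(seed)
    baseline = predict(x.reshape(1, -1))[0]
    importances = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        x_pert = np.tile(x, (n_samples, 1))
        x_pert[:, j] += rng.normal(0.0, noise, size=n_samples)
        importances[j] = np.mean(np.abs(predict(x_pert) - baseline))
    return importances

def explanation_is_socially_useful(benefits, costs):
    """Toy form of the paper's criterion: explain only when total
    social benefits exceed total costs."""
    return sum(benefits.values()) > sum(costs.values())

if __name__ == "__main__":
    # Hypothetical linear scorer standing in for a trained black-box model.
    weights = np.array([0.8, -0.2, 0.05])
    predict = lambda X: X @ weights

    x = np.array([1.0, 2.0, 3.0])
    print("feature importances:", perturbation_importance(predict, x))

    # Hypothetical benefit/cost figures for one deployment context.
    benefits = {"user_trust": 3.0, "regulatory_compliance": 2.0}
    costs = {"design": 1.0, "computation": 0.5, "disclosure_risk": 1.0}
    print("explain?", explanation_is_socially_useful(benefits, costs))
```

In this sketch, the context-specific part of the framework (audience, harm level, legal requirements) would determine the entries of the benefit and cost dictionaries; the code only makes the comparison explicit.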
Related papers
- The Case Against Explainability [8.991619150027264]
We show that end-user Explainability is inadequate to fulfil the role of reason-giving in law.
We find that end-user Explainability excels in the fourth function, a quality which raises serious risks.
This study calls upon regulators and Machine Learning practitioners to reconsider the widespread pursuit of end-user Explainability.
arXiv Detail & Related papers (2023-05-20T10:56:19Z)
- Textual Explanations and Critiques in Recommendation Systems [8.406549970145846]
The dissertation focuses on two fundamental challenges of addressing this need.
The first involves explanation generation in a scalable and data-driven manner.
The second challenge consists in making explanations actionable, and we refer to it as critiquing.
arXiv Detail & Related papers (2022-05-15T11:59:23Z)
- An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability [3.04585143845864]
We present a new model-agnostic metric to measure the Degree of eXplainability of correct information in an objective way.
We designed a few experiments and a user-study on two realistic AI-based systems for healthcare and finance.
arXiv Detail & Related papers (2021-09-11T17:44:13Z)
- CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce agency, such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. First, to expose these challenges and how they affect the use of relevant and novel explanation methods.
Second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z)
- Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals [7.727206277914709]
We propose a principled causality-based approach for explaining black-box decision-making systems.
We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm.
We show how such counterfactuals can provide actionable recourse for individuals negatively affected by the algorithm's decision.
arXiv Detail & Related papers (2021-03-22T16:20:21Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.