The Case Against Explainability
- URL: http://arxiv.org/abs/2305.12167v1
- Date: Sat, 20 May 2023 10:56:19 GMT
- Title: The Case Against Explainability
- Authors: Hofit Wasserman Rozen, Niva Elkin-Koren, Ran Gilad-Bachrach
- Abstract summary: We show end-user Explainability's inadequacy to fulfill reason-giving's role in law.
We find that end-user Explainability excels in the fourth function, a quality which raises serious risks.
This study calls upon regulators and Machine Learning practitioners to reconsider the widespread pursuit of end-user Explainability.
- Score: 8.991619150027264
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As artificial intelligence (AI) becomes more prevalent, there is a growing
demand from regulators to accompany decisions made by such systems with
explanations. However, a persistent gap exists between the demand for a
meaningful right to explanation and the ability of Machine Learning systems to
deliver on such a legal requirement. The regulatory appeal towards "a right to
explanation" of AI systems can be attributed to the significant role of
explanations, part of the notion called reason-giving, in law. Therefore, in
this work we examine reason-giving's purposes in law to analyze whether reasons
provided by end-user Explainability can adequately fulfill them.
We find that reason-giving's legal purposes include: (a) making a better and
more just decision, (b) facilitating due process, (c) authenticating human
agency, and (d) enhancing the decision makers' authority. Using this
methodology, we demonstrate end-user Explainability's inadequacy to fulfill
reason-giving's role in law, given that reason-giving's functions rely on its
impact on a human decision maker. Thus, end-user Explainability fails, or is
unsuitable, to fulfill the first, second, and third legal functions. In
contrast,
we find that end-user Explainability excels in the fourth function, a quality
which raises serious risks considering recent end-user Explainability research
trends, Large Language Models' capabilities, and the ability to manipulate
end-users by both humans and machines. Hence, we suggest that in some cases the
right to explanation of AI systems could bring more harm than good to end
users. Accordingly, this study carries some important policy ramifications, as
it calls upon regulators and Machine Learning practitioners to reconsider the
widespread pursuit of end-user Explainability and a right to explanation of AI
systems.
Related papers
- Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills [24.04643864795939]
People's decision-making abilities often fail to improve when they rely on AI for decision support.
Most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking.
We introduce a framework for generating human-centered contrastive explanations that explain the difference between the AI's choice and a predicted, likely human choice.
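As a rough illustration of this idea (a sketch only, not the paper's actual method; all names and the linear-weight scoring here are hypothetical), a contrastive explainer can compare the features driving the AI's choice against those driving the predicted human choice:

```python
# Minimal sketch of a human-centered contrastive explanation (hypothetical
# names, not the paper's implementation). The explainer contrasts the AI's
# choice with the choice a human is predicted to make.
from typing import Callable, Dict

def contrastive_explanation(
    features: Dict[str, float],
    ai_choice: str,
    predict_human_choice: Callable[[Dict[str, float]], str],
    weights: Dict[str, Dict[str, float]],  # per-choice linear feature weights
) -> str:
    human_choice = predict_human_choice(features)
    if human_choice == ai_choice:
        return f"The AI's choice '{ai_choice}' matches the predicted human choice."
    # Score how much more each feature supports the AI's choice than the
    # predicted human choice, and report the biggest contributor.
    gap = {
        f: (weights[ai_choice].get(f, 0.0) - weights[human_choice].get(f, 0.0)) * v
        for f, v in features.items()
    }
    top_feature = max(gap, key=gap.get)
    return (f"The AI chose '{ai_choice}' rather than the likely human choice "
            f"'{human_choice}' mainly because of '{top_feature}'.")
```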
arXiv Detail & Related papers (2024-10-05T18:21:04Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools that can be used to analyze this influence.
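A plausible formalization of this notion of benefit, assuming standard counterfactual notation rather than the paper's exact definition, is the expected counterfactual gain from a positive decision $D=1$ over a negative one:

$$\Delta(x) \;=\; \mathbb{E}\big[\, Y_{D=1} - Y_{D=0} \mid X = x \,\big]$$

where $Y_{D=d}$ denotes the potential outcome under decision $d$; the fairness analysis then asks whether $\Delta(x)$ is driven by the protected attribute.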
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making [25.18203172421461]
We argue explanations are only useful to the extent that they allow a human decision maker to verify the correctness of an AI's prediction.
We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
arXiv Detail & Related papers (2023-05-12T18:28:04Z)
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow the examination and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Argumentation-based Agents that Explain their Decisions [0.0]
We focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations of their reasoning.
Our proposal is based on argumentation theory: we use arguments to represent the reasons that lead an agent to make a decision.
We propose two types of explanation: partial and complete.
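As a toy illustration of the partial/complete distinction (hypothetical structures and names, not the authors' formal model), a partial explanation could expose only the argument that directly supports the decision, while a complete one could walk the full chain of supporting arguments:

```python
# Toy sketch (hypothetical, not the paper's formal model): arguments form a
# support chain; a partial explanation shows only the decisive argument,
# a complete explanation walks the whole chain of reasons.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Argument:
    claim: str
    support: Optional["Argument"] = None  # the argument this one rests on

def partial_explanation(decision_arg: Argument) -> str:
    return f"Decision because: {decision_arg.claim}"

def complete_explanation(decision_arg: Argument) -> List[str]:
    chain, arg = [], decision_arg
    while arg is not None:
        chain.append(arg.claim)
        arg = arg.support
    return chain

belief = Argument("the patient's fever exceeds 39C")
desire = Argument("reduce health risk", support=belief)
intent = Argument("recommend immediate treatment", support=desire)
print(partial_explanation(intent))   # only the decisive claim
print(complete_explanation(intent))  # the full chain of reasons
```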
arXiv Detail & Related papers (2020-09-13T02:08:10Z)
- Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach [0.8388908302793014]
Machine learning algorithms must be able to explain the inner workings, the results and the causes of failures to users, regulators, and citizens.
This paper proposes a framework for defining the "right" level of explainability in a given context.
We identify seven kinds of costs and emphasize that explanations are socially useful only when total social benefits exceed costs.
arXiv Detail & Related papers (2020-03-13T09:12:06Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.