Rethinking Explainable Machine Learning as Applied Statistics
- URL: http://arxiv.org/abs/2402.02870v3
- Date: Mon, 03 Feb 2025 14:03:45 GMT
- Title: Rethinking Explainable Machine Learning as Applied Statistics
- Authors: Sebastian Bordt, Eric Raidl, Ulrike von Luxburg
- Abstract summary: We argue that explainable machine learning needs to recognize its parallels with applied statistics.
The fact that this is scarcely being discussed in research papers is one of the main drawbacks of the current literature.
- Score: 9.03268085547399
- Abstract: In the rapidly growing literature on explanation algorithms, it often remains unclear what precisely these algorithms are for and how they should be used. In this position paper, we argue for a novel and pragmatic perspective: Explainable machine learning needs to recognize its parallels with applied statistics. Concretely, explanations are statistics of high-dimensional functions, and we should think about them analogously to traditional statistical quantities. Among other things, this implies that we must think carefully about the matter of interpretation, that is, how the explanations relate to intuitive questions that humans have about the world. The fact that this is scarcely being discussed in research papers is one of the main drawbacks of the current literature. Luckily, the analogy between explainable machine learning and applied statistics suggests fruitful ways in which research practices can be improved.
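To make the central analogy concrete: a partial dependence plot, for example, is literally a sample-mean statistic of the model function. Below is a minimal sketch in this spirit (not taken from the paper; the synthetic data, the random-forest model, and all numbers are illustrative assumptions, and NumPy and scikit-learn are assumed available).

```python
# A hedged sketch: an "explanation" (partial dependence) computed as a
# plain statistic of a high-dimensional fitted function f.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                        # five input dimensions
y = X[:, 0] ** 2 + X[:, 1] + 0.1 * rng.normal(size=500)

f = RandomForestRegressor(random_state=0).fit(X, y).predict

def partial_dependence(f, X, j, grid):
    """Estimate PD_j(v) = E_x[ f(x with x_j := v) ] by a sample mean.

    Like any statistic, this summarizes the function f; whether it also
    answers an intuitive question about the world is a separate matter
    of interpretation.
    """
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, j] = v                              # set feature j to v everywhere
        pd_values.append(f(X_mod).mean())            # average the model output
    return np.asarray(pd_values)

grid = np.linspace(-2, 2, 9)
print(np.round(partial_dependence(f, X, 0, grid), 2))  # roughly v**2 + const
```

Seen this way, familiar statistical questions, such as the sampling variability of the estimate and what population it refers to, apply to explanations just as they do to any other statistic.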
Related papers
- From Model Explanation to Data Misinterpretation: Uncovering the Pitfalls of Post Hoc Explainers in Business Research [3.7209396288545338]
We find a growing trend in business research where post hoc explanations are used to draw inferences about the data.
The ultimate goal of this paper is to caution business researchers against translating post hoc explanations of machine learning models into potentially false insights and understanding of data.
arXiv Detail & Related papers (2024-08-30T03:22:35Z)
- Machine learning and information theory concepts towards an AI Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at System 1 abilities (fast, intuitive thinking).
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z)
- Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z)
- Why Machine Learning Cannot Ignore Maximum Likelihood Estimation [1.7056768055368383]
The growth of machine learning has been accelerating, with increasing interest and publications across fields.
Here, we assert that one essential idea is for machine learning to integrate maximum likelihood for estimation of functional parameters.
arXiv Detail & Related papers (2021-10-23T01:57:40Z)
- Counterfactual Instances Explain Little [7.655239948659383]
It is important to be able to explain the decisions of machine learning systems.
An increasingly popular approach has been to provide counterfactual instance explanations.
This paper argues that a satisfactory explanation must consist of both counterfactual instances and a causal equation (a toy sketch of this pairing appears after this list).
arXiv Detail & Related papers (2021-09-20T19:40:25Z)
- Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives: first, to expose the challenges that arise in such contexts and how they affect the use of relevant and novel explanation methods; and second, to present a set of strategies that mitigate these challenges when implementing explanation methods in a real application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Interpretability and Explainability: A Machine Learning Zoo Mini-tour [4.56877715768796]
Interpretability and explainability lie at the core of many machine learning and statistical applications in medicine, economics, law, and natural sciences.
We emphasise the divide between interpretability and explainability and illustrate these two different research directions with concrete examples of the state-of-the-art.
arXiv Detail & Related papers (2020-12-03T10:11:52Z)
- High-dimensional inference: a statistical mechanics perspective [11.532173708183166]
Statistical inference is the science of drawing conclusions about some system from data.
It is by now clear that there are many connections between inference and statistical physics.
This article has been published in the issue on artificial intelligence of Ithaca, an Italian popularization-of-science journal.
arXiv Detail & Related papers (2020-10-28T10:17:21Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
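Picking up the claim in "Counterfactual Instances Explain Little" above, here is a hedged toy sketch of pairing a counterfactual instance with the equation that generates it. The linear model, its weights, and the decision threshold are invented for illustration, not taken from that paper.

```python
# A toy sketch: a counterfactual instance alone vs. the instance paired
# with the (here, linear) equation that generated it. All numbers are
# illustrative assumptions.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # weights of a known linear scoring model
b = -0.25                        # intercept; decision: accept iff w @ x + b >= 0

def counterfactual(x, j):
    """Smallest change to feature j that moves x onto the decision boundary.

    Solving w @ x' + b = 0 with x' = x except in coordinate j gives
    x'_j = x_j - (w @ x + b) / w_j. The instance says *what* change flips
    the decision; the equation says *why* it does.
    """
    x_cf = x.copy()
    x_cf[j] = x[j] - (w @ x + b) / w[j]
    return x_cf

x = np.array([0.2, 0.6, 1.0])        # a rejected case: its score is negative
x_cf = counterfactual(x, j=0)
print(w @ x + b)                     # -0.65: rejected
print(w @ x_cf + b)                  # 0.0: exactly on the decision boundary
```

Without the equation, the counterfactual instance reports a single flipped decision; with it, the reported change can be derived and generalized, which is the paper's point about what a satisfactory explanation requires.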