Most General Explanations of Tree Ensembles (Extended Version)
- URL: http://arxiv.org/abs/2505.10991v3
- Date: Tue, 20 May 2025 02:10:09 GMT
- Title: Most General Explanations of Tree Ensembles (Extended Version)
- Authors: Yacine Izza, Alexey Ignatiev, Sasha Rubin, Joao Marques-Silva, Peter J. Stuckey
- Abstract summary: A key question about an AI system is ``why was this decision made this way?''. We show how to find a most general abductive explanation for an AI decision.
- Score: 39.367378161774596
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) is critical for attaining trust in the operation of AI systems. A key question about an AI system is ``why was this decision made this way?''. Formal approaches to XAI use a formal model of the AI system to identify abductive explanations. While abductive explanations may be applicable to a large number of inputs sharing the same concrete values, more general explanations may be preferred for numeric inputs. So-called inflated abductive explanations give intervals for each feature, ensuring that any input whose values fall within these intervals is still guaranteed to yield the same prediction. Inflated explanations cover a larger portion of the input space, and hence are deemed more general explanations. But there can be many (inflated) abductive explanations for an instance. Which is the best? In this paper, we show how to find a most general abductive explanation for an AI decision. This explanation covers as much of the input space as possible, while still being a correct formal explanation of the model's behaviour. Given that we only want to give a human one explanation for a decision, the most general explanation gives us the explanation with the broadest applicability, and hence the one most likely to seem sensible. (The paper has been accepted at the IJCAI 2025 conference.)
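To make the definition concrete, here is a Python sketch (not from the paper; the scikit-learn random forest, the Iris data, and the samples_agree helper are all assumptions introduced for illustration) that checks empirically whether a candidate inflated explanation, given as one interval per explanation feature, keeps the ensemble's prediction unchanged while the remaining features vary freely over their observed range. The paper's approach establishes this property formally over the whole input space and searches for the most general such explanation; random sampling can only refute a candidate, never certify it.

```python
# Illustrative sketch only: an empirical check of what an inflated abductive
# explanation asserts. Model, data, and helper names are assumptions for
# illustration; the paper computes explanations by formal reasoning over the
# tree ensemble rather than by sampling.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier


def samples_agree(model, instance, intervals, bounds, n_samples=10_000, seed=0):
    """Sample points that respect `intervals` on the explanation features and
    range freely (within `bounds`) on all other features; return True if the
    model's prediction never deviates from its prediction on `instance`."""
    rng = np.random.default_rng(seed)
    target = model.predict(instance.reshape(1, -1))[0]
    lows = np.array([intervals.get(j, bounds[j])[0] for j in range(len(instance))])
    highs = np.array([intervals.get(j, bounds[j])[1] for j in range(len(instance))])
    points = rng.uniform(lows, highs, size=(n_samples, len(instance)))
    return bool(np.all(model.predict(points) == target))


X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

instance = X[0]                                   # the decision being explained
bounds = {j: (X[:, j].min(), X[:, j].max()) for j in range(X.shape[1])}
# Hypothetical candidate: intervals on petal length (2) and petal width (3);
# the sepal features are dropped from the explanation entirely.
candidate = {2: (1.0, 1.9), 3: (0.1, 0.6)}

print(samples_agree(model, instance, candidate, bounds))
```

Under this reading, a most general explanation is one whose intervals (together with the features left unconstrained) cover the largest possible portion of the input space while the invariance property holds for every point, not merely for sampled ones.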
Related papers
- Axe the X in XAI: A Plea for Understandable AI [0.0]
I argue that the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation.
It would be more fruitful to use the label "understandable AI" to avoid the confusion that surrounds the goal and purposes of XAI.
arXiv Detail & Related papers (2024-03-01T06:28:53Z) - Delivering Inflated Explanations [17.646704122091087]
A formal approach to explainability builds a formal model of the AI system.
A formal abductive explanation is a set of features such that, if they take the given values, the decision will always be the same.
In this paper we define inflated explanations, which give a set of features and, for each feature, a set of values, such that the decision remains unchanged.
arXiv Detail & Related papers (2023-06-27T07:54:18Z) - HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z) - Eliminating The Impossible, Whatever Remains Must Be True [46.39428193548396]
We show how one can apply background knowledge to give more succinct "why" formal explanations.
We also show how to use existing rule induction techniques to efficiently extract background information from a dataset.
arXiv Detail & Related papers (2022-06-20T03:18:14Z) - A psychological theory of explainability [5.715103211247915]
We propose a theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation.
Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give.
arXiv Detail & Related papers (2022-05-17T15:52:24Z) - Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Explainable AI without Interpretable Model [0.0]
It has become more important than ever that AI systems be able to explain the reasoning behind their results to end-users.
Most Explainable AI (XAI) methods are based on extracting an interpretable model that can be used for producing explanations.
The notions of Contextual Importance and Utility (CIU) presented in this paper make it possible to produce human-like explanations of black-box outcomes directly.
arXiv Detail & Related papers (2020-09-29T13:29:44Z)