Anytime Approximate Formal Feature Attribution
- URL: http://arxiv.org/abs/2312.06973v1
- Date: Tue, 12 Dec 2023 04:24:05 GMT
- Title: Anytime Approximate Formal Feature Attribution
- Authors: Jinqiang Yu, Graham Farr, Alexey Ignatiev, Peter J. Stuckey
- Abstract summary: A key explainability question is: given this decision was made, what are the input features which contributed to the decision?
Heuristic XAI approaches suffer from the lack of quality guarantees, and often try to approximate Shapley values, which is not the same as explaining which features contribute to a decision.
A recent alternative is so-called formal feature attribution (FFA), which defines feature importance as the fraction of formal abductive explanations (AXp's) containing the given feature.
- Score: 33.195028992904355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Widespread use of artificial intelligence (AI) algorithms and machine
learning (ML) models on the one hand and a number of crucial issues pertaining
to them warrant the need for explainable artificial intelligence (XAI). A key
explainability question is: given this decision was made, what are the input
features which contributed to the decision? Although a range of XAI approaches
exist to tackle this problem, most of them have significant limitations.
Heuristic XAI approaches suffer from the lack of quality guarantees, and often
try to approximate Shapley values, which is not the same as explaining which
features contribute to a decision. A recent alternative is so-called formal
feature attribution (FFA), which defines feature importance as the fraction of
formal abductive explanations (AXp's) containing the given feature. This
measures feature importance from the view of formally reasoning about the
model's behavior. Computing FFA directly from its definition is challenging,
as it involves counting AXp's, although it can be approximated. Based on these
results, this paper makes several contributions. First, it gives compelling
evidence that computing FFA is intractable, even if the set of contrastive
formal explanations (CXp's) is provided, by proving that the problem is
#P-hard. Second, by using the duality between AXp's and CXp's, it proposes an
efficient heuristic to switch from CXp enumeration to AXp enumeration
on the fly, resulting in an adaptive explanation enumeration algorithm
effectively approximating FFA in an anytime fashion. Finally, experimental
results obtained on a range of widely used datasets demonstrate the
effectiveness of the proposed FFA approximation approach in terms of the error
of FFA approximation as well as the number of explanations computed and their
diversity given a fixed time limit.
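The FFA measure described in the abstract can be sketched in a few lines: given a (possibly partial) collection of AXp's, the attribution of a feature is the fraction of those AXp's that contain it, and an anytime algorithm simply refines this fraction as more AXp's are enumerated. This is a minimal illustrative sketch, not the paper's implementation; the function name and toy explanation sets are assumptions.

```python
# Sketch of formal feature attribution (FFA): the attribution of a
# feature is the fraction of abductive explanations (AXp's) containing
# it. With only a partial enumeration of AXp's, the same formula yields
# an anytime approximation that improves as more AXp's are found.
from collections import Counter

def ffa(axps, features):
    """Estimate FFA from a collection of AXp's, each a set of feature indices."""
    counts = Counter(f for axp in axps for f in axp)
    total = len(axps)
    return {f: counts[f] / total for f in features}

# Toy example: three AXp's over features 0..3.
axps = [{0, 1}, {0, 2}, {0, 3}]
print(ffa(axps, range(4)))  # feature 0 appears in every AXp -> attribution 1.0
```

Rerunning `ffa` after each newly enumerated AXp gives the anytime behavior: the estimate is always defined, and it converges to the exact FFA once enumeration is complete.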
Related papers
- Spurious Feature Eraser: Stabilizing Test-Time Adaptation for Vision-Language Foundation Model [86.9619638550683]
Vision-language foundation models have exhibited remarkable success across a multitude of downstream tasks due to their scalability on extensive image-text paired data.
However, these models display significant limitations when applied to downstream tasks, such as fine-grained image classification, as a result of "decision shortcuts".
arXiv Detail & Related papers (2024-03-01T09:01:53Z)
- Local Universal Explainer (LUX) -- a rule-based explainer with factual, counterfactual and visual explanations [7.673339435080445]
Local Universal Explainer (LUX) is a rule-based explainer that can generate factual, counterfactual and visual explanations.
It is based on a modified version of decision tree algorithms that allows for oblique splits and integration with feature importance XAI methods such as SHAP.
We tested our method on real and synthetic datasets and compared it with state-of-the-art rule-based explainers such as LORE, EXPLAN and Anchor.
arXiv Detail & Related papers (2023-10-23T13:04:15Z)
- Ensemble of Counterfactual Explainers [17.88531216690148]
We propose an ensemble of counterfactual explainers that boosts weak explainers, each of which provides only a subset of the desired explanation properties.
The ensemble runs weak explainers on a sample of instances and of features, and it combines their results by exploiting a diversity-driven selection function.
arXiv Detail & Related papers (2023-08-29T10:21:50Z)
- On Formal Feature Attribution and Its Approximation [37.3078859524959]
This paper proposes a way to apply the apparatus of formal XAI to the case of feature attribution based on formal explanation enumeration.
Given the practical complexity of the problem, the paper then proposes an efficient technique for approximating exact FFA.
arXiv Detail & Related papers (2023-07-07T04:20:36Z)
- On Computing Probabilistic Abductive Explanations [30.325691263226968]
The most widely studied explainable AI (XAI) approaches are unsound.
PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size.
This paper investigates practical approaches for computing relevant sets for a number of widely used classifiers.
arXiv Detail & Related papers (2022-12-12T15:47:10Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE)
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z)
- Rational Shapley Values [0.0]
Most popular tools for post-hoc explainable artificial intelligence (XAI) are either insensitive to context or difficult to summarize.
I introduce rational Shapley values, a novel XAI method that synthesizes and extends these seemingly incompatible approaches.
I leverage tools from decision theory and causal modeling to formalize and implement a pragmatic approach that resolves a number of known challenges in XAI.
arXiv Detail & Related papers (2021-06-18T15:45:21Z)
- Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies [88.0813215220342]
Some modalities can more easily contribute to the classification results than others.
We develop a method based on the log-Sobolev inequality, which bounds the functional entropy with the functional-Fisher-information.
On the two challenging multi-modal datasets VQA-CPv2 and SocialIQ, we obtain state-of-the-art results while more uniformly exploiting the modalities.
arXiv Detail & Related papers (2020-10-21T07:40:33Z)
- Naive Feature Selection: a Nearly Tight Convex Relaxation for Sparse Naive Bayes [51.55826927508311]
We propose a sparse version of naive Bayes, which can be used for feature selection.
We prove that our convex relaxation bound becomes tight as the marginal contribution of additional features decreases.
Both binary and multinomial sparse models are solvable in time almost linear in problem size.
arXiv Detail & Related papers (2019-05-23T19:30:51Z)
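The AXp/CXp duality the main paper exploits can be checked mechanically: every AXp is a minimal hitting set of the set of CXp's, and vice versa. The sketch below verifies this property for toy explanation sets; the function names and the example sets are illustrative assumptions, not taken from the paper.

```python
# Sketch of the AXp/CXp hitting-set duality: every AXp is a minimal
# hitting set of the collection of CXp's (and vice versa), which is
# what lets an enumerator switch between the two on the fly.

def hits_all(candidate, cxps):
    """True iff `candidate` intersects every CXp in the collection."""
    return all(candidate & cxp for cxp in cxps)

def is_minimal_hitting_set(candidate, cxps):
    """True iff `candidate` hits every CXp and no proper subset does."""
    if not hits_all(candidate, cxps):
        return False
    return all(not hits_all(candidate - {f}, cxps) for f in candidate)

cxps = [{0, 1}, {0, 2}]
print(is_minimal_hitting_set({0}, cxps))     # True: {0} hits both CXp's
print(is_minimal_hitting_set({1, 2}, cxps))  # True: dropping either feature misses a CXp
print(is_minimal_hitting_set({0, 1}, cxps))  # False: {0} alone already hits both
```

In this toy instance the AXp's of the hypothetical model would be exactly {0} and {1, 2}: the minimal hitting sets of the CXp collection.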
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.