On Computing Relevant Features for Explaining NBCs
- URL: http://arxiv.org/abs/2207.04748v1
- Date: Mon, 11 Jul 2022 10:12:46 GMT
- Title: On Computing Relevant Features for Explaining NBCs
- Authors: Yacine Izza and Joao Marques-Silva
- Abstract summary: Model-agnostic explainable AI (XAI) can produce incorrect explanations.
PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size.
This paper investigates the complexity of computing sets of relevant features for Naive Bayes classifiers (NBCs) and shows that, in practice, these are easy to compute.
- Score: 5.71097144710995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the progress observed with model-agnostic explainable AI (XAI), it is the case that model-agnostic XAI can produce incorrect explanations. One alternative is the so-called formal approaches to XAI, which include PI-explanations. Unfortunately, PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size. The computation of relevant features serves to trade off probabilistic precision for the number of features in an explanation. However, even for very simple classifiers, the complexity of computing sets of relevant features is prohibitive. This paper investigates the computation of relevant sets for Naive Bayes Classifiers (NBCs), and shows that, in practice, these are easy to compute. Furthermore, the experiments confirm that succinct sets of relevant features can be obtained with NBCs.
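To pin down the trade-off, the following is a minimal brute-force sketch, not the paper's algorithm; the toy model, probabilities, and threshold are all invented. A set S of features is δ-relevant for an instance if fixing the instance's values on S preserves the predicted class with probability at least δ when the remaining features are resampled; the sketch checks this by enumeration and greedily shrinks S.

```python
# Brute-force sketch of delta-relevant sets for a toy binary NBC.
# Enumerating completions is exponential; the paper's point is that
# for NBCs such sets can be computed far more efficiently in practice.
from itertools import product
import math

log_prior = {0: math.log(0.5), 1: math.log(0.5)}
log_lik = {  # log P(x_i = v | c), indexed log_lik[c][i][v]; invented numbers
    0: [{0: math.log(0.8), 1: math.log(0.2)},
        {0: math.log(0.6), 1: math.log(0.4)},
        {0: math.log(0.9), 1: math.log(0.1)}],
    1: [{0: math.log(0.3), 1: math.log(0.7)},
        {0: math.log(0.4), 1: math.log(0.6)},
        {0: math.log(0.5), 1: math.log(0.5)}],
}

def predict(x):
    score = lambda c: log_prior[c] + sum(log_lik[c][i][v] for i, v in enumerate(x))
    return max((0, 1), key=score)

def precision(x, S, c):
    """P(predict(z) == c) over uniform completions z agreeing with x on S."""
    free = [i for i in range(len(x)) if i not in S]
    hits = total = 0
    for vals in product((0, 1), repeat=len(free)):
        z = list(x)
        for i, v in zip(free, vals):
            z[i] = v
        total += 1
        hits += predict(tuple(z)) == c
    return hits / total

x, delta = (1, 1, 0), 0.9
c = predict(x)
S = set(range(len(x)))
for i in range(len(x)):  # greedy deletion: drop features while staying delta-relevant
    if precision(x, S - {i}, c) >= delta:
        S -= {i}
print(f"class {c}: delta-relevant set {sorted(S)} "
      f"with precision {precision(x, S, c):.2f}")
```

On this toy instance the three-feature instance shrinks to a single relevant feature, illustrating the succinctness the abstract reports.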
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast natural language explanations (NLEs) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z) - Incremental XAI: Memorable Understanding of AI with Incremental Explanations [13.460427339680168]
We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge by incrementally receiving more details.
We introduce Incremental XAI to automatically partition explanations for general and atypical instances.
Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases.
arXiv Detail & Related papers (2024-04-10T04:38:17Z) - Even-if Explanations: Formal Foundations, Priorities and Complexity [18.126159829450028]
We show that both linear and tree-based models are strictly more interpretable than neural networks.
We introduce a preference-based framework that enables users to personalize explanations based on their preferences.
arXiv Detail & Related papers (2024-01-17T11:38:58Z) - Tractable Bounding of Counterfactual Queries by Knowledge Compilation [51.47174989680976]
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters.
We show how a single symbolic knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values.
arXiv Detail & Related papers (2023-10-05T07:10:40Z) - From Robustness to Explainability and Back Again [0.685316573653194]
The paper addresses the limitation of scalability of formal explainability, and proposes novel algorithms for computing formal explanations.
The proposed algorithm computes explanations by instead answering a number of robustness queries, and the number of such queries is at most linear in the number of features (see the sketch below).
The experiments validate the practical efficiency of the proposed approach.
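As a concrete illustration of the query-based scheme (not the paper's implementation; the toy model and the brute-force oracle are ours, standing in for the formal robustness tools the paper targets):

```python
# Explanation via robustness queries: one query per feature, so the
# oracle is invoked a number of times linear in the feature count.
from itertools import product

def make_brute_force_oracle(predict, domains):
    def robust(x, fixed):
        """Is the prediction invariant over all completions that agree
        with x on the features in `fixed`?"""
        target = predict(x)
        free = [i for i in range(len(x)) if i not in fixed]
        for vals in product(*(domains[i] for i in free)):
            z = list(x)
            for i, v in zip(free, vals):
                z[i] = v
            if predict(tuple(z)) != target:
                return False
        return True
    return robust

def explain(x, robust):
    """Subset-minimal abductive explanation with exactly len(x) queries."""
    fixed = set(range(len(x)))
    for i in range(len(x)):
        if robust(x, fixed - {i}):  # feature i is redundant: free it
            fixed -= {i}
    return fixed

# Toy model: predict 1 iff x0 = 1 or (x1 = 1 and x2 = 1)
predict = lambda x: int(x[0] == 1 or (x[1] == 1 and x[2] == 1))
oracle = make_brute_force_oracle(predict, [(0, 1)] * 3)
print(explain((1, 0, 0), oracle))  # -> {0}
```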
arXiv Detail & Related papers (2023-06-05T17:21:05Z) - Logical Message Passing Networks with One-hop Inference on Atomic Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples the Knowledge Graph embeddings from the neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
arXiv Detail & Related papers (2023-01-21T02:34:06Z) - On Computing Probabilistic Abductive Explanations [30.325691263226968]
The most widely studied explainable AI (XAI) approaches are unsound.
PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size.
This paper investigates practical approaches for computing relevant sets for a number of widely used classifiers.
arXiv Detail & Related papers (2022-12-12T15:47:10Z) - Don't Explain Noise: Robust Counterfactuals for Randomized Ensembles [50.81061839052459]
We formalize the generation of robust counterfactual explanations as a probabilistic problem.
We show the link between the robustness of ensemble models and the robustness of base learners.
Our method achieves high robustness with only a small increase in the distance from counterfactual explanations to their initial observations.
arXiv Detail & Related papers (2022-05-27T17:28:54Z) - On the Tractability of SHAP Explanations [40.829629145230356]
SHAP explanations are a popular feature-attribution mechanism for explainable AI.
We show that the complexity of computing the SHAP explanation is the same as the complexity of computing the expected value of the model.
Going beyond fully-factorized distributions, we show that computing SHAP explanations is already intractable for a very simple setting.
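To ground the complexity claim: under a fully factorized (independent-feature) distribution, the SHAP value only requires expectations of the model over coalitions. The brute-force sketch below (our toy example, exponential enumeration for illustration only) computes those expectations directly.

```python
# Exact SHAP scores under a fully factorized distribution: the value of
# a coalition S is E[f(X) | X_S = x_S] with independent features.
from itertools import product
from math import factorial

def shap_values(f, x, marginals):
    """marginals[i][v] = P(X_i = v); features assumed independent."""
    n = len(x)

    def value(S):
        free = [i for i in range(n) if i not in S]
        total = 0.0
        for vals in product(*(list(marginals[i]) for i in free)):
            z, w = list(x), 1.0
            for i, v in zip(free, vals):
                z[i] = v
                w *= marginals[i][v]
            total += w * f(tuple(z))
        return total

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        s = 0.0
        for mask in product((0, 1), repeat=n - 1):
            S = {j for j, b in zip(others, mask) if b}
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            s += w * (value(S | {i}) - value(S))
        phi.append(s)
    return phi

# Toy conjunction f(x) = x0 AND x1, uniform independent marginals
f = lambda x: float(x[0] and x[1])
marg = [{0: 0.5, 1: 0.5}] * 2
print(shap_values(f, (1, 1), marg))  # [0.375, 0.375]; sums to f(x) - E[f]
```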
arXiv Detail & Related papers (2020-09-18T05:48:15Z) - Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z) - A Theory of Usable Information Under Computational Constraints [103.5901638681034]
We propose a new framework for reasoning about information in complex systems.
Our foundation is based on a variational extension of Shannon's information theory.
We show that by incorporating computational constraints, $\mathcal{V}$-information can be reliably estimated from data.
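For reference, predictive $\mathcal{V}$-information restricts the observer to a predictive family $\mathcal{V}$ of allowed models; a compact restatement of the definition follows (our rendering of the paper's standard formulation, worth checking against the source):

```latex
% V-entropy: best expected log-loss achievable by predictors in V,
% with and without access to the side information X.
H_{\mathcal{V}}(Y \mid X) = \inf_{f \in \mathcal{V}} \mathbb{E}_{x,y}\bigl[-\log f[x](y)\bigr],
\qquad
H_{\mathcal{V}}(Y \mid \varnothing) = \inf_{f \in \mathcal{V}} \mathbb{E}_{y}\bigl[-\log f[\varnothing](y)\bigr]

% V-information: the usable information X carries about Y for observers in V.
% Taking V to be all predictors recovers Shannon mutual information.
I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X)
```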
arXiv Detail & Related papers (2020-02-25T06:09:30Z)