Explanatory causal effects for model agnostic explanations
- URL: http://arxiv.org/abs/2206.11529v1
- Date: Thu, 23 Jun 2022 08:25:31 GMT
- Title: Explanatory causal effects for model agnostic explanations
- Authors: Jiuyong Li, Ha Xuan Tran, Thuc Duy Le, Lin Liu, Kui Yu, Jixue Liu
- Abstract summary: We study the problem of estimating the contributions of features to the prediction of a specific instance by a machine learning model.
A challenge is that most existing causal effects cannot be estimated from data without a known causal graph.
We define an explanatory causal effect based on a hypothetical ideal experiment.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the problem of estimating the contributions of features to
the prediction of a specific instance by a machine learning model and the
overall contribution of a feature to the model. The causal effect of a feature
(variable) on the predicted outcome reflects the contribution of the feature to
a prediction very well. A challenge is that most existing causal effects cannot
be estimated from data without a known causal graph. In this paper, we define
an explanatory causal effect based on a hypothetical ideal experiment. The
definition brings several benefits to model-agnostic explanations. First, the
explanations are transparent and have clear causal meaning. Second, the explanatory
causal effect estimation can be data driven. Third, the causal effects provide
both a local explanation for a specific prediction and a global explanation
showing the overall importance of a feature in a predictive model. We further
propose a method using individual and combined variables based on explanatory
causal effects for explanations. We show that the definition and the method
work well in experiments on real-world data sets.
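As a rough, hedged illustration of the general idea only (the paper's own explanatory causal effect is defined via a hypothetical ideal experiment and is more involved than this): a feature's contribution to one prediction can be estimated model-agnostically by intervening on that feature and comparing predictions. The `interventional_contribution` function below is a hypothetical, generic interventional attribution, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the outcome depends strongly on x0, weakly on x1, not on x2.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Any black-box predictor works; ordinary least squares is a stand-in here.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda Z: Z @ coef

def interventional_contribution(predict, X_background, x, j):
    """Estimate feature j's contribution to the prediction at x by
    intervening on x_j: compare predict(x) with the mean prediction
    when x_j is replaced by background values of that feature."""
    X_int = np.tile(x, (len(X_background), 1))
    X_int[:, j] = X_background[:, j]  # do(x_j := background value)
    return predict(x[None, :])[0] - predict(X_int).mean()

x = np.array([1.0, 1.0, 1.0])

# Local explanation: per-feature contribution for this one instance.
local = [interventional_contribution(predict, X, x, j) for j in range(3)]

# Global explanation: mean absolute local contribution across instances.
glob = [np.mean([abs(interventional_contribution(predict, X, X[i], j))
                 for i in range(100)]) for j in range(3)]
```

The same quantity yields both a local explanation (the contribution at one instance) and a global one (its average magnitude over the data), mirroring the local/global duality described in the abstract.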
Related papers
- Counterfactual explainability of black-box prediction models [4.14360329494344]
We propose a new notion called counterfactual explainability for black-box prediction models.
Counterfactual explainability has three key advantages.
arXiv Detail & Related papers (2024-11-03T16:29:09Z)
- Linking Model Intervention to Causal Interpretation in Model Explanation [34.21877996496178]
We will study the conditions when an intuitive model intervention effect has a causal interpretation.
This work links the model intervention effect to the causal interpretation of a model.
Experiments on semi-synthetic datasets have been conducted to validate theorems and show the potential for using the model intervention effect for model interpretation.
arXiv Detail & Related papers (2024-10-21T05:16:59Z)
- Linking a predictive model to causal effect estimation [21.869233469885856]
This paper first tackles the challenge of estimating the causal effect of any feature (as the treatment) on the outcome w.r.t. a given instance.
The theoretical results naturally link a predictive model to causal effect estimations and imply that a predictive model is causally interpretable.
We use experiments to demonstrate that various types of predictive models, when satisfying the conditions identified in this paper, can estimate the causal effects of features as accurately as state-of-the-art causal effect estimation methods.
arXiv Detail & Related papers (2023-04-10T13:08:16Z)
- CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- On Shapley Credit Allocation for Interpretability [1.52292571922932]
We emphasize the importance of asking the right question when interpreting the decisions of a learning model.
This paper quantifies feature relevance by weaving different natures of interpretations together with different measures as characteristic functions for Shapley symmetrization.
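The entry above frames attribution as Shapley credit allocation with interpretation measures as characteristic functions. As a generic sketch of that machinery only (the characteristic function below is a hypothetical additive toy payoff, not one of that paper's measures), exact Shapley values average each player's marginal contribution over all coalitions:

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for n players, given a characteristic
    function `value` mapping a frozenset of player indices to a payoff."""
    phi = np.zeros(n)
    for j in range(n):
        others = [p for p in range(n) if p != j]
        for size in range(n):
            for S in combinations(others, size):
                S = frozenset(S)
                # Weight of coalition S in player j's Shapley average.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[j] += w * (value(S | {j}) - value(S))
    return phi

# Toy additive game: Shapley recovers each player's own contribution.
contrib = {0: 3.0, 1: 1.0, 2: -0.5}
value = lambda S: sum(contrib[p] for p in S)
phi = shapley_values(value, 3)
```

For an additive game like this toy one, the Shapley values equal the individual contributions exactly; the interesting cases arise when the characteristic function captures feature interactions.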
arXiv Detail & Related papers (2020-12-10T08:25:32Z)
- Debiasing Concept-based Explanations with Causal Analysis [4.911435444514558]
We study the problem of the concepts being correlated with confounding information in the features.
We propose a new causal prior graph for modeling the impacts of unobserved variables.
We show that our debiasing method works when the concepts are not complete.
arXiv Detail & Related papers (2020-07-22T15:42:46Z)
- Causal Discovery in Physical Systems from Videos [123.79211190669821]
Causal discovery is at the core of human cognition.
We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure.
arXiv Detail & Related papers (2020-07-01T17:29:57Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.