Feature Importance versus Feature Influence and What It Signifies for Explainable AI
- URL: http://arxiv.org/abs/2308.03589v1
- Date: Mon, 7 Aug 2023 13:46:18 GMT
- Title: Feature Importance versus Feature Influence and What It Signifies for Explainable AI
- Authors: Kary Främling
- Abstract summary: Feature importance should not be confused with the feature influence used by most state-of-the-art post-hoc Explainable AI methods.
The Contextual Importance and Utility (CIU) method provides a unified definition of global and local feature importance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When used in the context of decision theory, feature importance expresses how
much changing the value of a feature can change the model outcome (or the
utility of the outcome), compared to other features. Feature importance should
not be confused with the feature influence used by most state-of-the-art
post-hoc Explainable AI methods. Contrary to feature importance, feature
influence is measured against a reference level or baseline. The Contextual
Importance and Utility (CIU) method provides a unified definition of global and
local feature importance that is also applicable to post-hoc explanations,
where the value utility concept provides an instance-level assessment of how
favorable or unfavorable a feature value is for the outcome. The paper shows how CIU
can be applied to both global and local explainability, assesses the fidelity
and stability of different methods, and shows how explanations that use
contextual importance and contextual utility can provide more expressive and
flexible explanations than when using influence only.
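As a concrete illustration of these two quantities, the sketch below estimates contextual importance (CI) and contextual utility (CU) for a single feature by sweeping it over its value range while the other feature values of the explained instance stay fixed. The function name, the grid-sweep estimator, and the default `absmin`/`absmax` bounds (0 and 1, as for a class probability) are illustrative assumptions, not the reference CIU implementation.

```python
# Minimal sketch of contextual importance (CI) and contextual utility (CU)
# for one feature of one instance; names and the grid-sweep estimator are
# illustrative, not the reference CIU implementation.
import numpy as np

def contextual_importance_utility(predict, instance, feature, grid,
                                  absmin=0.0, absmax=1.0):
    """predict : callable mapping a 2-D array to 1-D output scores.
    instance: 1-D array, the instance being explained.
    feature : column index of the feature to assess.
    grid    : 1-D array of values covering the feature's allowed range.
    absmin, absmax : smallest and largest values the output can take."""
    # Model output for the instance as it is.
    y = float(predict(instance.reshape(1, -1))[0])

    # Vary only the chosen feature; all other features keep the instance's values.
    variants = np.tile(instance, (len(grid), 1))
    variants[:, feature] = grid
    outputs = np.asarray(predict(variants), dtype=float)
    ymin, ymax = outputs.min(), outputs.max()

    # Contextual Importance: share of the output's full range [absmin, absmax]
    # that this feature can span in this context.
    ci = (ymax - ymin) / (absmax - absmin)

    # Contextual Utility: how favorable the current value is, relative to the
    # worst and best outputs reachable by changing this feature alone.
    cu = (y - ymin) / (ymax - ymin) if ymax > ymin else 0.5
    return ci, cu
```

Note that no baseline instance enters the computation: CI is defined by the share of the output range the feature can span in this context, and CU by where the current output falls within that span, which is what separates this notion of importance from baseline-relative influence scores.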
Related papers
- Lost in Context: The Influence of Context on Feature Attribution Methods for Object Recognition [4.674826882670651]
This study investigates how context manipulation influences both model accuracy and feature attribution.
We employ a range of feature attribution techniques to decipher the reliance of deep neural networks on context in object recognition tasks.
arXiv Detail & Related papers (2024-11-05T06:13:01Z)
- Introducing User Feedback-based Counterfactual Explanations (UFCE) [49.1574468325115]
Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in XAI.
UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features.
UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility.
arXiv Detail & Related papers (2024-02-26T20:09:44Z)
- Context-LGM: Leveraging Object-Context Relation for Context-Aware Object Recognition [48.5398871460388]
We propose a novel Contextual Latent Generative Model (Context-LGM), which considers the object-context relation and models it in a hierarchical manner.
To infer contextual features, we reformulate the objective function of the Variational Auto-Encoder (VAE), where contextual features are learned as a posterior distribution conditioned on the object.
The effectiveness of our method is verified by state-of-the-art performance on two context-aware object recognition tasks.
arXiv Detail & Related papers (2021-10-08T11:31:58Z)
- Direct Advantage Estimation [63.52264764099532]
We show that the expected return may depend on the policy in an undesirable way which could slow down learning.
We propose Direct Advantage Estimation (DAE), a novel method that models the advantage function and estimates it directly from data.
If desired, value functions can also be seamlessly integrated into DAE and be updated in a similar way to Temporal Difference Learning.
arXiv Detail & Related papers (2021-09-13T16:09:31Z)
- Comparing interpretability and explainability for feature selection [0.6015898117103068]
We investigate the performance of variable importance as a feature selection method across various black-box and interpretable machine learning methods.
The results show that regardless of whether we use the native variable importance method or SHAP, XGBoost fails to clearly distinguish between relevant and irrelevant features (a rough version of this comparison is sketched after this list).
arXiv Detail & Related papers (2021-05-11T20:01:23Z)
- A-FMI: Learning Attributions from Deep Networks via Feature Map Importance [58.708607977437794]
Gradient-based attribution methods can aid in the understanding of convolutional neural networks (CNNs).
The redundancy of attribution features and the gradient saturation problem are challenges that attribution methods still face.
We propose a new concept, feature map importance (FMI), to refine the contribution of each feature map, and a novel attribution method via FMI, to address the gradient saturation problem.
arXiv Detail & Related papers (2021-04-12T14:54:44Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of the designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Bayesian Importance of Features (BIF) [11.312036995195594]
We use the Dirichlet distribution to define the importance of input features and learn it via approximate Bayesian inference.
The learned importance has a probabilistic interpretation and provides the relative significance of each input feature to a model's output.
We show the effectiveness of our method on a variety of synthetic and real datasets.
arXiv Detail & Related papers (2020-10-26T19:55:58Z)
- Nonparametric Feature Impact and Importance [0.6123324869194193]
We give mathematical definitions of feature impact and importance, derived from partial dependence curves, that operate directly on the data.
To assess quality, we show that features ranked by these definitions are competitive with existing feature selection techniques.
arXiv Detail & Related papers (2020-06-08T17:07:35Z)
- Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions [55.660255727031725]
Influence functions explain the decisions of a model by identifying influential training examples.
We conduct a comparison between influence functions and common word-saliency methods on representative tasks.
We develop a new measure based on influence functions that can reveal artifacts in training data.
arXiv Detail & Related papers (2020-05-14T00:45:23Z)
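As referenced in the feature-selection entry above, a rough, hypothetical reconstruction of the XGBoost importance comparison might look as follows; the data generation, model settings, and use of mean absolute SHAP values are assumptions for illustration, not the cited paper's setup.

```python
# Hypothetical check of whether importance scores separate relevant from
# irrelevant features: 5 informative features followed by 5 pure-noise
# features (shuffle=False keeps that column order).
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)

model = xgboost.XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)

# Native importance: one gain-based score per feature.
native = model.feature_importances_

# SHAP: mean absolute attribution per feature over the whole dataset
# (a single 2-D array for a binary XGBoost classifier).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap_mean = np.abs(np.asarray(shap_values)).mean(axis=0)

for name, scores in [("native", native), ("shap", shap_mean)]:
    ranking = np.argsort(scores)[::-1]  # features from most to least important
    print(name, "ranking:", ranking)
```

If the noise features (indices 5-9) mix into the top of either ranking, the importance scores do not separate relevant from irrelevant features, which is the behavior that entry reports; exact outcomes depend on hyperparameters and the random seed.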
This list is automatically generated from the titles and abstracts of the papers on this site.