Disentangling Interactions and Dependencies in Feature Attribution
- URL: http://arxiv.org/abs/2410.23772v1
- Date: Thu, 31 Oct 2024 09:41:10 GMT
- Title: Disentangling Interactions and Dependencies in Feature Attribution
- Authors: Gunnar König, Eric Günther, Ulrike von Luxburg
- Abstract summary: In machine learning, global feature importance methods try to determine how much each individual feature contributes to predicting a target variable.
In commonly used feature importance scores, cooperative effects arising from feature interactions and statistical dependencies are conflated with the features' individual contributions.
We derive DIP, a new mathematical decomposition of individual feature importance scores that disentangles three components.
- Score: 9.442326245744916
- License:
- Abstract: In explainable machine learning, global feature importance methods try to determine how much each individual feature contributes to predicting the target variable, resulting in one importance score per feature. But often, predicting the target variable requires interactions between several features (such as in the XOR function), and features might have complex statistical dependencies that allow one feature to be partially replaced by another. In commonly used feature importance scores, these cooperative effects are conflated with the features' individual contributions, making them prone to misinterpretation. In this work, we derive DIP, a new mathematical decomposition of individual feature importance scores that disentangles three components: the standalone contribution and the contributions stemming from interactions and dependencies. We prove that the DIP decomposition is unique and show how it can be estimated in practice. Based on these results, we propose a new visualization of feature importance scores that clearly illustrates the different contributions.
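To make the conflation problem concrete, the following is a minimal sketch, not the paper's DIP estimator: it assumes scikit-learn's permutation importance as the baseline score and a toy XOR target. Both features receive large importance values even though neither is predictive on its own, because the interaction contribution is folded into each feature's individual score.

```python
# Minimal illustration of the conflation problem (not the DIP method itself):
# on an XOR target, a standard importance score assigns high values to both
# features, although all of the predictive signal comes from their interaction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 2)).astype(float)   # two independent binary features
y = np.logical_xor(X[:, 0], X[:, 1]).astype(int)        # target is a pure interaction (XOR)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
scores = permutation_importance(model, X, y, n_repeats=20, random_state=0)

# Both entries are large, even though each feature alone carries no information
# about y; a DIP-style decomposition would attribute this to the interaction term.
print(scores.importances_mean)
```

A decomposition in the spirit of DIP would report this importance as an interaction component rather than as standalone contributions of the two features.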
Related papers
- A Unified Causal View of Instruction Tuning [76.1000380429553]
We develop a meta Structural Causal Model (meta-SCM) to integrate different NLP tasks under a single causal structure of the data.
The key idea is to learn task-required causal factors and use only those to make predictions for a given task.
arXiv Detail & Related papers (2024-02-09T07:12:56Z) - On the estimation of the number of components in multivariate functional principal component analysis [0.0]
We present extensive simulations to investigate choosing the number of principal components to retain.
We show empirically that the conventional approach of using a percentage of variance explained threshold for each univariate functional feature may be unreliable.
arXiv Detail & Related papers (2023-11-08T09:05:42Z) - Generalization Performance of Transfer Learning: Overparameterized and Underparameterized Regimes [61.22448274621503]
In real-world applications, tasks often exhibit partial similarity, where certain aspects are similar while others are different or irrelevant.
Our study explores various types of transfer learning, encompassing two options for parameter transfer.
We provide practical guidelines for determining the number of features in the common and task-specific parts for improved generalization performance.
arXiv Detail & Related papers (2023-06-08T03:08:40Z) - On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z) - Relational Local Explanations [11.679389861042]
We develop a novel model-agnostic and permutation-based feature attribution algorithm based on relational analysis between input variables.
This allows us to gain broader insight into machine learning model decisions and the underlying data.
arXiv Detail & Related papers (2022-12-23T14:46:23Z) - Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain-specific solutions to remain close to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z) - Grouped Feature Importance and Combined Features Effect Plot [2.15867006052733]
Interpretable machine learning has become a very active area of research due to the rising popularity of machine learning algorithms.
We provide a comprehensive overview of how existing model-agnostic techniques can be defined for feature groups to assess the grouped feature importance.
We introduce the combined features effect plot, which is a technique to visualize the effect of a group of features based on a sparse, interpretable linear combination of features.
arXiv Detail & Related papers (2021-04-23T16:27:38Z) - Interactive Fusion of Multi-level Features for Compositional Activity Recognition [100.75045558068874]
We present a novel framework that accomplishes compositional activity recognition through interactive fusion.
We implement the framework in three steps, namely, positional-to-appearance feature extraction, semantic feature interaction, and semantic-to-positional prediction.
We evaluate our approach on two action recognition datasets, Something-Something and Charades.
arXiv Detail & Related papers (2020-12-10T14:17:18Z) - Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems using Feature Importance Fusion [0.0]
We introduce a novel fusion metric and compare it to the state-of-the-art.
Our approach is tested on synthetic data, where the ground truth is known.
Results show that our feature importance ensemble framework produces, overall, 15% less feature importance error than existing methods.
arXiv Detail & Related papers (2020-09-11T15:51:52Z) - Nonparametric Feature Impact and Importance [0.6123324869194193]
We give mathematical definitions of feature impact and importance, derived from partial dependence curves, that operate directly on the data.
To assess quality, we show that features ranked by these definitions are competitive with existing feature selection techniques.
arXiv Detail & Related papers (2020-06-08T17:07:35Z) - Self-Attention Attribution: Interpreting Information Interactions Inside Transformer [89.21584915290319]
We propose a self-attention attribution method to interpret the information interactions inside Transformer.
We show that the attribution results can be used as adversarial patterns to implement non-targeted attacks towards BERT.
arXiv Detail & Related papers (2020-04-23T14:58:22Z)