Variable Importance in High-Dimensional Settings Requires Grouping
- URL: http://arxiv.org/abs/2312.10858v1
- Date: Mon, 18 Dec 2023 00:21:47 GMT
- Title: Variable Importance in High-Dimensional Settings Requires Grouping
- Authors: Ahmad Chamma (1,2,3), Bertrand Thirion (1,2,3), Denis A. Engemann (4)
  ((1) Inria, (2) Université Paris-Saclay, (3) CEA, (4) Roche Pharma Research and
  Early Development, Neuroscience and Rare Diseases, Roche Innovation Center
  Basel, F. Hoffmann-La Roche Ltd., Basel, Switzerland)
- Abstract summary: Conditional Permutation Importance (CPI) bypasses PI's limitations in such cases.
Grouping variables, either statistically via clustering or based on prior knowledge, regains some statistical power.
We show that the approach extended with stacking controls the type-I error even with highly correlated groups.
- Score: 19.095605415846187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explaining the decision process of machine learning algorithms is nowadays
crucial both for enhancing model performance and for human comprehension. This
can be achieved by assessing the importance of single variables, even for
high-capacity non-linear methods, e.g. Deep Neural Networks (DNNs). While only
removal-based approaches, such as Permutation Importance (PI), can bring
statistical validity, they return misleading results when variables are
correlated. Conditional Permutation Importance (CPI) bypasses PI's limitations
in such cases. However, in high-dimensional settings, where high correlations
between the variables cancel their conditional importance, the use of CPI as
well as other methods leads to unreliable results, in addition to prohibitive
computation costs. Grouping variables, either statistically via clustering or
based on prior knowledge, regains some statistical power and leads to better
interpretations. In this work, we introduce BCPI (Block-Based Conditional
Permutation Importance), a new generic framework for variable importance
computation with statistical guarantees that handles both the single-variable
and group cases. Furthermore, as handling groups with high cardinality (such as
a set of observations of a given modality) is both time-consuming and
resource-intensive, we also introduce a new stacking approach that extends the
DNN architecture with sub-linear layers adapted to the group structure. We show
that the ensuing approach, extended with stacking, controls the type-I error
even with highly correlated groups and achieves top accuracy across benchmarks.
Furthermore, we perform a real-world data analysis on a large-scale medical
dataset, showing that our results are consistent with the literature for a
biomarker prediction task.
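The group-wise conditional permutation described in the abstract can be sketched as follows. This is a minimal illustration of the idea, not the paper's exact BCPI procedure: the function name `block_cpi`, the use of a linear conditional model, and the squared-error loss are all assumptions made for the sketch.

```python
# Sketch of block-based conditional permutation importance (hypothetical
# simplification of BCPI): for each variable group, reconstruct it from the
# remaining variables, shuffle only the residuals, and measure the loss
# increase of an already-fitted predictive model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def block_cpi(model, X, y, groups, rng=None):
    """groups: dict mapping group name -> list of column indices."""
    rng = np.random.default_rng(rng)
    base_loss = mean_squared_error(y, model.predict(X))
    importances = {}
    for name, cols in groups.items():
        rest = [j for j in range(X.shape[1]) if j not in cols]
        # Conditional model: predict the group from the remaining variables
        cond = LinearRegression().fit(X[:, rest], X[:, cols])
        fitted = cond.predict(X[:, rest])
        resid = X[:, cols] - fitted
        X_perm = X.copy()
        # Shuffling residuals breaks the group's own signal while preserving
        # its dependence on the other variables (unlike plain permutation)
        X_perm[:, cols] = fitted + resid[rng.permutation(len(X))]
        importances[name] = mean_squared_error(y, model.predict(X_perm)) - base_loss
    return importances
```

With independent columns this reduces to ordinary permutation importance; with correlated columns, the conditional reconstruction is what keeps correlated-but-unimportant groups from appearing important.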
Related papers
- Generative Principal Component Regression via Variational Inference [2.4415762506639944]
One approach to designing appropriate manipulations is to target key features of predictive models.
We develop a novel objective based on supervised variational autoencoders (SVAEs) that enforces such information is represented in the latent space.
We show in simulations that generative PCR (gPCR) dramatically improves target selection in manipulation as compared to standard PCR and SVAEs.
arXiv Detail & Related papers (2024-09-03T22:38:55Z) - Targeted Cause Discovery with Data-Driven Learning [66.86881771339145]
We propose a novel machine learning approach for inferring causal variables of a target variable from observations.
We employ a neural network trained to identify causality through supervised learning on simulated data.
Empirical results demonstrate the effectiveness of our method in identifying causal relationships within large-scale gene regulatory networks.
arXiv Detail & Related papers (2024-08-29T02:21:11Z) - CAVIAR: Categorical-Variable Embeddings for Accurate and Robust Inference [0.2209921757303168]
Social science research often hinges on the relationship between categorical variables and outcomes.
We introduce CAVIAR, a novel method for embedding categorical variables that assume values in a high-dimensional ambient space but are sampled from an underlying manifold.
arXiv Detail & Related papers (2024-04-07T14:47:07Z) - Statistically Valid Variable Importance Assessment through Conditional
Permutations [19.095605415846187]
Conditional Permutation Importance is a new approach to variable importance assessment.
We show that CPI overcomes the limitations of standard permutation importance by providing accurate type-I error control.
Our results suggest that CPI can be readily used as a drop-in replacement for permutation-based methods.
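The type-I error control mentioned in the summary above comes from attaching a statistical test to the conditional permutation. A minimal sketch of one way to do this (a per-sample loss difference plus a one-sided t-test; the helper `cpi_pvalue` and the linear conditional model are illustrative assumptions, not the paper's exact construction):

```python
# Hypothetical single-variable CPI with a p-value: permute the residuals of
# the tested column given the others, then t-test the per-sample increase in
# squared error of a fitted model against zero.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

def cpi_pvalue(model, X, y, col, rng=None):
    rng = np.random.default_rng(rng)
    rest = [j for j in range(X.shape[1]) if j != col]
    # Conditional reconstruction of the tested column from the others
    cond = LinearRegression().fit(X[:, rest], X[:, col])
    fitted = cond.predict(X[:, rest])
    resid = X[:, col] - fitted
    X_perm = X.copy()
    X_perm[:, col] = fitted + resid[rng.permutation(len(X))]
    # Per-sample increase in squared error after conditional permutation
    delta = (y - model.predict(X_perm)) ** 2 - (y - model.predict(X)) ** 2
    t, p = stats.ttest_1samp(delta, 0.0, alternative="greater")
    return delta.mean(), p
```

An unimportant variable yields a mean loss difference near zero and a non-significant p-value, which is what keeps the false-positive rate at the nominal level.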
arXiv Detail & Related papers (2023-09-14T10:53:36Z) - ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z) - Determination of class-specific variables in nonparametric
multiple-class classification [0.0]
We propose a probability-based nonparametric multiple-class classification method, and integrate it with the ability to identify high-impact variables for individual classes.
We report the properties of the proposed method, and use both synthesized and real data sets to illustrate its properties under different classification situations.
arXiv Detail & Related papers (2022-05-07T10:08:58Z) - Equivariance Allows Handling Multiple Nuisance Variables When Analyzing
Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z) - Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only bias of the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z) - Triplot: model agnostic measures and visualisations for variable
importance in predictive models that take into account the hierarchical
correlation structure [3.0036519884678894]
We propose new methods to support model analysis by exploiting the information about the correlation between variables.
We show how to analyze groups of variables (aspects) both when they are proposed by the user and when they should be determined automatically.
We also present the new type of model visualisation, triplot, which exploits a hierarchical structure of variable grouping to produce a high information density model visualisation.
arXiv Detail & Related papers (2021-04-07T21:29:03Z) - Generalized Matrix Factorization: efficient algorithms for fitting
generalized linear latent variable models to large data arrays [62.997667081978825]
Generalized Linear Latent Variable models (GLLVMs) generalize such factor models to non-Gaussian responses.
Current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets.
We propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood.
arXiv Detail & Related papers (2020-10-06T04:28:19Z) - Repulsive Mixture Models of Exponential Family PCA for Clustering [127.90219303669006]
The mixture extension of exponential family principal component analysis (EPCA) was designed to encode much more structural information about data distribution than the traditional EPCA.
The traditional mixture of local EPCAs has the problem of model redundancy, i.e., overlaps among mixing components, which may cause ambiguity for data clustering.
In this paper, a repulsiveness-encouraging prior is introduced among mixing components and a diversified EPCA mixture (DEPCAM) model is developed in the Bayesian framework.
arXiv Detail & Related papers (2020-04-07T04:07:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.