Model agnostic local variable importance for locally dependent relationships
- URL: http://arxiv.org/abs/2411.08821v1
- Date: Wed, 13 Nov 2024 17:59:44 GMT
- Title: Model agnostic local variable importance for locally dependent relationships
- Authors: Kelvyn K. Bladen, Adele Cutler, D. Richard Cutler, Kevin R. Moon
- Abstract summary: We propose a new model-agnostic method for calculating local variable importance, CLIQUE, that captures locally dependent relationships.
We show that CLIQUE emphasizes locally dependent information and properly reduces bias in regions where variables do not affect the response.
- Score: 2.3374134413353254
- Abstract: Global variable importance measures are commonly used to interpret machine learning model results. Local variable importance techniques assess how variables contribute to individual observations rather than the entire dataset. Current methods typically fail to accurately reflect locally dependent relationships between variables and instead focus on marginal importance values. Additionally, they are not natively adapted for multi-class classification problems. We propose a new model-agnostic method for calculating local variable importance, CLIQUE, that captures locally dependent relationships, contains improvements over permutation-based methods, and can be directly applied to multi-class classification problems. Simulated and real-world examples show that CLIQUE emphasizes locally dependent information and properly reduces bias in regions where variables do not affect the response.
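For context, the sketch below illustrates the kind of marginal, permutation-based local importance measure the abstract contrasts CLIQUE against. It is a hypothetical baseline implementation, not the CLIQUE algorithm; the function name, model choice, and simulated data are illustrative assumptions. Each variable of a single observation is replaced with draws from its marginal distribution and the resulting increase in prediction error is recorded, so dependence among variables is ignored, which is the limitation the paper argues leads to biased local importances.

```python
# Minimal sketch of *marginal* permutation-based local variable importance,
# the baseline that CLIQUE is designed to improve upon. This is NOT the
# CLIQUE algorithm itself; names and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def local_permutation_importance(model, X, x_obs, y_obs, n_repeats=50, rng=None):
    """Importance of each variable for one observation: replace its value with
    draws from the marginal distribution in X and measure the increase in
    squared error at that observation."""
    rng = np.random.default_rng(rng)
    base_err = (model.predict(x_obs.reshape(1, -1))[0] - y_obs) ** 2
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errs = []
        for _ in range(n_repeats):
            x_perm = x_obs.copy()
            # Marginal replacement ignores dependence on the other variables,
            # which is exactly the issue discussed in the abstract.
            x_perm[j] = rng.choice(X[:, j])
            errs.append((model.predict(x_perm.reshape(1, -1))[0] - y_obs) ** 2)
        importances[j] = np.mean(errs) - base_err
    return importances

# Example usage on simulated data with a locally dependent effect of X1
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] * (X[:, 1] > 0) + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(random_state=0).fit(X, y)
print(local_permutation_importance(model, X, X[0], y[0]))
```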
Related papers
- Adaptive Global-Local Representation Learning and Selection for Cross-Domain Facial Expression Recognition [54.334773598942775]
Domain shift poses a significant challenge in Cross-Domain Facial Expression Recognition (CD-FER).
We propose an Adaptive Global-Local Representation Learning and Selection framework.
arXiv Detail & Related papers (2024-01-20T02:21:41Z) - Variable Importance in High-Dimensional Settings Requires Grouping [19.095605415846187]
Conditional Permutation Importance (CPI) bypasses the limitations of standard Permutation Importance (PI) in such high-dimensional cases.
Grouping variables statistically, via clustering or prior knowledge, recovers some of the lost statistical power.
We show that the approach extended with stacking controls the type-I error even with highly-correlated groups.
arXiv Detail & Related papers (2023-12-18T00:21:47Z) - Think Twice Before Selection: Federated Evidential Active Learning for Medical Image Analysis with Domain Shifts [11.562953837452126]
We make the first attempt to assess the informativeness of local data derived from diverse domains.
We propose a novel methodology termed Federated Evidential Active Learning (FEAL) to calibrate the data evaluation under domain shift.
arXiv Detail & Related papers (2023-12-05T08:32:27Z) - Locally Adaptive and Differentiable Regression [10.194448186897906]
We propose a general framework for constructing a global, continuous, and differentiable model from a weighted average of locally learned models in corresponding local regions.
We demonstrate that when we mix kernel ridge and regression terms in the local models, and stitch them together continuously, we achieve faster statistical convergence in theory and improved performance in various practical settings.
arXiv Detail & Related papers (2023-08-14T19:12:40Z) - Beyond Normal: On the Evaluation of Mutual Information Estimators [52.85079110699378]
We show how to construct a diverse family of distributions with known ground-truth mutual information.
We provide guidelines for practitioners on how to select an appropriate estimator adapted to the difficulty of the problem considered.
arXiv Detail & Related papers (2023-06-19T17:26:34Z) - Change Detection for Local Explainability in Evolving Data Streams [72.4816340552763]
Local feature attribution methods have become a popular technique for post-hoc and model-agnostic explanations.
It is often unclear how local attributions behave in realistic, constantly evolving settings such as streaming and online applications.
We present CDLEEDS, a flexible and model-agnostic framework for detecting local change and concept drift.
arXiv Detail & Related papers (2022-09-06T18:38:34Z) - Predicting Out-of-Domain Generalization with Neighborhood Invariance [59.05399533508682]
We propose a measure of a classifier's output invariance in a local transformation neighborhood.
Our measure is simple to calculate, does not depend on the test point's true label, and can be applied even in out-of-domain (OOD) settings.
In experiments on benchmarks in image classification, sentiment analysis, and natural language inference, we demonstrate a strong and robust correlation between our measure and actual OOD generalization.
arXiv Detail & Related papers (2022-07-05T14:55:16Z) - State, global and local parameter estimation using local ensemble Kalman filters: applications to online machine learning of chaotic dynamics [0.0]
We more systematically investigate the possibility of using a local ensemble Kalman filter with either covariance localization or local domains.
Global parameters are meant to represent the surrogate dynamics, while the local parameters typically stand for the forcings of the model.
This paper more generally addresses the key question of online estimation of both global and local model parameters.
arXiv Detail & Related papers (2021-07-23T14:12:20Z) - Triplot: model agnostic measures and visualisations for variable importance in predictive models that take into account the hierarchical correlation structure [3.0036519884678894]
We propose new methods to support model analysis by exploiting information about the correlation between variables.
We show how to analyze groups of variables (aspects) both when they are proposed by the user and when they should be determined automatically.
We also present a new type of model visualisation, triplot, which exploits the hierarchical structure of variable grouping to produce a high-information-density model visualisation.
arXiv Detail & Related papers (2021-04-07T21:29:03Z) - LOGAN: Local Group Bias Detection by Clustering [86.38331353310114]
We argue that evaluating bias at the corpus level is not enough for understanding how biases are embedded in a model.
We propose LOGAN, a new bias detection technique based on clustering.
Experiments on toxicity classification and object classification tasks show that LOGAN identifies bias in a local region.
arXiv Detail & Related papers (2020-10-06T16:42:51Z) - Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.