GLEAMS: Bridging the Gap Between Local and Global Explanations
- URL: http://arxiv.org/abs/2408.05060v1
- Date: Fri, 9 Aug 2024 13:30:37 GMT
- Title: GLEAMS: Bridging the Gap Between Local and Global Explanations
- Authors: Giorgio Visani, Vincenzo Stanzione, Damien Garreau
- Abstract summary: We propose GLEAMS, a novel method that partitions the input space and learns an interpretable model within each sub-region.
We demonstrate GLEAMS' effectiveness on both synthetic and real-world data, highlighting its desirable properties and human-understandable insights.
- Score: 6.329021279685856
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The explainability of machine learning algorithms is crucial, and numerous methods have emerged recently. Local, post-hoc methods assign an attribution score to each feature, indicating its importance for the prediction. However, these methods require recalculating explanations for each example. On the other hand, while global approaches exist, they often produce explanations that are either overly simplistic and unreliable or excessively complex. To bridge this gap, we propose GLEAMS, a novel method that partitions the input space and learns an interpretable model within each sub-region, thereby providing both faithful local and global surrogates. We demonstrate GLEAMS' effectiveness on both synthetic and real-world data, highlighting its desirable properties and human-understandable insights.
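As a rough illustration of the partition-and-fit idea the abstract describes (a minimal sketch, not the authors' GLEAMS algorithm, whose partitioning scheme is defined in the paper), one can split the input space with a shallow decision tree and fit a linear surrogate inside each leaf:

```python
# Hedged sketch: approximate a black-box model with per-region linear surrogates.
# This is NOT the GLEAMS algorithm itself, only the generic partition-and-fit idea.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 2))
black_box = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2   # stand-in for any model

# 1) Partition the input space by fitting a shallow tree to the model's outputs.
tree = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, black_box(X))
leaves = tree.apply(X)

# 2) Fit one interpretable (linear) surrogate per sub-region.
surrogates = {
    leaf: LinearRegression().fit(X[leaves == leaf], black_box(X)[leaves == leaf])
    for leaf in np.unique(leaves)
}

def explain(x):
    """Local explanation: the linear coefficients of x's region."""
    leaf = tree.apply(x.reshape(1, -1))[0]
    return surrogates[leaf].coef_

print(explain(np.array([0.5, -1.0])))
```

Each region's coefficients answer local queries, while the collection of all regions gives the global view; bridging those two levels is the gap the abstract refers to.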
Related papers
- Enhancing Model Interpretability with Local Attribution over Global Exploration [6.3144983055172235]
Current attribution algorithms evaluate the importance of each parameter by exploring the sample space.
A large number of intermediate states are introduced during the exploration process, which may reach the model's Out-of-Distribution (OOD) space.
We propose the Local Attribution (LA) algorithm, which keeps exploration close to the sample so that intermediate states remain in-distribution.
Compared to state-of-the-art attribution methods, our approach achieves an average improvement of 38.21% in attribution effectiveness.
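The summary is too terse to reconstruct the LA algorithm itself, but the OOD concern it raises can be illustrated with a generic attribution that integrates gradients only along a short path near the input (a hypothetical sketch; `local_attribution` and its `radius` parameter are illustrative names, not the paper's method):

```python
# Hedged illustration of "local" attribution: integrate gradients only over a
# short path near x, so intermediate points stay close to the data distribution.
# Generic sketch only; not the LA algorithm from the paper.
import numpy as np

f = lambda x: np.tanh(x[0]) * x[1]          # stand-in model

def grad(f, x, eps=1e-5):                   # finite-difference gradient
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def local_attribution(f, x, radius=0.1, steps=16):
    baseline = x * (1 - radius)             # nearby baseline, not a distant zero vector
    path = [baseline + t * (x - baseline) for t in np.linspace(0, 1, steps)]
    avg_grad = np.mean([grad(f, p) for p in path], axis=0)
    return (x - baseline) * avg_grad        # per-feature attribution

print(local_attribution(f, np.array([1.0, 2.0])))
```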
arXiv Detail & Related papers (2024-08-14T17:53:08Z)
- Local vs. Global Interpretability: A Computational Complexity Perspective [0.9558392439655016]
We use computational complexity theory to assess local and global perspectives of interpreting ML models.
Our findings offer insights into both the local and global interpretability of these models.
We believe that our findings demonstrate how examining explainability through a computational complexity lens can help us develop a more rigorous grasp of the inherent interpretability of ML models.
arXiv Detail & Related papers (2024-06-05T06:23:49Z)
- Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention [53.896974148579346]
Large Language Models (LLMs) have achieved unprecedented breakthroughs in various natural language processing domains.
The enigmatic "black-box" nature of LLMs remains a significant challenge for interpretability, hampering transparent and accountable applications.
We propose a novel methodology anchored in sparsity-guided techniques, aiming to provide a holistic interpretation of LLMs.
arXiv Detail & Related papers (2023-12-22T19:55:58Z)
- GLOBE-CE: A Translation-Based Approach for Global Counterfactual Explanations [10.276136171459731]
Global & Efficient Counterfactual Explanations (GLOBE-CE) is a flexible framework that tackles the reliability and scalability issues associated with current state-of-the-art methods.
We provide a unique mathematical analysis of categorical feature translations, utilising it in our method.
Experimental evaluation on publicly available datasets, together with user studies, demonstrates that GLOBE-CE performs significantly better than the current state of the art.
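The abstract hints at the core mechanism: one shared translation direction, scaled per instance until the prediction flips, yields a group-level counterfactual. A hedged toy version for numeric features follows (illustrative only; the actual GLOBE-CE procedure, including its categorical-feature analysis, is defined in the paper):

```python
# Hedged sketch of a translation-based global counterfactual: one shared
# direction, scaled per instance until the prediction flips.
# Illustrative only; not the real GLOBE-CE method.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
predict = lambda X: (X @ np.array([1.0, -0.5]) > 0).astype(int)  # toy model

rejected = X[predict(X) == 0]                # instances needing recourse
direction = np.array([1.0, -0.5])            # one global translation direction
direction /= np.linalg.norm(direction)

def min_scale(x, direction, ks=np.linspace(0, 5, 101)):
    """Smallest scaling of the shared direction that flips x's prediction."""
    for k in ks:
        if predict((x + k * direction).reshape(1, -1))[0] == 1:
            return k
    return None

scales = [min_scale(x, direction) for x in rejected]
print("coverage:", np.mean([s is not None for s in scales]))
```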
arXiv Detail & Related papers (2023-05-26T15:26:59Z)
- Coalescing Global and Local Information for Procedural Text Understanding [70.10291759879887]
A complete procedural understanding solution should combine three core aspects: local and global views of the inputs, and a global view of the outputs.
In this paper, we propose Coalescing Global and Local Information (CGLI), a new model that builds entity and time representations.
Experiments on a popular procedural text understanding dataset show that our model achieves state-of-the-art results.
arXiv Detail & Related papers (2022-08-26T19:16:32Z)
- Global Counterfactual Explanations: Investigations, Implementations and Improvements [12.343333815270402]
Actionable Recourse Summaries (AReS) is the only known global counterfactual explanation framework for recourse.
This paper focuses on implementing and improving AReS.
arXiv Detail & Related papers (2022-04-14T12:21:23Z)
- An Entropy-guided Reinforced Partial Convolutional Network for Zero-Shot Learning [77.72330187258498]
We propose a novel Entropy-guided Reinforced Partial Convolutional Network (ERPCNet).
ERPCNet extracts and aggregates localities based on semantic relevance and visual correlations without human-annotated regions.
It not only discovers global-cooperative localities dynamically but also converges faster for policy gradient optimization.
arXiv Detail & Related papers (2021-11-03T11:13:13Z)
- Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set [50.67431815647126]
Post-hoc global/local feature attribution methods are increasingly employed to understand machine learning models.
We show that partial orders of local/global feature importance arise from this methodology.
We show that every relation among features present in these partial orders also holds in the rankings provided by existing approaches.
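The consensus idea lends itself to a short sketch: keep the relation "feature i outranks feature j" only when every model in the (Rashomon-like) set agrees, which by construction yields a partial rather than total order (a minimal illustration, not the paper's formal construction):

```python
# Hedged sketch: consensus partial order over feature importances from several
# near-equally-good models. A relation (i, j) is kept only if ALL models agree
# that feature i is more important than feature j.
# Illustrative only; the paper's formal construction may differ.
import numpy as np

importances = np.array([      # rows: models in the Rashomon set, cols: features
    [0.50, 0.30, 0.15, 0.05],
    [0.45, 0.35, 0.12, 0.08],
    [0.55, 0.25, 0.14, 0.06],
])

n = importances.shape[1]
partial_order = {
    (i, j)
    for i in range(n) for j in range(n) if i != j
    and np.all(importances[:, i] > importances[:, j])
}
print(sorted(partial_order))  # e.g. (0, 1): feature 0 outranks feature 1 in every model
```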
arXiv Detail & Related papers (2021-10-26T02:53:14Z)
- Best of both worlds: local and global explanations with human-understandable concepts [10.155485106226754]
Interpretability techniques aim to provide the rationale behind a model's decision, typically by explaining either an individual prediction or a class of predictions.
We show that our method improves global explanations over TCAV when compared to ground truth, and provides useful insights.
arXiv Detail & Related papers (2021-06-16T09:05:25Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
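The split described in that last entry, private local representations plus a shared global part, can be sketched as a FedAvg variant in which only the shared head is averaged across devices (a toy sketch under that assumption, not the paper's actual training procedure):

```python
# Hedged sketch: federated averaging where each device keeps a private local
# encoder and only the shared global head is averaged across devices.
# Illustrative only; the paper's training procedure is more involved.
import numpy as np

rng = np.random.default_rng(2)
n_clients, d_in, d_rep = 3, 4, 2

# Local encoders stay on-device (kept fixed here for brevity).
local_encoders = [rng.normal(size=(d_in, d_rep)) for _ in range(n_clients)]
global_head = np.zeros(d_rep)                            # shared across devices

for _ in range(5):                                       # communication rounds
    updated_heads = []
    for c in range(n_clients):
        X = rng.normal(size=(32, d_in))                  # private client data
        y = X.sum(axis=1)                                # private targets
        Z = X @ local_encoders[c]                        # compact local representation
        head = np.linalg.lstsq(Z, y, rcond=None)[0]      # local update of shared head
        updated_heads.append(head)
    global_head = np.mean(updated_heads, axis=0)         # FedAvg on the head only

print("shared head after training:", global_head)
```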
This list is automatically generated from the titles and abstracts of the papers on this site.