Sum-of-Parts: Faithful Attributions for Groups of Features
- URL: http://arxiv.org/abs/2310.16316v2
- Date: Wed, 02 Oct 2024 23:37:28 GMT
- Title: Sum-of-Parts: Faithful Attributions for Groups of Features
- Authors: Weiqiu You, Helen Qu, Marco Gatti, Bhuvnesh Jain, Eric Wong
- Abstract summary: Sum-of-Parts (SOP) is a framework that transforms any differentiable model into a self-explaining model whose predictions can be attributed to groups of features.
SOP achieves the highest performance while also scoring highly on faithfulness metrics on ImageNet and CosmoGrid.
We validate the usefulness of the groups learned by SOP through their high purity, strong human distinction ability, and practical utility in scientific discovery.
- Score: 8.68707471649733
- Abstract: Feature attributions explain machine learning predictions by assigning importance scores to input features. While faithful attributions accurately reflect feature contributions to the model's prediction, unfaithful ones can lead to misleading interpretations, making them unreliable in high-stakes domains. The unfaithfulness of post-hoc attributions has motivated the development of self-explaining models. However, self-explaining models often trade off performance for interpretability. In this work, we develop Sum-of-Parts (SOP), a new framework that transforms any differentiable model into a self-explaining model whose predictions can be attributed to groups of features. The SOP framework leverages pretrained deep learning models with custom attention modules to learn useful feature groups end-to-end without direct supervision. With these capabilities, SOP achieves the highest performance while also scoring highly on faithfulness metrics on both ImageNet and CosmoGrid. We validate the usefulness of the groups learned by SOP through their high purity, strong human distinction ability, and practical utility in scientific discovery. In a case study, we show how SOP assists cosmologists in uncovering new insights about galaxy formation.
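To make the decomposition concrete, below is a minimal PyTorch sketch of the sum-of-parts idea. It is not the authors' architecture (SOP builds custom attention modules on top of pretrained backbones); it only shows how a prediction can be written as a sum of per-group scores so that each group's contribution is faithful by construction. The class name, the soft group-membership parameterization, and the toy backbone are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SumOfPartsSketch(nn.Module):
    """Toy illustration of the sum-of-parts decomposition: the prediction is
    the sum of scores computed from learned groups of input features, so each
    group's contribution can be read off directly from its score."""

    def __init__(self, backbone: nn.Module, num_features: int, num_groups: int):
        super().__init__()
        self.backbone = backbone  # any differentiable model mapping features to a scalar score
        # hypothetical soft membership of each feature in each group
        self.membership = nn.Parameter(torch.randn(num_groups, num_features))

    def forward(self, x: torch.Tensor):
        # x: (batch, num_features); masks: (num_groups, num_features) in [0, 1]
        masks = torch.sigmoid(self.membership)
        # score every group on its masked copy of the input: (batch, num_groups)
        group_scores = torch.stack(
            [self.backbone(x * m).squeeze(-1) for m in masks], dim=1
        )
        prediction = group_scores.sum(dim=1)  # the "sum of parts"
        return prediction, group_scores       # group_scores serve as group attributions

# toy usage with a small MLP backbone
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
model = SumOfPartsSketch(backbone, num_features=16, num_groups=4)
pred, attributions = model(torch.randn(8, 16))
```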
Related papers
- Explainability of Highly Associated Fuzzy Churn Patterns in Binary Classification [21.38368444137596]
This study emphasizes the importance of identifying multivariate patterns and setting soft bounds for intuitive interpretation.
The main objective is to use a machine learning model and fuzzy-set theory with top-k HUIM to identify highly associated patterns of customer churn.
As a result, the study introduces an innovative approach that improves the explainability and effectiveness of customer churn prediction models.
arXiv Detail & Related papers (2024-10-21T09:44:37Z) - Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
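The alignment idea can be sketched with Python's built-in ast module. The snippet below is not the ASTxplainer implementation; it merely assigns hypothetical per-token scores (e.g., token log-probabilities from a code LLM) to the innermost AST node whose source span covers each token. The function name and the (char_start, char_end, score) input format are assumptions.

```python
import ast

def align_scores_to_ast(source: str, token_spans):
    """Attach per-token scores to the innermost AST node covering each token.
    `token_spans` is a list of (char_start, char_end, score) tuples - a
    hypothetical format for token-level model scores."""
    tree = ast.parse(source)
    # character offset at which each source line starts
    line_offsets = [0]
    for line in source.splitlines(keepends=True):
        line_offsets.append(line_offsets[-1] + len(line))

    def char_range(node):
        if getattr(node, "end_lineno", None) is None:
            return None
        start = line_offsets[node.lineno - 1] + node.col_offset
        end = line_offsets[node.end_lineno - 1] + node.end_col_offset
        return start, end

    per_node_type = {}
    for tok_start, tok_end, score in token_spans:
        best, best_len = None, None
        for node in ast.walk(tree):
            rng = char_range(node)
            if rng and rng[0] <= tok_start and tok_end <= rng[1]:
                if best_len is None or rng[1] - rng[0] < best_len:
                    best, best_len = node, rng[1] - rng[0]  # keep the innermost node
        if best is not None:
            per_node_type.setdefault(type(best).__name__, []).append(score)
    return per_node_type

# toy usage: two "tokens" of a one-line function with made-up log-probabilities
print(align_scores_to_ast("def f(x): return x + 1", [(10, 16, -0.2), (21, 22, -1.3)]))
```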
arXiv Detail & Related papers (2023-08-07T18:50:57Z) - On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z) - Post Hoc Explanations of Language Models Can Improve Language Models [43.2109029463221]
We present a novel framework, Amplifying Model Performance by Leveraging In-Context Learning with Post Hoc Explanations (AMPLIFY).
We leverage post hoc explanation methods which output attribution scores (explanations) capturing the influence of each of the input features on model predictions.
Our framework, AMPLIFY, leads to prediction accuracy improvements of about 10-25% over a wide range of tasks.
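A hedged sketch of the general recipe, not the paper's exact pipeline: attribution scores from a post-hoc explainer are used to surface the most influential words of each few-shot example as an explicit rationale in the prompt. The function name and the (text, label, token-score) example format are assumed for illustration.

```python
def build_amplified_prompt(examples, query, top_k=3):
    """Build a few-shot prompt whose examples carry rationales derived from
    post-hoc attribution scores.  Each example is (text, label, {token: score}),
    a hypothetical format for illustration only."""
    parts = []
    for text, label, scores in examples:
        # keep the k most influential tokens as an explicit rationale
        keywords = [tok for tok, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]
        parts.append(f"Text: {text}\nKey words: {', '.join(keywords)}\nLabel: {label}\n")
    parts.append(f"Text: {query}\nKey words:")
    return "\n".join(parts)

# toy usage with made-up attribution scores for one sentiment example
examples = [
    ("the movie was painfully slow and dull", "negative",
     {"painfully": 0.9, "dull": 0.8, "slow": 0.6, "movie": 0.1}),
]
print(build_amplified_prompt(examples, "a sharp, funny and moving film"))
```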
arXiv Detail & Related papers (2023-05-19T04:46:04Z) - NxPlain: Web-based Tool for Discovery of Latent Concepts [16.446370662629555]
We present NxPlain, a web application that provides an explanation of a model's prediction using latent concepts.
NxPlain discovers latent concepts learned in a deep NLP model, provides an interpretation of the knowledge learned in the model, and explains its predictions based on the used concepts.
arXiv Detail & Related papers (2023-03-06T10:45:24Z) - Measuring the Driving Forces of Predictive Performance: Application to Credit Scoring [0.0]
In credit scoring, machine learning models are known to outperform standard parametric models.
We introduce the XPER methodology to decompose a performance metric into contributions associated with a model's features.
We show that a small number of features can explain a surprisingly large part of the model performance.
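As a rough illustration of decomposing a performance metric into feature contributions, the sketch below computes exhaustive Shapley values of the metric, replacing out-of-coalition features by their column means. This is in the spirit of XPER but is not the authors' estimator; the function name, the mean-replacement baseline, and the toy usage are assumptions.

```python
import itertools
import math
import numpy as np

def shapley_performance_decomposition(model, X, y, metric):
    """Decompose a performance metric into per-feature contributions with
    exhaustive Shapley values; features outside a coalition are replaced by
    their column mean.  Only practical for a handful of features."""
    n = X.shape[1]
    baseline = X.mean(axis=0)

    def perf(coalition):
        X_masked = np.tile(baseline, (X.shape[0], 1))
        X_masked[:, list(coalition)] = X[:, list(coalition)]
        return metric(y, model(X_masked))

    contributions = np.zeros(n)
    for j in range(n):
        others = [f for f in range(n) if f != j]
        for size in range(len(others) + 1):
            weight = math.factorial(size) * math.factorial(n - size - 1) / math.factorial(n)
            for coalition in itertools.combinations(others, size):
                contributions[j] += weight * (perf(coalition + (j,)) - perf(coalition))
    return contributions

# toy usage: accuracy of a fixed rule decomposed over three features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
predict = lambda Z: (Z[:, 0] + 0.5 * Z[:, 1] > 0).astype(int)
accuracy = lambda y_true, y_pred: float((y_true == y_pred).mean())
print(shapley_performance_decomposition(predict, X, y, accuracy))
```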
arXiv Detail & Related papers (2022-12-12T13:09:46Z) - Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amount of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z) - Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set [50.67431815647126]
Post-hoc global/local feature attribution methods are increasingly used to understand machine learning models.
We show that partial orders of local/global feature importance arise from this methodology.
We show that every relation among features present in these partial orders also holds in the rankings provided by existing approaches.
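A toy sketch of consensus on feature attributions, not the paper's algorithm: given attribution vectors from several equally good models, one feature is ranked above another only when every model agrees, and the pair is otherwise left incomparable, yielding a partial order. The function name and example values are made up for illustration.

```python
import numpy as np

def consensus_partial_order(attributions):
    """Return the set of pairs (i, j) such that feature i has strictly larger
    attribution magnitude than feature j in every model of the set; all other
    pairs remain incomparable."""
    A = np.abs(np.asarray(attributions))  # (n_models, n_features)
    n_features = A.shape[1]
    relations = set()
    for i in range(n_features):
        for j in range(n_features):
            if i != j and np.all(A[:, i] > A[:, j]):
                relations.add((i, j))  # i dominates j in every model
    return relations

# three hypothetical models from a Rashomon set, four features
attrs = [[0.9, 0.4, 0.1, 0.05],
         [0.8, 0.5, 0.2, 0.10],
         [0.7, 0.3, 0.4, 0.05]]
print(consensus_partial_order(attrs))
# feature 0 dominates all others; features 1 and 2 remain incomparable
```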
arXiv Detail & Related papers (2021-10-26T02:53:14Z) - Interpreting and improving deep-learning models with reality checks [13.287382944078562]
This chapter covers recent work aiming to interpret models by attributing importance to features and feature groups for a single prediction.
We show how these attributions can be used to directly improve the generalization of a neural network or to distill it into a simple model.
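One simple way attributions can feed back into training, sketched under assumptions (a plain input-gradient penalty, not the chapter's specific method): penalize importance placed on features the practitioner knows are irrelevant, alongside the usual task loss.

```python
import torch
import torch.nn as nn

def attribution_regularized_loss(model, x, y, irrelevant_mask, lam=1.0):
    """Task loss plus a penalty on input-gradient attributions over features
    that are known to be irrelevant (a simple stand-in for attribution-based
    regularization)."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    task_loss = nn.functional.cross_entropy(logits, y)
    # crude attributions: gradients of the summed logits w.r.t. the input
    grads = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
    penalty = (grads.abs() * irrelevant_mask).mean()
    return task_loss + lam * penalty

# toy usage: the last three of ten features are flagged as spurious
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
mask = torch.zeros(10)
mask[7:] = 1.0
loss = attribution_regularized_loss(model, x, y, mask)
loss.backward()
```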
arXiv Detail & Related papers (2021-08-16T00:58:15Z) - Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
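A minimal sketch of the GAM-style ranking idea, assuming a PyTorch setup and a RankNet-style pairwise loss (not the paper's exact architecture or training objective): each feature passes through its own small sub-network and the ranking score is the sum of per-feature terms, so each term can be inspected in isolation.

```python
import torch
import torch.nn as nn

class GAMRankerSketch(nn.Module):
    """Each feature gets its own small sub-network; the ranking score is the
    sum of the per-feature terms, so every term can be plotted or inspected."""

    def __init__(self, num_features: int, hidden: int = 16):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, num_features) -> per-feature terms: (batch, num_features)
        terms = torch.cat(
            [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)], dim=1
        )
        return terms.sum(dim=1), terms  # score and its additive contributions

# toy pairwise training step: prefer document a over document b
model = GAMRankerSketch(num_features=5)
doc_a, doc_b = torch.randn(4, 5), torch.randn(4, 5)
score_a, _ = model(doc_a)
score_b, _ = model(doc_b)
loss = nn.functional.softplus(-(score_a - score_b)).mean()
loss.backward()
```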
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.