Discriminative Feature Attributions: Bridging Post Hoc Explainability
and Inherent Interpretability
- URL: http://arxiv.org/abs/2307.15007v2
- Date: Thu, 15 Feb 2024 20:10:47 GMT
- Title: Discriminative Feature Attributions: Bridging Post Hoc Explainability
and Inherent Interpretability
- Authors: Usha Bhalla, Suraj Srinivas, Himabindu Lakkaraju
- Abstract summary: Post hoc explanations may incorrectly attribute high importance to features that are unimportant or non-discriminative for the underlying task.
Inherently interpretable models, on the other hand, circumvent these issues by explicitly encoding explanations into model architecture.
We propose Distractor Erasure Tuning (DiET), a method that adapts black-box models to be robust to distractor erasure.
- Score: 29.459228981179674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increased deployment of machine learning models in various
real-world applications, researchers and practitioners alike have emphasized
the need for explanations of model behaviour. To this end, two broad strategies
have been outlined in prior literature to explain models. Post hoc explanation
methods explain the behaviour of complex black-box models by identifying
features critical to model predictions; however, prior work has shown that
these explanations may not be faithful, in that they incorrectly attribute high
importance to features that are unimportant or non-discriminative for the
underlying task. Inherently interpretable models, on the other hand, circumvent
these issues by explicitly encoding explanations into model architecture,
meaning their explanations are naturally faithful, but they often exhibit poor
predictive performance due to their limited expressive power. In this work, we
identify a key reason for the lack of faithfulness of feature attributions: the
lack of robustness of the underlying black-box models, especially to the
erasure of unimportant distractor features in the input. To address this issue,
we propose Distractor Erasure Tuning (DiET), a method that adapts black-box
models to be robust to distractor erasure, thus providing discriminative and
faithful attributions. This strategy naturally combines the ease of use of post
hoc explanations with the faithfulness of inherently interpretable models. We
perform extensive experiments on semi-synthetic and real-world datasets and
show that DiET produces models that (1) closely approximate the original
black-box models they are intended to explain, and (2) yield explanations that
match approximate ground truths available by construction. Our code is made
public at https://github.com/AI4LIFE-GROUP/DiET.
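The abstract states the goal of distractor-erasure robustness but not the training objective itself. As a rough illustration of the idea only, the hypothetical PyTorch sketch below fine-tunes a copy of a classifier so that its predictions on masked inputs (with candidate distractor features erased) match the original black box's predictions on the full inputs. The `masker` network, loss terms, and hyperparameters here are illustrative assumptions, not the published DiET algorithm, which is available at the linked repository.

```python
# Hedged sketch of distractor-erasure fine-tuning (illustrative, not the exact DiET objective).
# Assumptions: `black_box` is the frozen original classifier, `model` is a trainable copy,
# and `masker` is a hypothetical network producing one logit per input feature.
import torch
import torch.nn.functional as F

def distractor_erasure_step(black_box, model, masker, x, optimizer, sparsity_weight=1e-3):
    """One fine-tuning step: the adapted model should match the original
    black box's predictions even after distractor features are erased."""
    with torch.no_grad():
        target = F.softmax(black_box(x), dim=-1)   # original predictions on the full input
    mask = torch.sigmoid(masker(x))                # soft per-feature mask in [0, 1]
    x_masked = x * mask                            # erase (down-weight) candidate distractors
    log_pred = F.log_softmax(model(x_masked), dim=-1)
    match_loss = F.kl_div(log_pred, target, reduction="batchmean")
    sparsity = mask.mean()                         # encourage keeping only few features
    loss = match_loss + sparsity_weight * sparsity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), mask.detach()              # the mask doubles as a feature attribution
```

In this sketch the optimizer would update both `model` and `masker` parameters, and the learned mask can then be read off as a discriminative attribution whose faithfulness can be checked by comparing the adapted model's output on the masked input against the black box's output on the full input.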
Related papers
- DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation [21.172795461188578]
We propose DISCRET, a self-interpretable ITE framework that synthesizes faithful, rule-based explanations for each sample.
A key insight behind DISCRET is that explanations can serve dually as database queries to identify similar subgroups of samples.
We provide a novel RL algorithm to efficiently synthesize these explanations from a large search space.
arXiv Detail & Related papers (2024-06-02T04:01:08Z)
- Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation [71.21346469382821]
We introduce collaborative black-box tuning (CBBT) for both textual prompt optimization and output feature adaptation for black-box models.
CBBT is extensively evaluated on eleven downstream benchmarks and achieves remarkable improvements compared to existing black-box VL adaptation methods.
arXiv Detail & Related papers (2023-12-26T06:31:28Z)
- Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals [16.67633872254042]
Counterfactual explanations offer an intuitive and straightforward way to explain black-box models.
Existing work has primarily relied on surrogate models to learn how the input data is distributed.
We propose a novel algorithmic framework for generating Energy-Constrained Conformal Counterfactuals that are only as plausible as the model permits.
arXiv Detail & Related papers (2023-12-17T08:24:44Z)
- BELLA: Black box model Explanations by Local Linear Approximations [10.05944106581306]
We present BELLA, a deterministic model-agnostic post-hoc approach for explaining the individual predictions of regression black-box models.
BELLA provides explanations in the form of a linear model trained in the feature space.
BELLA can produce both factual and counterfactual explanations.
arXiv Detail & Related papers (2023-05-18T21:22:23Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
- Model extraction from counterfactual explanations [68.8204255655161]
We show how an adversary can leverage the information provided by counterfactual explanations to build high-fidelity and high-accuracy model extraction attacks.
Our attack enables the adversary to build a faithful copy of a target model by accessing its counterfactual explanations.
arXiv Detail & Related papers (2020-09-03T19:02:55Z)
- Explainable Deep Modeling of Tabular Data using TableGraphNet [1.376408511310322]
We propose a new architecture that produces explainable predictions in the form of additive feature attributions.
We show that our explainable model attains the same level of performance as black box models.
arXiv Detail & Related papers (2020-02-12T20:02:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.