DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation
- URL: http://arxiv.org/abs/2406.00611v1
- Date: Sun, 2 Jun 2024 04:01:08 GMT
- Title: DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation
- Authors: Yinjun Wu, Mayank Keoliya, Kan Chen, Neelay Velingker, Ziyang Li, Emily J Getzen, Qi Long, Mayur Naik, Ravi B Parikh, Eric Wong
- Abstract summary: We propose DISCRET, a self-interpretable ITE framework that synthesizes faithful, rule-based explanations for each sample.
A key insight behind DISCRET is that explanations can serve dually as database queries to identify similar subgroups of samples.
We provide a novel RL algorithm to efficiently synthesize these explanations from a large search space.
- Score: 21.172795461188578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing faithful yet accurate AI models is challenging, particularly in the field of individual treatment effect estimation (ITE). ITE prediction models deployed in critical settings such as healthcare should ideally be (i) accurate, and (ii) provide faithful explanations. However, current solutions are inadequate: state-of-the-art black-box models do not supply explanations, post-hoc explainers for black-box models lack faithfulness guarantees, and self-interpretable models greatly compromise accuracy. To address these issues, we propose DISCRET, a self-interpretable ITE framework that synthesizes faithful, rule-based explanations for each sample. A key insight behind DISCRET is that explanations can serve dually as database queries to identify similar subgroups of samples. We provide a novel RL algorithm to efficiently synthesize these explanations from a large search space. We evaluate DISCRET on diverse tasks involving tabular, image, and text data. DISCRET outperforms the best self-interpretable models and has accuracy comparable to the best black-box models while providing faithful explanations. DISCRET is available at https://github.com/wuyinjun-1993/DISCRET-ICML2024.
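To make the explanation-as-query idea concrete, here is a minimal sketch (not the authors' implementation; see the linked repository for that). It assumes hypothetical tabular columns (`age`, `stage`, `treated`, `outcome`) and a conjunctive rule, and shows how the rule doubles as a database query whose retrieved subgroup directly yields a treatment-effect estimate as the treated-versus-control difference in mean outcomes.

```python
import numpy as np
import pandas as pd

# Hypothetical patient records; column names are assumptions for illustration.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(30, 80, n),
    "stage": rng.integers(1, 5, n),
    "treated": rng.integers(0, 2, n),
})
# Synthetic outcome with a heterogeneous treatment effect.
df["outcome"] = (
    0.05 * df["age"] + df["stage"]
    + df["treated"] * (df["stage"] >= 3)
    + rng.normal(0, 0.5, n)
)

# A rule-based explanation for one sample, e.g. as an RL policy might synthesize.
# Literals are (column, operator, threshold) triples -- purely illustrative.
rule = [("age", ">=", 60), ("stage", ">=", 3)]

def rule_to_query(rule):
    """Translate a conjunctive rule into a pandas query string."""
    return " and ".join(f"{col} {op} {val}" for col, op, val in rule)

def estimate_ite(df, rule):
    """The estimate is computed directly from the subgroup the explanation
    retrieves (difference of treated and control means), so the explanation
    is faithful to the prediction by construction."""
    subgroup = df.query(rule_to_query(rule))
    treated = subgroup.loc[subgroup["treated"] == 1, "outcome"]
    control = subgroup.loc[subgroup["treated"] == 0, "outcome"]
    return treated.mean() - control.mean(), len(subgroup)

ite, support = estimate_ite(df, rule)
print(f"rule: {rule_to_query(rule)} | subgroup size: {support} | ITE estimate: {ite:.2f}")
```

DISCRET's actual estimator and rule language may differ; the point of the sketch is only that the same rule serves as both the explanation and the subgroup query.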
Related papers
- Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models [21.698201509643624]
Self-interpretable models, such as concept-based networks, offer insights by connecting decisions to human-understandable concepts.
Post-hoc methods like Shapley values, while theoretically robust, are computationally expensive and resource-intensive.
We propose a novel method that combines their strengths, providing theoretically guaranteed self-interpretability for black-box models.
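For context on the cost claim above: exact Shapley values average a feature's marginal contribution over every coalition of the remaining features, i.e. 2^(d-1) subsets per feature. A brute-force sketch with a toy linear model (my own illustration, not the method proposed in this paper):

```python
from itertools import combinations
from math import factorial
import numpy as np

def exact_shapley(predict, x, baseline):
    """Brute-force Shapley values: O(2^d) model evaluations per feature."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                weight = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                # Coalition value: features in the coalition keep their
                # observed value, the rest are set to the baseline.
                with_i = baseline.copy()
                with_i[list(S) + [i]] = x[list(S) + [i]]
                without_i = baseline.copy()
                without_i[list(S)] = x[list(S)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy linear model: Shapley values should recover w * (x - baseline).
w = np.array([1.0, -2.0, 0.5, 3.0])
predict = lambda z: float(w @ z)
x, baseline = np.ones(4), np.zeros(4)
print(exact_shapley(predict, x, baseline))   # approximately [ 1. -2. 0.5 3. ]
```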
arXiv Detail & Related papers (2024-10-29T07:35:33Z)
- Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability [29.459228981179674]
Post hoc explanations incorrectly attribute high importance to features that are unimportant or non-discriminative for the underlying task.
Inherently interpretable models, on the other hand, circumvent these issues by explicitly encoding explanations into model architecture.
We propose Distractor Erasure Tuning (DiET), a method that adapts black-box models to be robust to distractor erasure.
arXiv Detail & Related papers (2023-07-27T17:06:02Z)
- Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
We further refine the robustness metric: a model is judged robust only if its performance is consistently accurate across every clique.
arXiv Detail & Related papers (2023-05-23T12:05:09Z)
- BELLA: Black box model Explanations by Local Linear Approximations [10.05944106581306]
We present BELLA, a deterministic model-agnostic post-hoc approach for explaining the individual predictions of regression black-box models.
BELLA provides explanations in the form of a linear model trained in the feature space.
BELLA can produce both factual and counterfactual explanations.
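As a rough illustration of the local-linear-surrogate idea (a generic sketch under assumed models and an assumed nearest-neighbour neighbourhood, not BELLA's exact procedure): fit a linear model to the black box's predictions around the instance of interest; its coefficients then serve as the explanation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = X[:, 0] ** 2 + 2 * X[:, 1] + rng.normal(0, 0.1, 2000)

black_box = GradientBoostingRegressor().fit(X, y)   # the opaque model

def local_linear_explanation(black_box, X, x0, k=100):
    """Fit a linear surrogate to the black box on the k nearest neighbours of x0."""
    idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
    return LinearRegression().fit(X[idx], black_box.predict(X[idx]))

x0 = np.array([1.0, 0.5, -0.2])
surrogate = local_linear_explanation(black_box, X, x0)
print("local coefficients:", surrogate.coef_)          # factual explanation
print("local prediction:  ", surrogate.predict(x0[None])[0])
```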
arXiv Detail & Related papers (2023-05-18T21:22:23Z)
- VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives [84.48039784446166]
We show that supervising a model's visual feature importance (FI) can meaningfully improve VQA model accuracy as well as performance on several Right-for-the-Right-Reason metrics.
Our best performing method, Visual Feature Importance Supervision (VisFIS), outperforms strong baselines on benchmark VQA datasets.
Predictions are more accurate when explanations are plausible and faithful, and not when they are plausible but not faithful.
arXiv Detail & Related papers (2022-06-22T17:02:01Z)
- Interpretable Mixture of Experts [71.55701784196253]
Interpretable Mixture of Experts (IME) is an inherently-interpretable modeling framework.
IME is demonstrated to be more accurate than single interpretable models and to perform comparably to existing state-of-the-art Deep Neural Networks (DNNs).
IME's explanations are compared to commonly used post-hoc explanation methods through a user study.
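A hypothetical, minimal sketch of a mixture of interpretable experts (not the IME architecture or training procedure from the paper): linear experts, a softmax gate, and the selected expert's coefficients read off as the per-sample explanation.

```python
import torch
import torch.nn as nn

class InterpretableMoE(nn.Module):
    """Soft mixture of linear experts: the gate scores experts per sample,
    and the selected expert's weights double as that sample's explanation."""
    def __init__(self, d_in, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(d_in, 1) for _ in range(n_experts)])
        self.gate = nn.Linear(d_in, n_experts)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)               # (B, E)
        outputs = torch.cat([e(x) for e in self.experts], dim=-1)   # (B, E)
        return (weights * outputs).sum(-1), weights

# Toy regression data with two regimes, so different experts can specialize.
torch.manual_seed(0)
X = torch.randn(1024, 5)
y = torch.where(X[:, 0] > 0, X @ torch.ones(5), -(X @ torch.ones(5)))

model = InterpretableMoE(d_in=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    pred, _ = model(X)
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Per-sample explanation: the coefficients of the expert the gate selects.
_, gate_weights = model(X[:1])
chosen = model.experts[gate_weights.argmax().item()]
print("selected expert coefficients:", chosen.weight.data.squeeze())
```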
arXiv Detail & Related papers (2022-06-05T06:40:15Z)
- The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations [30.248116795946977]
Post-hoc explainability methods are often proposed to help users trust model predictions.
We use real data from four settings in finance, healthcare, college admissions, and the US justice system.
We find that the approximation quality of explanation models, also known as fidelity, differs significantly between subgroups.
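Fidelity here is the rate at which the explanation (surrogate) model reproduces the black box's predictions; the disparity finding corresponds to computing that agreement per subgroup. A small synthetic illustration (data, group labels, and agreement rates are invented for the sketch):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], n, p=[0.8, 0.2]),
    "blackbox_pred": rng.integers(0, 2, n),
})
# Hypothetical surrogate predictions that mimic the black box less well on group B.
agree_prob = np.where(df["group"] == "A", 0.95, 0.80)
df["surrogate_pred"] = np.where(rng.random(n) < agree_prob,
                                df["blackbox_pred"], 1 - df["blackbox_pred"])

# Fidelity = rate at which the explanation model reproduces the black box,
# computed per subgroup and overall.
agreement = df["surrogate_pred"] == df["blackbox_pred"]
print(agreement.groupby(df["group"]).mean())   # fidelity gap between A and B
print("overall:", agreement.mean())
```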
arXiv Detail & Related papers (2022-05-06T15:23:32Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Model extraction from counterfactual explanations [68.8204255655161]
We show how an adversary can leverage the information provided by counterfactual explanations to build high-fidelity and high-accuracy model extraction attacks.
Our attack enables the adversary to build a faithful copy of a target model by accessing its counterfactual explanations.
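Sketched generically (not the paper's exact attack): every counterfactual handed back to a user is a labeled point near the decision boundary, so pooling queries with their counterfactuals gives the adversary a training set for a surrogate that closely agrees with the target model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
target = LogisticRegression().fit(X, y)          # the model being attacked

def counterfactual(x, model, step=0.05, max_iter=200):
    """Toy counterfactual: nudge x along the linear score direction until the
    predicted class flips (a stand-in for a real counterfactual generator)."""
    direction = model.coef_[0] / np.linalg.norm(model.coef_[0])
    sign = -1 if model.predict(x[None])[0] == 1 else 1
    cf = x.copy()
    for _ in range(max_iter):
        cf = cf + sign * step * direction
        if model.predict(cf[None])[0] != model.predict(x[None])[0]:
            break
    return cf

# Adversary: a modest number of queries plus their counterfactuals.
queries = rng.normal(size=(200, 2))
cfs = np.array([counterfactual(x, target) for x in queries])
X_stolen = np.vstack([queries, cfs])
y_stolen = target.predict(X_stolen)              # labels come from the API
surrogate = DecisionTreeClassifier(max_depth=5).fit(X_stolen, y_stolen)

X_test = rng.normal(size=(2000, 2))
agreement = (surrogate.predict(X_test) == target.predict(X_test)).mean()
print(f"surrogate/target agreement: {agreement:.2%}")
```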
arXiv Detail & Related papers (2020-09-03T19:02:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.