On Minimizing the Impact of Dataset Shifts on Actionable Explanations
- URL: http://arxiv.org/abs/2306.06716v1
- Date: Sun, 11 Jun 2023 16:34:19 GMT
- Title: On Minimizing the Impact of Dataset Shifts on Actionable Explanations
- Authors: Anna P. Meyer, Dan Ley, Suraj Srinivas, Himabindu Lakkaraju
- Abstract summary: We conduct rigorous theoretical analysis to demonstrate that model curvature, the weight decay parameters used during training, and the magnitude of the dataset shift are key factors that determine the extent of explanation (in)stability.
- Score: 14.83940426256441
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Right to Explanation is an important regulatory principle that allows individuals to request actionable explanations for algorithmic decisions. However, several technical challenges arise when providing such actionable explanations in practice. For instance, models are periodically retrained to handle dataset shifts. This process may invalidate some of the previously prescribed explanations, thus rendering them unactionable. Yet it is unclear if and when such invalidations occur, and what factors determine explanation stability, i.e., whether an explanation remains unchanged amidst model retraining due to dataset shifts. In this paper, we address the aforementioned gaps and provide one of the first theoretical and empirical characterizations of the factors influencing explanation stability. To this end, we conduct a rigorous theoretical analysis demonstrating that model curvature, the weight decay parameters used during training, and the magnitude of the dataset shift are key factors that determine the extent of explanation (in)stability. Extensive experimentation with real-world datasets not only validates our theoretical results, but also demonstrates that these factors dramatically impact the stability of explanations produced by various state-of-the-art methods.
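
To make the abstract's notion of explanation (in)stability concrete, here is a minimal illustrative sketch (not code from the paper): it assumes a synthetic covariate-shifted dataset, an L2-regularized logistic regression as the model, and simple weight-times-input saliency as the explanation, and reports how much a test point's top-5 explanation changes when the model is retrained on shifted data under different weight-decay strengths.

```python
# Minimal sketch (assumption-laden, not the paper's experimental setup):
# probe how weight-decay strength affects the stability of a simple
# gradient-style explanation when a model is retrained on shifted data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 20
w_true = rng.normal(size=d)          # fixed ground-truth relationship

def make_data(n=2000, shift=0.0):
    """Synthetic covariate shift: feature means move by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, d))
    y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

def saliency(model, x):
    """Weight-times-input attribution for a linear model."""
    return model.coef_.ravel() * x

def topk_overlap(a, b, k=5):
    """Fraction of shared top-k features (by absolute attribution)."""
    ta = set(np.argsort(-np.abs(a))[:k])
    tb = set(np.argsort(-np.abs(b))[:k])
    return len(ta & tb) / k

X, y = make_data(shift=0.0)
X_new, y_new = make_data(shift=0.3)            # hypothetical dataset shift
x_probe = X[0]                                 # point whose explanation we track

for C in [0.01, 1.0, 100.0]:                   # C = 1 / regularization strength
    before = LogisticRegression(C=C, max_iter=2000).fit(X, y)
    after = LogisticRegression(C=C, max_iter=2000).fit(
        np.vstack([X, X_new]), np.concatenate([y, y_new]))
    ov = topk_overlap(saliency(before, x_probe), saliency(after, x_probe))
    print(f"C={C:>6}: top-5 explanation overlap after retraining = {ov:.2f}")
```

Per the abstract's claims, one would expect explanation overlap to degrade as the effective weight decay shrinks (larger C) and as the magnitude of the shift grows; the paper's actual experiments use richer models and state-of-the-art explanation methods.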
Related papers
- Identifiability Guarantees for Causal Disentanglement from Purely Observational Data [10.482728002416348]
Causal disentanglement aims to learn about latent causal factors behind data.
Recent advances establish identifiability results assuming that interventions on (single) latent factors are available.
We provide a precise characterization of latent factors that can be identified in nonlinear causal models.
arXiv Detail & Related papers (2024-10-31T04:18:29Z)
- Cross-Entropy Is All You Need To Invert the Data Generating Process [29.94396019742267]
Empirical phenomena suggest that supervised models can learn interpretable factors of variation in a linear fashion.
Recent advances in self-supervised learning have shown that these methods can recover latent structures by inverting the data generating process.
We prove that even in standard classification tasks, models learn representations of ground-truth factors of variation up to a linear transformation.
arXiv Detail & Related papers (2024-10-29T09:03:57Z)
- Causal Temporal Representation Learning with Nonstationary Sparse Transition [22.6420431022419]
Causal Temporal Representation Learning (Ctrl) methods aim to identify the temporal causal dynamics of complex nonstationary temporal sequences.
This work adopts a sparse transition assumption, aligned with intuitive human understanding, and presents identifiability results from a theoretical perspective.
We introduce a novel framework, Causal Temporal Representation Learning with Nonstationary Sparse Transition (CtrlNS), designed to leverage the constraints on transition sparsity.
arXiv Detail & Related papers (2024-09-05T00:38:27Z)
- Sequential Representation Learning via Static-Dynamic Conditional Disentanglement [58.19137637859017]
This paper explores self-supervised disentangled representation learning within sequential data, focusing on separating time-independent and time-varying factors in videos.
We propose a new model that breaks the usual independence assumption between those factors by explicitly accounting for the causal relationship between the static/dynamic variables.
Experiments show that the proposed approach outperforms previous complex state-of-the-art techniques in scenarios where the dynamics of a scene are influenced by its content.
arXiv Detail & Related papers (2024-08-10T17:04:39Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Understanding Disparities in Post Hoc Machine Learning Explanation [2.965442487094603]
Previous work has highlighted that existing post-hoc explanation methods exhibit disparities in explanation fidelity (across 'race' and 'gender' as sensitive attributes).
We specifically assess challenges to explanation disparities that originate from properties of the data.
Results indicate that disparities in model explanations can also depend on data and model properties.
arXiv Detail & Related papers (2024-01-25T22:09:28Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines named X-MOP which allow selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Rethinking Stability for Attribution-based Explanations [20.215505482157255]
We introduce metrics to quantify the stability of an explanation and show that several popular explanation methods are unstable.
In particular, we propose new Relative Stability metrics that measure the change in output explanation with respect to change in input, model representation, or output of the underlying predictor (a rough illustrative sketch of such a ratio appears after this list).
arXiv Detail & Related papers (2022-03-14T06:19:27Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
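
As a companion to the Relative Stability metrics mentioned in the "Rethinking Stability" entry above, the following is a rough, hedged sketch of a relative-stability-style ratio: the relative change in an attribution vector divided by the relative change in the input. The norms, perturbation scheme, and normalization constants here are illustrative assumptions and may differ from the cited paper's exact definitions.

```python
# Illustrative relative-stability-style ratio (an assumption-based sketch,
# not the cited paper's exact metric): how much does an explanation change,
# relative to how much the input changed?
import numpy as np

def relative_input_stability(x, x_pert, e_x, e_x_pert, eps=1e-6):
    """Ratio of relative explanation change to relative input change.

    Larger values indicate a less stable explanation: a small input
    perturbation produced a large change in the attribution vector.
    """
    expl_change = np.linalg.norm((e_x - e_x_pert) / (e_x + eps))
    input_change = max(np.linalg.norm((x - x_pert) / (x + eps)), eps)
    return expl_change / input_change

# Toy usage with hypothetical attribution vectors:
x = np.array([1.0, 2.0, 3.0])
x_pert = x + np.array([0.01, -0.02, 0.01])   # small input perturbation
e_x = np.array([0.50, 0.30, 0.20])           # explanation at x (made up)
e_x_pert = np.array([0.10, 0.60, 0.30])      # explanation at x_pert (made up)
print(relative_input_stability(x, x_pert, e_x, e_x_pert))
```

Analogous ratios could be defined with respect to changes in the model's internal representation or its output, as the entry above describes.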
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.