Towards credible visual model interpretation with path attribution
- URL: http://arxiv.org/abs/2305.14395v1
- Date: Tue, 23 May 2023 06:23:08 GMT
- Title: Towards credible visual model interpretation with path attribution
- Authors: Naveed Akhtar, Muhammad A. A. K. Jalwana
- Abstract summary: The path attribution framework stands out among post-hoc model interpretation tools due to its axiomatic nature.
Recent developments show that this framework can still suffer from counter-intuitive results.
We devise a scheme to preclude the conditions in which visual model interpretation can invalidate the axiomatic properties of path attribution.
- Score: 24.86176236641865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Originally inspired by game theory, the path attribution framework stands
out among post-hoc model interpretation tools due to its axiomatic nature.
However, recent developments show that this framework can still suffer from
counter-intuitive results. Moreover, specifically for deep visual models, the
existing path-based methods also fall short on conforming to the original
intuitions that are the basis of the claimed axiomatic properties of this
framework. We address these problems with a systematic investigation, and
pinpoint the conditions in which the counter-intuitive results can be avoided
for deep visual model interpretation with the path attribution strategy. We
also devise a scheme to preclude the conditions in which visual model
interpretation can invalidate the axiomatic properties of path attribution.
These insights are combined into a method that enables reliable visual model
interpretation. Our findings are established empirically with multiple datasets,
models and evaluation metrics. Extensive experiments show a consistent
performance gain of our method over the baselines.
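For context, the canonical path attribution method behind this framework is Integrated Gradients (Sundararajan et al., 2017), which integrates model gradients along a straight-line path from a baseline to the input. The sketch below is a minimal PyTorch illustration of that baseline technique, not the method proposed in this paper; the model, input, and baseline tensors are illustrative placeholders.

```python
import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Riemann-sum approximation of the straight-line path integral of
    gradients from `baseline` to the input `x` (Sundararajan et al., 2017)."""
    # Interpolation coefficients for the straight-line path.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    # Batch of interpolated inputs, shape (steps, C, H, W).
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path = path.detach().requires_grad_(True)

    # Gradient of the target-class logit w.r.t. every interpolated input.
    logits = model(path)
    score = logits[:, target_class].sum()
    grads = torch.autograd.grad(score, path)[0]

    # Average gradient along the path, scaled by the displacement from the baseline.
    return (x - baseline) * grads.mean(dim=0)
```

Under the completeness axiom, these attributions should sum (up to the Riemann-sum error) to the difference between the model's target-class output at the input and at the baseline; the abstract's point is that satisfying such axioms does not by itself guarantee intuitive explanations for deep visual models.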
Related papers
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales [3.242050660144211]
Saliency post-hoc explainability methods are important tools for understanding increasingly complex NLP models.
We present a methodology for incorporating rationales, which are text annotations explaining human decisions, into text classification models.
arXiv Detail & Related papers (2024-04-03T22:39:33Z)
- Revealing Multimodal Contrastive Representation Learning through Latent Partial Causal Models [85.67870425656368]
We introduce a unified causal model specifically designed for multimodal data.
We show that multimodal contrastive representation learning excels at identifying latent coupled variables.
Experiments demonstrate the robustness of our findings, even when the assumptions are violated.
arXiv Detail & Related papers (2024-02-09T07:18:06Z)
- Hierarchical Bias-Driven Stratification for Interpretable Causal Effect Estimation [1.6874375111244329]
BICauseTree is an interpretable balancing method that identifies clusters where natural experiments occur locally.
We evaluate the method's performance using synthetic and realistic datasets, explore its bias-interpretability tradeoff, and show that it is comparable with existing approaches.
arXiv Detail & Related papers (2024-01-31T10:58:13Z)
- Fixing confirmation bias in feature attribution methods via semantic match [4.733072355085082]
We argue that a structured approach is required to test whether our hypotheses on the model are confirmed by the feature attributions.
This is what we call the "semantic match" between human concepts and (sub-symbolic) explanations.
arXiv Detail & Related papers (2023-07-03T09:50:08Z)
- Latent Traversals in Generative Models as Potential Flows [113.4232528843775]
We propose to model latent structures with a learned dynamic potential landscape.
Inspired by physics, optimal transport, and neuroscience, these potential landscapes are learned as physically realistic partial differential equations.
Our method achieves trajectories that are both qualitatively and quantitatively more disentangled than those of state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-25T15:53:45Z)
- Planning with Diffusion for Flexible Behavior Synthesis [125.24438991142573]
We consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem.
The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories.
arXiv Detail & Related papers (2022-05-20T07:02:03Z)
- Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference [17.636872632724582]
We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision.
The proposed model has built-in interpretability and shows superior capability in monotonicity inference, systematic generalization, and interpretability.
arXiv Detail & Related papers (2022-03-09T16:31:58Z)
- Distributional Depth-Based Estimation of Object Articulation Models [21.046351215949525]
We propose a method that efficiently learns distributions over articulation model parameters directly from depth images.
Our core contributions include a novel representation for distributions over rigid body transformations.
We introduce a novel deep learning based approach, DUST-net, that performs category-independent articulation model estimation.
arXiv Detail & Related papers (2021-08-12T17:44:51Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.