This changes to that : Combining causal and non-causal explanations to
generate disease progression in capsule endoscopy
- URL: http://arxiv.org/abs/2212.02506v1
- Date: Mon, 5 Dec 2022 12:46:19 GMT
- Title: This changes to that : Combining causal and non-causal explanations to
generate disease progression in capsule endoscopy
- Authors: Anuja Vats, Ahmed Mohammed, Marius Pedersen, Nirmalie Wiratunga
- Abstract summary: We propose a unified explanation approach that combines both model-dependent and agnostic explanations to produce an explanation set.
The generated explanations are consistent in the neighborhood of a sample and can highlight causal relationships between image content and the outcome.
- Score: 5.287156503763459
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the unequivocal need for understanding the decision processes of deep
learning networks, both model-dependent and model-agnostic techniques have
become very popular. Although both of these ideas provide transparency for
automated decision making, most methodologies focus on either using the
model gradients (model-dependent) or ignoring the model's internal states and
reasoning only about its behavior/outcome on individual instances
(model-agnostic). In this work, we propose a unified explanation approach that,
given an instance, combines both model-dependent and model-agnostic explanations
to produce an explanation set. The generated explanations are not only consistent
in the neighborhood of a sample but can also highlight causal relationships between
image content and the outcome. We use the Wireless Capsule Endoscopy (WCE) domain
to illustrate the effectiveness of our explanations. The saliency maps generated
by our approach are comparable to or better than those of existing methods on the
softmax information score.
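The abstract does not spell out the mechanics of the combination, so the sketch below is only a minimal illustration of the general idea: a gradient-based (model-dependent) saliency map is blended with an occlusion-based (model-agnostic) attribution map into a single explanation. The function names, the occlusion scheme, and the blending weight are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: blend a gradient-based (model-dependent) saliency map
# with an occlusion-based (model-agnostic) attribution map. The occlusion
# scheme and the blending weight are illustrative, not the paper's method.
import torch
import torch.nn.functional as F


def gradient_saliency(model, image, target_class):
    """Model-dependent map: |d class-score / d input|, reduced over channels."""
    image = image.clone().detach().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values  # shape (H, W)


def occlusion_map(model, image, target_class, patch=16, stride=16):
    """Model-agnostic map: softmax drop when a grey patch hides each region."""
    with torch.no_grad():
        base = F.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    _, h, w = image.shape
    heat = torch.zeros(h, w)
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            occluded = image.clone()
            occluded[:, top:top + patch, left:left + patch] = 0.5
            with torch.no_grad():
                prob = F.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heat[top:top + patch, left:left + patch] = base - prob
    return heat.clamp(min=0)


def combined_explanation(model, image, target_class, alpha=0.5):
    """Normalise both maps to [0, 1] and blend them; alpha is a free choice."""
    def norm(m):
        return (m - m.min()) / (m.max() - m.min() + 1e-8)
    g = gradient_saliency(model, image, target_class)
    o = occlusion_map(model, image, target_class)
    return alpha * norm(g) + (1 - alpha) * norm(o)
```

Here `model`, `image` (a C×H×W tensor, assumed scaled to [0, 1]), and `alpha` are placeholders; the paper's explanation set and its causal analysis go beyond this simple linear blend.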
Related papers
- CNN-based explanation ensembling for dataset, representation and explanations evaluation [1.1060425537315088]
We explore the potential of ensembling explanations generated by deep classification models using a convolutional model.
Through experimentation and analysis, we aim to investigate the implications of combining explanations to uncover more coherent and reliable patterns of the model's behavior.
arXiv Detail & Related papers (2024-04-16T08:39:29Z)
- Improving Explainability of Disentangled Representations using Multipath-Attribution Mappings [12.145748796751619]
We propose a framework that utilizes interpretable disentangled representations for downstream-task prediction.
We demonstrate the effectiveness of our approach on a synthetic benchmark suite and two medical datasets.
arXiv Detail & Related papers (2023-06-15T10:52:29Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- Towards Trustable Skin Cancer Diagnosis via Rewriting Model's Decision [12.306688233127312]
We introduce a human-in-the-loop framework in the model training process.
Our method can automatically discover confounding factors.
It is capable of learning confounding concepts using easily obtained concept exemplars.
arXiv Detail & Related papers (2023-03-02T01:02:18Z)
- Causality-Aware Local Interpretable Model-Agnostic Explanations [7.412445894287709]
We propose a novel extension to a widely used local and model-agnostic explainer, which encodes explicit causal relationships within the data surrounding the instance being explained.
Our approach outperforms the original method in terms of faithfully replicating the black-box model's mechanism and the consistency and reliability of the generated explanations.
arXiv Detail & Related papers (2022-12-10T10:12:27Z)
- Agree to Disagree: When Deep Learning Models With Identical Architectures Produce Distinct Explanations [0.0]
We introduce a measure of explanation consistency which we use to highlight the identified problems on the MIMIC-CXR dataset.
We find that explanations of identical models trained with different setups have low consistency: approximately 33% on average.
We conclude that current trends in model explanation are not sufficient to mitigate the risks of deploying models in real life healthcare applications.
arXiv Detail & Related papers (2021-05-14T12:16:47Z)
- A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state of the art in a simulation setting and on real data from large-scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting the model representation onto a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)