TDLS: A Top-Down Layer Searching Algorithm for Generating Counterfactual
Visual Explanation
- URL: http://arxiv.org/abs/2108.04238v1
- Date: Sun, 8 Aug 2021 15:27:14 GMT
- Title: TDLS: A Top-Down Layer Searching Algorithm for Generating Counterfactual
Visual Explanation
- Authors: Cong Wang, Haocheng Han and Caleb Chen Cao
- Abstract summary: We adapt counterfactual explanation to the fine-grained image classification problem.
We show that our TDLS algorithm can provide more flexible counterfactual visual explanations.
Finally, we discuss several applicable scenarios of counterfactual visual explanations.
- Score: 4.4553061479339995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability of AI, along with the fairness of algorithmic decisions and
the transparency of the decision model, is becoming increasingly important, and it is
crucial to design effective and human-friendly techniques for opening the
black-box model. Counterfactual explanation conforms to the human way of thinking
and provides a human-friendly explanation; the corresponding explanation
algorithm performs a strategic alteration of a given data point so that its
model output is "counter-facted", i.e. the prediction is reversed. In this
paper, we adapt counterfactual explanation to the fine-grained image
classification problem. We demonstrate an adaptive method that gives a
counterfactual explanation by showing the composed counterfactual feature map
produced by a top-down layer searching algorithm (TDLS). We show that our TDLS
algorithm can provide more flexible counterfactual visual explanations in an
efficient way, using a VGG-16 model on the Caltech-UCSD Birds 200 dataset. Finally,
we discuss several applicable scenarios of counterfactual visual
explanations.
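The abstract describes the search only at a high level. The sketch below is a hypothetical reading of that recipe, in the spirit of counterfactual visual explanations: activations of a query image are replaced with those of a distractor image from the counter-class, one VGG-16 layer at a time from the top down, until the prediction flips. ImageNet weights stand in for the paper's CUB-200 fine-tuned model, the greedy cell-by-cell swap is an assumption rather than the authors' exact procedure, and all function names are illustrative.

```python
# A minimal, hypothetical sketch of top-down counterfactual feature-map search
# (not the authors' published implementation). Assumes `query` and `distractor`
# are preprocessed 1x3x224x224 tensors; the distractor belongs to target_class.
import torch
import torchvision.models as models

vgg = models.vgg16(weights="IMAGENET1K_V1").eval()  # stand-in; the paper fine-tunes on CUB-200
conv_idx = [i for i, m in enumerate(vgg.features) if isinstance(m, torch.nn.Conv2d)]

def features_up_to(x, k):
    """Forward pass through vgg.features[0..k]; returns the activation map."""
    for m in vgg.features[: k + 1]:
        x = m(x)
    return x

def classify_from(feat, k):
    """Finish the forward pass from a (possibly edited) activation at layer k."""
    x = feat
    for m in vgg.features[k + 1 :]:
        x = m(x)
    return vgg.classifier(torch.flatten(vgg.avgpool(x), 1))

@torch.no_grad()
def tdls_counterfactual(query, distractor, target_class):
    """Search conv layers from the top down; at each layer, greedily copy
    spatial cells of the distractor's feature map into the query's until the
    model predicts target_class. Returns (layer index, edited cells) or None."""
    for k in reversed(conv_idx):
        fq = features_up_to(query, k).clone()
        fd = features_up_to(distractor, k)
        h, w = fq.shape[-2:]
        edited = []
        for i in range(h):
            for j in range(w):
                fq[..., i, j] = fd[..., i, j]  # swap one spatial cell
                edited.append((i, j))
                if classify_from(fq, k).argmax(1).item() == target_class:
                    return k, edited  # composed counterfactual feature map
    return None
```

Under this reading, the edited cells at the returned layer form the composed counterfactual feature map; projecting their receptive fields back onto the input would yield the visual explanation.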
Related papers
- From Wrong To Right: A Recursive Approach Towards Vision-Language
Explanation [60.746079839840895]
We present ReVisE: a Recursive Visual Explanation algorithm.
Our method iteratively computes visual features (conditioned on the text input), an answer, and an explanation.
We find that this multi-step approach guides the model to correct its own answers and outperforms single-step explanation generation.
arXiv Detail & Related papers (2023-11-21T07:02:32Z)
- Transparent Anomaly Detection via Concept-based Explanations [4.3900160011634055]
We propose Transparent Anomaly Detection Concept Explanations (ACE) for anomaly detection.
ACE provides human interpretable explanations in the form of concepts along with anomaly prediction.
Our proposed model performs on par with or better than black-box uninterpretable models.
arXiv Detail & Related papers (2023-10-16T11:46:26Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- OCTET: Object-aware Counterfactual Explanations [29.532969342297086]
We propose an object-centric framework for counterfactual explanation generation.
Our method, inspired by recent generative modeling works, encodes the query image into a latent space that is structured to ease object-level manipulations.
We conduct a set of experiments on counterfactual explanation benchmarks for driving scenes, and we show that our method can be adapted beyond classification.
arXiv Detail & Related papers (2022-11-22T16:23:12Z)
- The Manifold Hypothesis for Gradient-Based Explanations [55.01671263121624]
Gradient-based explanation algorithms provide perceptually-aligned explanations.
We show that the more a feature attribution is aligned with the tangent space of the data, the more perceptually-aligned it tends to be.
We suggest that explanation algorithms should actively strive to align their explanations with the data manifold.
arXiv Detail & Related papers (2022-06-15T08:49:24Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, yielding better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Reinforcement Explanation Learning [4.852320309766702]
Black-box methods for generating saliency maps are particularly interesting because they do not use the internals of the model to explain its decisions.
We formulate saliency-map generation as a sequential search problem and leverage Reinforcement Learning (RL) to accumulate evidence from input images.
Experiments on three benchmark datasets demonstrate that the proposed approach is faster at inference than state-of-the-art methods without hurting performance.
arXiv Detail & Related papers (2021-11-26T10:20:01Z)
- A Meta-Learning Approach for Training Explainable Graph Neural Networks [10.11960004698409]
We propose a meta-learning framework for improving the level of explainability of a GNN directly at training time.
Our framework jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms.
Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process.
arXiv Detail & Related papers (2021-09-20T11:09:10Z)
- Causality-based Counterfactual Explanation for Classification Models [11.108866104714627]
We propose a prototype-based counterfactual explanation framework (ProCE).
ProCE is capable of preserving the causal relationships underlying the features of the counterfactual data.
In addition, we design a novel gradient-free optimization, based on a multi-objective genetic algorithm, that generates the counterfactual explanations.
arXiv Detail & Related papers (2021-05-03T09:25:59Z)
- Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? [97.77183117452235]
We carry out human subject tests to isolate the effect of algorithmic explanations on model interpretability.
Clear evidence of method effectiveness is found in very few cases.
Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability.
arXiv Detail & Related papers (2020-05-04T20:35:17Z)
- The data-driven physical-based equations discovery using evolutionary approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from given observational data.
The algorithm combines genetic programming with sparse regression (see the sketch after this list).
It can be used to discover governing analytical equations as well as partial differential equations (PDEs).
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
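The entry above pairs genetic programming, which evolves candidate terms, with sparse regression, which prunes their coefficients. As an illustration of the sparse-regression half only, here is a SINDy-style sequentially-thresholded least-squares sketch on a toy 1-D system; the fixed, hand-built candidate library stands in for terms an evolutionary search would normally propose, and nothing here is the paper's exact procedure.

```python
# A minimal sketch of the sparse-regression half of equation discovery,
# assuming a SINDy-style setup (candidate-term library plus sequentially
# thresholded least squares); the genetic-programming half is omitted.
import numpy as np

def stlsq(Theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: fit dxdt ~ Theta @ xi,
    zeroing coefficients below `threshold` and refitting each iteration."""
    xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy example: recover dx/dt = -2*x + 0.5*x**3 from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
dxdt = -2.0 * x + 0.5 * x**3 + 0.01 * rng.standard_normal(500)
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate library
print(stlsq(Theta, dxdt))  # approximately [0, -2, 0, 0.5]
```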