LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations
- URL: http://arxiv.org/abs/2111.08094v1
- Date: Mon, 15 Nov 2021 21:40:34 GMT
- Title: LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations
- Authors: Weronika Hryniewska, Adrianna Grudzień, Przemysław Biecek
- Abstract summary: LIMEcraft allows a user to interactively select semantically consistent areas and thoroughly examine the prediction for the image instance.
Our method improves model safety by inspecting model fairness for image pieces that may indicate model bias.
- Score: 3.0036519884678894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing interest in deep learning applications, together with the
hard-to-detect biases of such models, creates a need to validate and explain
complex models. However, current explanation methods are limited in how they
convey both the reasoning process and the prediction results: they usually only
show the location in the image that was important for the model's prediction.
Because users cannot interact with the explanations, it is difficult to verify
and understand exactly how the model works, which creates a significant risk
when the model is used. This risk is compounded by the fact that explanations
do not take into account the semantic meaning of the explained objects. To
escape the trap of static explanations, we propose LIMEcraft, an approach that
allows a user to interactively select semantically consistent areas and
thoroughly examine the prediction for an image instance with many image
features. Experiments on several models showed that our method improves model
safety by supporting the inspection of model fairness for image pieces that may
indicate model bias. The code is available at:
http://github.com/MI2DataLab/LIMEcraft
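The dashboard described in the paper is interactive, but the core idea, a LIME-style explanation computed over user-selected, semantically consistent regions rather than automatically generated superpixels, can be approximated with the standard lime package by supplying a custom segmentation function. The sketch below is illustrative only and is not LIMEcraft's own API; the toy image, the hand-drawn region mask, and the dummy classifier are placeholders for a real image, a user-drawn mask, and a real model.

```python
# Minimal sketch (not LIMEcraft's API): LIME-style explanation over
# hand-crafted regions instead of automatic superpixels.
import numpy as np
from lime import lime_image

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))                 # stand-in for the image to explain

# User-drawn mask: 0 = background, 1..K = semantically consistent areas
# (e.g. "lesion", "face", "text"); here just two rectangles for illustration.
user_regions = np.zeros((64, 64), dtype=int)
user_regions[10:30, 10:30] = 1
user_regions[35:55, 35:55] = 2

def classifier_fn(batch: np.ndarray) -> np.ndarray:
    # Placeholder for a real model's batched predict: two-class probabilities
    # that depend on the brightness of the upper-left region.
    score = batch[:, 10:30, 10:30, :].mean(axis=(1, 2, 3))
    return np.stack([1 - score, score], axis=1)

def handcrafted_segmentation(img: np.ndarray) -> np.ndarray:
    # LIME perturbs the image region by region; here the regions come from
    # the user-drawn mask rather than quickshift/SLIC superpixels.
    return user_regions

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    segmentation_fn=handcrafted_segmentation,   # hook for custom regions
    top_labels=2,
    hide_color=0,
    num_samples=500,
)
# Weight of each semantic region for the top predicted label: inspecting how
# much each user-selected area contributes is the kind of per-region check
# LIMEcraft supports interactively.
label = explanation.top_labels[0]
print(dict(explanation.local_exp[label]))
```

Passing segmentation_fn is the standard lime hook for custom superpixels; LIMEcraft's interface goes further by letting the user refine the selected areas and examine how the prediction responds.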
Related papers
- OCTET: Object-aware Counterfactual Explanations [29.532969342297086]
We propose an object-centric framework for counterfactual explanation generation.
Our method, inspired by recent generative modeling works, encodes the query image into a latent space that is structured to ease object-level manipulations.
We conduct a set of experiments on counterfactual explanation benchmarks for driving scenes, and we show that our method can be adapted beyond classification.
arXiv Detail & Related papers (2022-11-22T16:23:12Z)
- Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z)
- SLISEMAP: Explainable Dimensionality Reduction [0.0]
Existing explanation methods for black-box supervised learning models generally work by building local models that explain the model's behaviour for a particular data item.
We propose a new manifold visualization method, SLISEMAP, that finds local explanations for all of the data items and builds a two-dimensional visualization of model space.
We show that SLISEMAP provides fast and stable visualizations that can be used to explain and understand black box regression and classification models.
arXiv Detail & Related papers (2022-01-12T13:06:21Z)
- Global explainability in aligned image modalities [0.0]
We focus on image modalities that are naturally aligned such that each pixel position represents a similar relative position on the imaged object.
We propose the pixel-wise aggregation of image-wise explanations as a simple method to obtain label-wise and overall global explanations (a minimal sketch of this aggregation appears after this list).
We then apply these methods to ultra-widefield retinal images, a naturally aligned modality.
arXiv Detail & Related papers (2021-12-17T16:05:11Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Visualising Deep Network's Time-Series Representations [93.73198973454944]
Despite the popularisation of machine learning models, more often than not they still operate as black boxes with no insight into what is happening inside the model.
In this paper, a method that addresses that issue is proposed, with a focus on visualising multi-dimensional time-series data.
Experiments on a high-frequency stock market dataset show that the method provides fast and discernible visualisations.
arXiv Detail & Related papers (2021-03-12T09:53:34Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- Towards Visually Explaining Similarity Models [29.704524987493766]
We present a method to generate gradient-based visual attention for image similarity predictors.
By relying solely on the learned feature embedding, we show that our approach can be applied to any kind of CNN-based similarity architecture.
We show that our resulting attention maps serve more than just interpretability; they can be infused into the model learning process itself with new trainable constraints.
arXiv Detail & Related papers (2020-08-13T17:47:41Z)
- Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [49.254162397086006]
We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
arXiv Detail & Related papers (2020-07-23T20:39:40Z)
- How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking [70.92463223410225]
DiffMask learns to mask-out subsets of the input while maintaining differentiability.
The decision to include or disregard an input token is made with a simple model based on intermediate hidden layers.
This lets us not only plot attribution heatmaps but also analyze how decisions are formed across network layers.
arXiv Detail & Related papers (2020-04-30T17:36:14Z)
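The last entry describes a concrete mechanism: gates predicted from intermediate hidden layers decide which input tokens to mask while keeping the model's output approximately unchanged. Below is a much-simplified sketch of that idea, not the paper's actual implementation: it uses a plain sigmoid gate and a fixed sparsity weight (the original work uses a stochastic gate and a constrained objective), and the toy GRU classifier exists only to keep the example self-contained.

```python
# Simplified differentiable-masking sketch: learn per-token keep gates from
# the hidden states of a frozen model, trading faithfulness for sparsity.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class ToyClassifier(nn.Module):
    """Frozen 'black box' to be explained: a toy sequence classifier."""
    def __init__(self, dim=16, hidden=32, n_classes=3):
        super().__init__()
        self.enc = nn.GRU(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, seq, dim)
        h, _ = self.enc(x)                       # h: (batch, seq, hidden)
        return self.out(h.mean(dim=1)), h        # logits and hidden states

class GatePredictor(nn.Module):
    """Predicts a keep-probability per token from the classifier's hidden states."""
    def __init__(self, hidden=32):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(hidden, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, h):                        # h: (batch, seq, hidden)
        return torch.sigmoid(self.scorer(h)).squeeze(-1)   # gates in (0, 1)

model = ToyClassifier().eval()
for p in model.parameters():
    p.requires_grad_(False)

gate = GatePredictor()
baseline = nn.Parameter(torch.zeros(16))         # learned stand-in for masked tokens
opt = torch.optim.Adam(list(gate.parameters()) + [baseline], lr=1e-2)

x = torch.randn(8, 10, 16)                       # a batch of inputs to explain
with torch.no_grad():
    orig_logits, hidden = model(x)               # hidden layers of the original input

for _ in range(200):
    keep = gate(hidden).unsqueeze(-1)            # differentiable per-token gates
    masked_x = keep * x + (1 - keep) * baseline
    logits, _ = model(masked_x)
    # Keep the masked prediction close to the original while masking out as
    # many tokens as possible (the mean gate value acts as a sparsity penalty).
    loss = F.kl_div(F.log_softmax(logits, dim=-1),
                    F.softmax(orig_logits, dim=-1),
                    reduction="batchmean") + 0.1 * keep.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned gates serve as an attribution heatmap over input tokens.
print(gate(hidden)[0].detach())
```

Plotting the learned gate values gives the attribution heatmap; repeating the procedure with hidden states taken from different layers is what enables the layer-wise analysis mentioned in the summary above.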
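For the "Global explainability in aligned image modalities" entry above, the described aggregation is simple enough to sketch directly: because the images are pixel-aligned, image-wise saliency maps can be averaged pixel by pixel, per predicted label and over the whole dataset, to obtain label-wise and overall global explanations. The compute_saliency argument below is a stand-in for whatever local attribution method produces the image-wise explanations; it is an assumption, not something specified in that abstract.

```python
# Minimal sketch of pixel-wise aggregation of image-wise explanations for a
# naturally aligned image modality.
import numpy as np
from collections import defaultdict

def aggregate_explanations(images, labels, compute_saliency):
    """images: iterable of aligned HxW(xC) arrays; labels: predicted label per image;
    compute_saliency: any local attribution method returning an HxW map."""
    per_label_sum = defaultdict(lambda: 0.0)
    per_label_count = defaultdict(int)
    overall_sum, overall_count = 0.0, 0

    for img, label in zip(images, labels):
        sal = compute_saliency(img)              # HxW saliency map for this image
        per_label_sum[label] = per_label_sum[label] + sal
        per_label_count[label] += 1
        overall_sum = overall_sum + sal
        overall_count += 1

    # Mean pixel importance per label and across the whole dataset.
    label_wise = {lab: per_label_sum[lab] / per_label_count[lab] for lab in per_label_sum}
    overall = overall_sum / overall_count
    return label_wise, overall
```

The returned arrays keep the height and width of the input images, so they can be displayed directly as label-wise and overall global heatmaps.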
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.