Sampling Based On Natural Image Statistics Improves Local Surrogate
Explainers
- URL: http://arxiv.org/abs/2208.03961v1
- Date: Mon, 8 Aug 2022 08:10:13 GMT
- Title: Sampling Based On Natural Image Statistics Improves Local Surrogate
Explainers
- Authors: Ricardo Kleinlein, Alexander Hepburn, Ra\'ul Santos-Rodr\'iguez and
Fernando Fern\'andez-Mart\'inez
- Abstract summary: Surrogate explainers are a popular post-hoc interpretability method to further understand how a model arrives at a prediction.
We propose two approaches to do so, namely (1) altering the method for sampling the local neighbourhood and (2) using perceptual metrics to convey some of the properties of the distribution of natural images.
- Score: 111.31448606885672
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many problems in computer vision have recently been tackled using models
whose predictions cannot be easily interpreted, most commonly deep neural
networks. Surrogate explainers are a popular post-hoc interpretability method
to further understand how a model arrives at a particular prediction. By
training a simple, more interpretable model to locally approximate the decision
boundary of a non-interpretable system, we can estimate the relative importance
of the input features on the prediction. Focusing on images, surrogate
explainers, e.g., LIME, generate a local neighbourhood around a query image by
sampling in an interpretable domain. However, these interpretable domains have
traditionally been derived exclusively from the intrinsic features of the query
image, not taking into consideration the manifold of the data the
non-interpretable model has been exposed to in training (or more generally, the
manifold of real images). This leads to suboptimal surrogates trained on
potentially low probability images. We address this limitation by aligning the
local neighbourhood on which the surrogate is trained with the original
training data distribution, even when this distribution is not accessible. We
propose two approaches to do so, namely (1) altering the method for sampling
the local neighbourhood and (2) using perceptual metrics to convey some of the
properties of the distribution of natural images.
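To make the two proposals concrete, the sketch below shows a minimal LIME-style surrogate explainer for images. It is a hypothetical illustration, not the paper's implementation: the function and variable names, the i.i.d. Bernoulli sampling, the SSIM-based kernel and the hyperparameters are all assumptions, and the comments only mark where proposal (1) (neighbourhood sampling) and proposal (2) (perceptual weighting) would plug in.

```python
# Minimal sketch of a LIME-style surrogate explainer for images (hypothetical,
# not the paper's reference implementation). Names, sampling scheme, SSIM kernel
# and hyperparameters are illustrative assumptions.
import numpy as np
from skimage.segmentation import slic
from skimage.metrics import structural_similarity  # scikit-image >= 0.19 for channel_axis
from sklearn.linear_model import Ridge


def surrogate_explanation(image, predict_fn, n_samples=500, kernel_width=0.25, seed=0):
    """image: float array in [0, 1] of shape (H, W, 3); predict_fn returns the
    probability of the class being explained. Returns per-superpixel weights."""
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=50, start_label=0)  # interpretable domain (superpixels)
    n_segments = segments.max() + 1

    # (1) Sampling the local neighbourhood: plain i.i.d. Bernoulli masks here.
    # The paper proposes altering this step so that the perturbed samples stay
    # closer to the manifold of natural images.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    masks[0, :] = 1  # keep the unperturbed query image in the neighbourhood

    baseline = image.mean(axis=(0, 1))  # grey-out value for removed superpixels
    perturbed_preds, weights = [], []
    for mask in masks:
        im = image.copy()
        im[~np.isin(segments, np.flatnonzero(mask))] = baseline
        perturbed_preds.append(predict_fn(im))  # query the black-box model

        # (2) Perceptual weighting: weight each sample by a perceptual similarity
        # (SSIM here) to the query image instead of a distance on the binary mask,
        # so low-probability images have less influence on the surrogate.
        sim = structural_similarity(image, im, channel_axis=-1, data_range=1.0)
        weights.append(np.exp(-((1.0 - sim) ** 2) / kernel_width ** 2))

    # Fit the interpretable surrogate: a weighted linear model on the binary masks.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, np.asarray(perturbed_preds), sample_weight=np.asarray(weights))
    return surrogate.coef_  # relative importance of each superpixel
```

In practice, predict_fn would wrap the non-interpretable classifier and return the score of the class being explained; the returned coefficients then rank superpixels by their estimated contribution to that prediction.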
Related papers
- Decoding Diffusion: A Scalable Framework for Unsupervised Analysis of Latent Space Biases and Representations Using Natural Language Prompts [68.48103545146127]
This paper proposes a novel framework for unsupervised exploration of diffusion latent spaces.
We directly leverage natural language prompts and image captions to map latent directions.
Our method provides a more scalable and interpretable understanding of the semantic knowledge encoded within diffusion models.
arXiv Detail & Related papers (2024-10-25T21:44:51Z)
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Mitigating Bias Using Model-Agnostic Data Attribution [2.9868610316099335]
Mitigating bias in machine learning models is a critical endeavor for ensuring fairness and equity.
We propose a novel approach to address bias by leveraging pixel image attributions to identify and regularize regions of images containing bias attributes.
arXiv Detail & Related papers (2024-05-08T13:00:56Z)
- Exploiting Diffusion Prior for Generalizable Dense Prediction [85.4563592053464]
Content generated by recent advanced Text-to-Image (T2I) diffusion models is sometimes too imaginative for existing off-the-shelf dense predictors to estimate.
We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks.
Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-30T18:59:44Z)
- Convolutional Cross-View Pose Estimation [9.599356978682108]
We propose a novel end-to-end method for cross-view pose estimation.
Our method is validated on the VIGOR and KITTI datasets.
On the Oxford RobotCar dataset, our method can reliably estimate the ego-vehicle's pose over time.
arXiv Detail & Related papers (2023-03-09T13:52:28Z)
- CRADL: Contrastive Representations for Unsupervised Anomaly Detection and Localization [2.8659934481869715]
Unsupervised anomaly detection in medical imaging aims to detect and localize arbitrary anomalies without requiring anomalous data during training.
Most current state-of-the-art methods use latent variable generative models operating directly on the images.
We propose CRADL, whose core idea is to model the distribution of normal samples directly in the low-dimensional representation space of an encoder trained with a contrastive pretext task.
arXiv Detail & Related papers (2023-01-05T16:07:49Z)
- Visual Recognition with Deep Learning from Biased Image Datasets [6.10183951877597]
We show how biasing models can be applied to remedy problems in the context of visual recognition.
Based on the (approximate) knowledge of the biasing mechanisms at work, our approach consists in reweighting the observations.
We propose to use a low dimensional image representation, shared across the image databases.
arXiv Detail & Related papers (2021-09-06T10:56:58Z)
- Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception [77.34726150561087]
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the ImageNet-C dataset and demonstrate how using perceptual distances in the surrogate explainer creates more coherent explanations for the distorted and reference images.
arXiv Detail & Related papers (2021-02-22T12:38:53Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Improving Explainability of Image Classification in Scenarios with Class Overlap: Application to COVID-19 and Pneumonia [7.372797734096181]
Trust in predictions made by machine learning models is increased if the model generalizes well on previously unseen samples.
We propose a method that enhances the explainability of image classifications through better localization by mitigating the model uncertainty induced by class overlap.
Our method is particularly promising in real-world class overlap scenarios, such as COVID-19 and pneumonia, where expertly labeled data for localization is not readily available.
arXiv Detail & Related papers (2020-08-06T20:47:36Z)