Conditional Generative Models for Counterfactual Explanations
- URL: http://arxiv.org/abs/2101.10123v1
- Date: Mon, 25 Jan 2021 14:31:13 GMT
- Title: Conditional Generative Models for Counterfactual Explanations
- Authors: Arnaud Van Looveren, Janis Klaise, Giovanni Vacanti, Oliver Cobb
- Abstract summary: We propose a general framework to generate sparse, in-distribution counterfactual model explanations.
The framework is flexible with respect to the type of generative model used as well as the task of the underlying predictive model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactual instances offer human-interpretable insight into the local
behaviour of machine learning models. We propose a general framework to
generate sparse, in-distribution counterfactual model explanations which match
a desired target prediction with a conditional generative model, allowing
batches of counterfactual instances to be generated with a single forward pass.
The method is flexible with respect to the type of generative model used as
well as the task of the underlying predictive model. This allows
straightforward application of the framework to different modalities such as
images, time series or tabular data as well as generative model paradigms such
as GANs or autoencoders and predictive tasks like classification or regression.
We illustrate the effectiveness of our method on image (CelebA), time series
(ECG) and mixed-type tabular (Adult Census) data.
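For concreteness, a minimal training-loss sketch in a PyTorch style. The `generator`, `classifier`, and `autoencoder` modules and the loss weights are illustrative assumptions, not the authors' exact objective; the point is that counterfactuals for a whole batch come from a single forward pass of the conditional generator.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch (not the authors' exact objective): a conditional
# generator maps an input and a desired target class to a sparse
# perturbation, so a batch of counterfactuals needs one forward pass.

def counterfactual_loss(generator, classifier, autoencoder, x, y_target,
                        w_sparse=1.0, w_dist=1.0):
    delta = generator(x, y_target)          # conditional perturbation
    x_cf = x + delta                        # candidate counterfactuals
    # 1) match the desired target prediction
    pred_loss = F.cross_entropy(classifier(x_cf), y_target)
    # 2) sparsity: keep the perturbation small
    sparsity = delta.abs().mean()
    # 3) in-distribution: stay close to the data manifold, proxied here
    #    by an autoencoder's reconstruction error (one common choice)
    in_dist = F.mse_loss(autoencoder(x_cf), x_cf)
    return pred_loss + w_sparse * sparsity + w_dist * in_dist

# Once trained, counterfactuals come from a single forward pass:
# x_cf = x + generator(x, y_target)
```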
Related papers
- Embedding-based statistical inference on generative models [10.948308354932639]
We extend results related to embedding-based representations of generative models to classical statistical inference settings.
We demonstrate that using the perspective space as the basis of a notion of "similar" is effective for multiple model-level inference tasks.
arXiv Detail & Related papers (2024-10-01T22:28:39Z)
- Consistent estimation of generative model representations in the data kernel perspective space [13.099029073152257]
Generative models, such as large language models and text-to-image diffusion models, produce relevant information when presented with a query.
Different models may produce different information when presented with the same query.
We present novel theoretical results for embedding-based representations of generative models in the context of a set of queries.
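As a toy illustration of the query-based representation (the `models` and `embed` callables are placeholders, not the paper's construction):

```python
import numpy as np

# Illustrative sketch: represent each generative model by embedding its
# responses to a shared query set, then compare models in that space.
# `model` (callable: query -> text) and `embed` (text -> vector) are
# placeholders for whatever generators and embedder are available.

def perspective(model, queries, embed):
    """Stack embedded responses into one matrix per model."""
    return np.stack([embed(model(q)) for q in queries])

def model_distance(p_a, p_b):
    """Per-query embedding distance, averaged over the query set."""
    return float(np.linalg.norm(p_a - p_b, axis=1).mean())

# Models whose distance is small answer the query set "similarly",
# which is the intuition behind model-level inference in this space.
```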
arXiv Detail & Related papers (2024-09-25T19:35:58Z)
- Representer Point Selection for Explaining Regularized High-dimensional Models [105.75758452952357]
We introduce a class of sample-based explanations we term high-dimensional representers.
Our workhorse is a novel representer theorem for general regularized high-dimensional models.
We study the empirical performance of our proposed methods on three real-world binary classification datasets and two recommender system datasets.
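For intuition only, the classical representer decomposition for kernel ridge regression; the paper's theorem generalizes well beyond this setting:

```python
import numpy as np

# Intuition-only sketch with kernel ridge regression: the fitted
# prediction decomposes as f(x) = sum_i alpha_i k(x_i, x), so each
# training point's term alpha_i * k(x_i, x) is its "representer" score.

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def representer_scores(X_train, y_train, x_test, lam=1e-2):
    K = rbf(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    k_test = rbf(X_train, x_test[None, :])[:, 0]
    return alpha * k_test   # one contribution per training sample

# Large positive or negative scores flag the training samples that push
# the prediction at x_test up or down the most.
```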
arXiv Detail & Related papers (2023-05-31T16:23:58Z)
- PAMI: partition input and aggregate outputs for model interpretation [69.42924964776766]
In this study, a simple yet effective visualization framework called PAMI is proposed, based on the observation that deep learning models often aggregate features from local regions when making predictions.
The basic idea is to mask the majority of the input and use the corresponding model output as the relative contribution of the preserved part to the original prediction.
Extensive experiments on multiple tasks confirm that the proposed method outperforms existing visualization approaches in more precisely locating class-specific input regions.
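For intuition, a simplified sketch of the mask-and-measure idea; the partitioning and aggregation details here are placeholders, not PAMI's exact procedure:

```python
import numpy as np

# Simplified sketch of the mask-and-measure idea: keep one patch of the
# input, mask everything else, and treat the model's output for the
# target class as that patch's contribution. `model` is any callable
# mapping an image to class probabilities (a placeholder here).

def pami_heatmap(model, image, target_class, patch=16):
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            masked = np.zeros_like(image)
            ys, xs = i * patch, j * patch
            masked[ys:ys + patch, xs:xs + patch] = \
                image[ys:ys + patch, xs:xs + patch]
            heat[i, j] = model(masked)[target_class]
    return heat  # aggregate per-patch outputs into a saliency map
```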
arXiv Detail & Related papers (2023-02-07T08:48:34Z)
- DiffusER: Discrete Diffusion via Edit-based Reconstruction [88.62707047517914]
DiffusER is an edit-based generative model for text built on denoising diffusion models.
It can rival autoregressive models on several tasks spanning machine translation, summarization, and style transfer.
It can also perform other varieties of generation that standard autoregressive models are not well-suited for.
arXiv Detail & Related papers (2022-10-30T16:55:23Z)
- Learning Consistent Deep Generative Models from Sparse Data via Prediction Constraints [16.48824312904122]
We develop a new framework for learning variational autoencoders and other deep generative models.
We show that the framework's two key ingredients -- prediction constraints and consistency constraints -- lead to promising image classification performance.
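A hedged sketch of the prediction-constraint idea applied to a VAE objective; the weight and the exact form of the constraint are illustrative, and the paper's consistency constraints are omitted here:

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the general idea: augment a VAE objective with a
# prediction constraint so latents must also support the label task.
# The weight w_pred and the constraint form are illustrative choices.

def pc_vae_loss(encoder, decoder, predictor, x, y, w_pred=10.0):
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
    recon = F.mse_loss(decoder(z), x)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    pred = F.cross_entropy(predictor(z), y)   # prediction constraint
    return recon + kl + w_pred * pred
```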
arXiv Detail & Related papers (2020-12-12T04:18:50Z)
- Predictive process mining by network of classifiers and clusterers: the PEDF model [0.0]
The PEDF model learns from event sequences, durations, and extra features.
The model requires extracting two sets of data from log files.
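A hypothetical sketch of that extraction step; the log schema (case_id, activity, timestamp columns) is an assumption, not the PEDF specification:

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical preprocessing sketch (assumed CSV log schema, not the
# PEDF specification): extract the two kinds of data the abstract
# mentions, per-case event sequences and per-event durations.

def extract(log_path, fmt="%Y-%m-%d %H:%M:%S"):
    events = defaultdict(list)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            events[row["case_id"]].append(
                (datetime.strptime(row["timestamp"], fmt), row["activity"]))
    sequences, durations = {}, {}
    for case, evs in events.items():
        evs.sort()  # chronological order within each case
        sequences[case] = [a for _, a in evs]
        durations[case] = [(t2 - t1).total_seconds()
                           for (t1, _), (t2, _) in zip(evs, evs[1:])]
    return sequences, durations
```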
arXiv Detail & Related papers (2020-11-22T23:27:19Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
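One way to read the infinite-horizon idea is as a geometrically weighted bootstrap target; the sketch below is schematic, not the paper's exact training procedure, and `model.sample` and `policy` are placeholders:

```python
import random

# Schematic sketch of a geometric-horizon target: with probability
# 1 - gamma the target is the observed next state; with probability
# gamma we bootstrap another step from the model itself, so predictions
# cover an infinite, discounted horizon.

def gamma_target(model, s_next, policy, gamma=0.99):
    if random.random() < 1.0 - gamma:
        return s_next                       # terminate: single-step target
    a_next = policy(s_next)
    return model.sample(s_next, a_next)     # bootstrap: model's own sample

# Compounding-error tradeoff: larger gamma shifts errors from test time
# (fewer rollout steps needed) to training time (more bootstrapping).
```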
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of control as hybrid inference (CHI) which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
- Pattern Similarity-based Machine Learning Methods for Mid-term Load Forecasting: A Comparative Study [0.0]
We apply pattern similarity-based methods to forecasting monthly electricity demand that exhibits annual seasonality.
An integral part of these models is the representation of the time series by patterns extracted from its sequences.
We consider four such models: the nearest neighbor model, the fuzzy neighborhood model, the kernel regression model, and the general regression neural network.
arXiv Detail & Related papers (2020-03-03T12:14:36Z)
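A hedged sketch of the nearest-neighbor variant; the normalization and neighbor count are illustrative choices rather than the paper's exact setup:

```python
import numpy as np

# Hedged sketch of pattern-similarity forecasting: encode each year of
# monthly demand as a normalized 12-dimensional pattern, find the most
# similar historical patterns, and average the years that followed them.

def to_pattern(year):
    year = np.asarray(year, dtype=float)
    return (year - year.mean()) / year.std()

def forecast_next_year(history, k=3):
    """history: list of 12-month demand vectors, oldest first."""
    patterns = [to_pattern(y) for y in history]
    query = patterns[-1]
    # distance from the latest year to every earlier year with a successor
    dists = [np.linalg.norm(query - p) for p in patterns[:-1]]
    nearest = np.argsort(dists)[:k]
    # average the (raw) years that followed the nearest patterns
    return np.mean([history[i + 1] for i in nearest], axis=0)
```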
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.