A Generic and Model-Agnostic Exemplar Synthetization Framework for
Explainable AI
- URL: http://arxiv.org/abs/2006.03896v3
- Date: Tue, 4 Aug 2020 17:05:45 GMT
- Title: A Generic and Model-Agnostic Exemplar Synthetization Framework for
Explainable AI
- Authors: Antonio Barbalau, Adrian Cosma, Radu Tudor Ionescu and Marius Popescu
- Abstract summary: We focus on explainable AI and propose a novel generic and model-agnostic framework for synthesizing input exemplars.
We use a generative model, which acts as a prior for generating data, and traverse its latent space using a novel evolutionary strategy.
Our framework is model-agnostic, in the sense that the machine learning model that we aim to explain is a black-box.
- Score: 29.243901669124515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing complexity of deep learning methods adopted in practical
applications, there is an increasing and stringent need to explain and
interpret the decisions of such methods. In this work, we focus on explainable
AI and propose a novel generic and model-agnostic framework for synthesizing
input exemplars that maximize a desired response from a machine learning model.
To this end, we use a generative model, which acts as a prior for generating
data, and traverse its latent space using a novel evolutionary strategy with
momentum updates. Our framework is generic because (i) it can employ any
underlying generator, e.g. Variational Auto-Encoders (VAEs) or Generative
Adversarial Networks (GANs), and (ii) it can be applied to any input data, e.g.
images, text samples or tabular data. Since we use a zero-order optimization
method, our framework is model-agnostic, in the sense that the machine learning
model that we aim to explain is a black-box. We stress out that our novel
framework does not require access or knowledge of the internal structure or the
training data of the black-box model. We conduct experiments with two
generative models, VAEs and GANs, and synthesize exemplars for various data
formats, image, text and tabular, demonstrating that our framework is generic.
We also employ our prototype synthetization framework on various black-box
models, for which we only know the input and the output formats, showing that
it is model-agnostic. Moreover, we compare our framework (available at
https://github.com/antoniobarbalau/exemplar) with a model-dependent approach
based on gradient descent, showing that our framework obtains equally good
exemplars in a shorter computational time.
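The latent-space search described in the abstract can be sketched as a simple evolutionary strategy with momentum updates. The sketch below uses toy stand-ins for the generator and the black-box model (the real implementation is in the linked repository); function names and hyper-parameters here are illustrative assumptions, not the authors' API. Note that the search only queries the black box's output score, so no gradients of the model are needed.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8

def generator(z):
    # Toy stand-in for a pretrained VAE/GAN decoder: latent vector -> sample.
    return np.tanh(z)

def black_box(x):
    # Toy stand-in for the model to explain; only its output score is observed,
    # which makes the search below a zero-order optimization.
    return float(-np.sum((x - 0.5) ** 2))

def synthesize_exemplar(pop_size=32, elite=8, steps=60, sigma=0.3, beta=0.5):
    center = rng.normal(size=LATENT_DIM)   # current latent estimate
    velocity = np.zeros(LATENT_DIM)        # momentum term
    for _ in range(steps):
        # Sample a population of latent vectors around the current center.
        pop = center + sigma * rng.normal(size=(pop_size, LATENT_DIM))
        scores = np.array([black_box(generator(z)) for z in pop])
        # Average the best-scoring latents and take a momentum step toward them.
        elite_mean = pop[np.argsort(scores)[-elite:]].mean(axis=0)
        velocity = beta * velocity + (elite_mean - center)
        center = center + velocity
    return generator(center)

exemplar = synthesize_exemplar()
```

With a real decoder and classifier, `black_box` would wrap a forward pass of the model to explain, and the returned exemplar would be an input that maximizes the desired response.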
Related papers
- Knowledge Fusion By Evolving Weights of Language Models [5.354527640064584]
This paper examines the approach of integrating multiple models into a unified model.
We propose a knowledge fusion method named Evolver, inspired by evolutionary algorithms.
arXiv Detail & Related papers (2024-06-18T02:12:34Z)
- Machine Unlearning for Image-to-Image Generative Models [18.952634119351465]
This paper provides a unifying framework for machine unlearning for image-to-image generative models.
We propose a computationally-efficient algorithm, underpinned by rigorous theoretical analysis, that demonstrates negligible performance degradation on the retain samples.
Empirical studies on two large-scale datasets, ImageNet-1K and Places-365, further show that our algorithm does not rely on the availability of the retain samples.
arXiv Detail & Related papers (2024-02-01T05:35:25Z)
- Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
arXiv Detail & Related papers (2023-10-23T04:35:58Z)
- Sampling - Variational Auto Encoder - Ensemble: In the Quest of Explainable Artificial Intelligence [0.0]
This paper contributes to the discourse on XAI by presenting an empirical evaluation based on a novel framework.
It is a hybrid architecture in which a VAE is combined with ensemble stacking and SHapley Additive exPlanations (SHAP) for imbalanced classification.
The findings reveal that combining ensemble stacking, VAE, and SHAP can not only lead to better model performance but also provide an easily explainable framework.
arXiv Detail & Related papers (2023-09-25T02:46:19Z)
- TSGM: A Flexible Framework for Generative Modeling of Synthetic Time Series [61.436361263605114]
Time series data are often scarce or highly sensitive, which precludes the sharing of data between researchers and industrial organizations.
We introduce Time Series Generative Modeling (TSGM), an open-source framework for the generative modeling of synthetic time series.
arXiv Detail & Related papers (2023-05-19T10:11:21Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Re-parameterizing Your Optimizers rather than Architectures [119.08740698936633]
We propose a novel paradigm of incorporating model-specific prior knowledge into optimizers and using them to train generic (simple) models.
As an implementation, we propose a novel methodology to add prior knowledge by modifying the gradients according to a set of model-specific hyper-parameters.
Focusing on a VGG-style plain model, we showcase that such a simple model trained with a re-parameterized optimizer, referred to as RepOpt-VGG, performs on par with recent well-designed models.
arXiv Detail & Related papers (2022-05-30T16:55:59Z)
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images to the latent space of a high quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
- Design of Dynamic Experiments for Black-Box Model Discrimination [72.2414939419588]
Consider a dynamic model discrimination setting where we wish to choose: (i) the best mechanistic, time-varying model and (ii) the best model parameter estimates.
For rival mechanistic models where we have access to gradient information, we extend existing methods to incorporate a wider range of problem uncertainty.
We replace these black-box models with Gaussian process surrogate models and thereby extend the model discrimination setting to additionally incorporate rival black-box models.
arXiv Detail & Related papers (2021-02-07T11:34:39Z)
- Conditional Generative Models for Counterfactual Explanations [0.0]
We propose a general framework to generate sparse, in-distribution counterfactual model explanations.
The framework is flexible with respect to the type of generative model used as well as the task of the underlying predictive model.
arXiv Detail & Related papers (2021-01-25T14:31:13Z)
- LIMEADE: From AI Explanations to Advice Taking [34.581205516506614]
We introduce LIMEADE, the first framework that translates both positive and negative advice into an update to an arbitrary, underlying opaque model.
We show our method improves accuracy compared to a rigorous baseline on image classification domains.
For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website.
arXiv Detail & Related papers (2020-03-09T18:00:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.