A Generic and Model-Agnostic Exemplar Synthetization Framework for
Explainable AI
- URL: http://arxiv.org/abs/2006.03896v3
- Date: Tue, 4 Aug 2020 17:05:45 GMT
- Title: A Generic and Model-Agnostic Exemplar Synthetization Framework for
Explainable AI
- Authors: Antonio Barbalau, Adrian Cosma, Radu Tudor Ionescu and Marius Popescu
- Abstract summary: We focus on explainable AI and propose a novel generic and model-agnostic framework for synthesizing input exemplars.
We use a generative model, which acts as a prior for generating data, and traverse its latent space using a novel evolutionary strategy.
Our framework is model-agnostic, in the sense that the machine learning model that we aim to explain is a black-box.
- Score: 29.243901669124515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing complexity of deep learning methods adopted in practical
applications, there is an increasing and stringent need to explain and
interpret the decisions of such methods. In this work, we focus on explainable
AI and propose a novel generic and model-agnostic framework for synthesizing
input exemplars that maximize a desired response from a machine learning model.
To this end, we use a generative model, which acts as a prior for generating
data, and traverse its latent space using a novel evolutionary strategy with
momentum updates. Our framework is generic because (i) it can employ any
underlying generator, e.g. Variational Auto-Encoders (VAEs) or Generative
Adversarial Networks (GANs), and (ii) it can be applied to any input data, e.g.
images, text samples or tabular data. Since we use a zero-order optimization
method, our framework is model-agnostic, in the sense that the machine learning
model that we aim to explain is a black-box. We stress that our novel
framework does not require access to or knowledge of the internal structure or the
training data of the black-box model. We conduct experiments with two
generative models, VAEs and GANs, and synthesize exemplars for various data
formats (image, text and tabular), demonstrating that our framework is generic.
We also employ our prototype synthetization framework on various black-box
models, for which we only know the input and the output formats, showing that
it is model-agnostic. Moreover, we compare our framework (available at
https://github.com/antoniobarbalau/exemplar) with a model-dependent approach
based on gradient descent, proving that our framework obtains equally-good
exemplars in a shorter computational time.
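The loop described in the abstract can be pictured with the minimal sketch below. It assumes a pretrained generator exposed as decode(z) (e.g. a VAE decoder or GAN generator) and a query-only black box exposed as predict_proba(x); the fitness-weighted update with momentum is an illustrative stand-in for the paper's evolutionary strategy, and all hyper-parameters are invented for the example.
```python
import numpy as np

def synthesize_exemplar(decode, predict_proba, target_class,
                        latent_dim=64, pop_size=32, steps=200,
                        sigma=0.5, lr=0.1, beta=0.9, seed=0):
    """Search the generator's latent space for an input that maximizes the
    black-box model's response for `target_class` (zero-order: the black box
    is only queried, never differentiated)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(latent_dim)     # current latent estimate
    velocity = np.zeros(latent_dim)         # momentum buffer

    for _ in range(steps):
        # Sample a population of perturbed latent codes around z.
        noise = rng.standard_normal((pop_size, latent_dim))
        candidates = z + sigma * noise

        # Fitness = black-box confidence for the target class on decoded samples.
        fitness = predict_proba(decode(candidates))[:, target_class]

        # A fitness-weighted average of the noise gives an ascent direction;
        # the momentum term smooths the trajectory through the latent space.
        weights = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
        direction = (weights[:, None] * noise).mean(axis=0) / sigma
        velocity = beta * velocity + (1.0 - beta) * direction
        z = z + lr * velocity

    return decode(z[None, :])               # the synthesized exemplar
```
Because only the outputs of predict_proba are used, the search never touches the black-box model's weights or training data, which is what makes the approach model-agnostic.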
Related papers
- Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models [9.318262213262866]
We introduce a novel framework for learning semi-structured dynamics models for contact-rich systems.
We make accurate long-horizon predictions with substantially less data than prior methods.
We validate our approach on a real-world Unitree Go1 quadruped robot.
arXiv Detail & Related papers (2024-10-11T18:11:21Z)
- Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs).
We train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov Chain to draw samples from the model.
We adapt the T5 model for iteratively-refined parallel decoding, achieving 2-3x speedup in machine translation with minimal sacrifice in quality.
arXiv Detail & Related papers (2024-07-22T18:00:00Z)
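A minimal sketch of the iteratively-refined parallel decoding described in the GMLM summary above, assuming a hypothetical fill_masked(tokens) callable that returns per-position token distributions from a masked language model; the confidence-based re-masking schedule is illustrative.
```python
import numpy as np

MASK = -1  # hypothetical mask token id

def iterative_masked_decode(fill_masked, length, num_iters=8):
    """Generate a sequence by predicting all masked positions in parallel and
    re-masking the least confident ones, forming a simple Markov chain over
    partially masked sequences."""
    tokens = np.full(length, MASK, dtype=np.int64)

    for it in range(num_iters):
        probs = fill_masked(tokens)          # (length, vocab) conditional distributions
        pred = probs.argmax(axis=-1)
        conf = probs.max(axis=-1)

        tokens = pred.copy()                 # commit all parallel predictions
        # Re-mask a shrinking fraction of low-confidence positions for the next pass.
        n_remask = int(length * (1 - (it + 1) / num_iters))
        if n_remask > 0:
            tokens[np.argsort(conf)[:n_remask]] = MASK
    return tokens
```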
- Knowledge Fusion By Evolving Weights of Language Models [5.354527640064584]
This paper examines the approach of integrating multiple models into a unified model.
We propose a knowledge fusion method named Evolver, inspired by evolutionary algorithms.
arXiv Detail & Related papers (2024-06-18T02:12:34Z)
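A hedged sketch of evolutionary weight fusion in the spirit of the Evolver summary above, assuming models that share an architecture (flattened into equally-shaped weight vectors) and a user-supplied evaluate(weights) fitness function; the convex-combination-plus-mutation scheme is an illustrative stand-in, not the paper's exact algorithm.
```python
import numpy as np

def evolve_fused_weights(model_weights, evaluate, generations=20,
                         pop_size=16, sigma=0.01, seed=0):
    """Fuse several models (flattened weight vectors of equal shape) by
    evolving interpolation candidates and keeping the fittest ones."""
    rng = np.random.default_rng(seed)
    stack = np.stack(model_weights)                      # (n_models, n_params)

    def random_candidate():
        coef = rng.dirichlet(np.ones(len(stack)))        # convex combination of parents
        return coef @ stack + sigma * rng.standard_normal(stack.shape[1])

    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        scores = [evaluate(w) for w in population]       # e.g. dev-set accuracy
        elite = [population[i] for i in np.argsort(scores)[-pop_size // 2:]]
        # Refill the population by mutating the elites.
        children = [e + sigma * rng.standard_normal(e.shape) for e in elite]
        population = elite + children
    return max(population, key=evaluate)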
- Machine Unlearning for Image-to-Image Generative Models [18.952634119351465]
This paper provides a unifying framework for machine unlearning for image-to-image generative models.
We propose a computationally-efficient algorithm, underpinned by rigorous theoretical analysis, that demonstrates negligible performance degradation on the retain samples.
Empirical studies on two large-scale datasets, ImageNet-1K and Places-365, further show that our algorithm does not rely on the availability of the retain samples.
arXiv Detail & Related papers (2024-02-01T05:35:25Z)
- Sampling - Variational Auto Encoder - Ensemble: In the Quest of Explainable Artificial Intelligence [0.0]
This paper contributes to the discourse on XAI by presenting an empirical evaluation based on a novel framework.
It is a hybrid architecture in which a VAE is combined with ensemble stacking and SHapley Additive exPlanations (SHAP) for imbalanced classification.
The findings reveal that combining ensemble stacking, VAE, and SHAP can not only lead to better model performance but also provide an easily explainable framework.
arXiv Detail & Related papers (2023-09-25T02:46:19Z)
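A minimal sketch of the stacking-plus-SHAP part of the hybrid pipeline summarized above, using scikit-learn and the shap library; the VAE-based rebalancing step is only indicated by a comment, and the chosen base learners and sample sizes are illustrative.
```python
import shap
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def build_and_explain(X_train, y_train, X_test):
    # (Assumed) X_train/y_train have already been rebalanced, e.g. by drawing
    # synthetic minority examples from a VAE trained on the minority class.
    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                    ("svm", SVC(probability=True))],
        final_estimator=LogisticRegression(),
    )
    stack.fit(X_train, y_train)

    # Model-agnostic SHAP values for the stacked predictor's positive-class output.
    explainer = shap.KernelExplainer(lambda x: stack.predict_proba(x)[:, 1],
                                     shap.sample(X_train, 50))
    shap_values = explainer.shap_values(X_test[:10])
    return stack, shap_values
```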
- TSGM: A Flexible Framework for Generative Modeling of Synthetic Time Series [61.436361263605114]
Time series data are often scarce or highly sensitive, which precludes the sharing of data between researchers and industrial organizations.
We introduce Time Series Generative Modeling (TSGM), an open-source framework for the generative modeling of synthetic time series.
arXiv Detail & Related papers (2023-05-19T10:11:21Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
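A hedged sketch of generalized iterative imputation with per-column model selection, in the spirit of the HyperImpute summary above; the candidate learners and the simple cross-validation selection rule are illustrative and do not reflect the library's actual interface.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

CANDIDATES = [Ridge(), RandomForestRegressor(n_estimators=50)]

def iterative_impute(X, n_rounds=3):
    """Iteratively impute missing entries column by column, picking the
    best-scoring candidate model for each column at every round."""
    X = X.copy()
    missing = np.isnan(X)
    # Start from column-mean imputation.
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[missing[:, j], j] = col_means[j]

    for _ in range(n_rounds):
        for j in range(X.shape[1]):
            if not missing[:, j].any():
                continue
            obs = ~missing[:, j]
            features = np.delete(X, j, axis=1)
            # Automatic model selection: keep the candidate with the best CV score.
            best = max(CANDIDATES,
                       key=lambda m: cross_val_score(m, features[obs], X[obs, j], cv=3).mean())
            best.fit(features[obs], X[obs, j])
            X[missing[:, j], j] = best.predict(features[missing[:, j]])
    return X
```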
- Re-parameterizing Your Optimizers rather than Architectures [119.08740698936633]
We propose a novel paradigm of incorporating model-specific prior knowledge into optimizers and using them to train generic (simple) models.
As an implementation, we propose a novel methodology to add prior knowledge by modifying the gradients according to a set of model-specific hyper-parameters.
We focus on a VGG-style plain model and showcase that such a simple model, trained with a re-parameterized optimizer and referred to as RepOpt-VGG, performs on par with recent well-designed models.
arXiv Detail & Related papers (2022-05-30T16:55:59Z)
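A minimal sketch of the gradient-modification idea summarized above, assuming PyTorch; the per-parameter scaling factors stand in for the paper's model-specific hyper-parameters and are purely illustrative.
```python
import torch

class GradReparamSGD(torch.optim.SGD):
    """SGD that multiplies each parameter's gradient by a model-specific scale
    before the update, injecting prior knowledge into the optimizer instead of
    the architecture (the scales here are illustrative placeholders)."""

    def __init__(self, params, grad_scales, lr=0.1, momentum=0.9):
        params = list(params)
        super().__init__(params, lr=lr, momentum=momentum)
        self.grad_scales = grad_scales   # maps parameter -> scalar or tensor multiplier

    def step(self, closure=None):
        with torch.no_grad():
            for group in self.param_groups:
                for p in group["params"]:
                    if p.grad is not None and p in self.grad_scales:
                        p.grad.mul_(self.grad_scales[p])   # gradient re-parameterization
        return super().step(closure)
```
For instance, grad_scales could encode the equivalent scaling of a multi-branch block that has been folded into a single plain layer, so the simple model trains as if the extra branches were present.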
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images to the latent space of a high quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
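InvGAN itself learns an inverter network; as a simpler stand-in, the hedged sketch below embeds an image into a fixed generator's latent space by direct optimization, assuming PyTorch and a pretrained generator(z) whose output matches the target's shape.
```python
import torch

def invert_image(generator, target, latent_dim=128, steps=500, lr=0.05):
    """Embed a real image into a fixed generator's latent space by optimizing
    a latent code to reconstruct the target (a simple stand-in for InvGAN's
    learned inverter)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(z)                                  # decoded image
        loss = torch.nn.functional.mse_loss(recon, target)    # reconstruction error
        loss.backward()
        opt.step()
    return z.detach()   # reusable latent code for inpainting, merging, augmentation
```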
- Design of Dynamic Experiments for Black-Box Model Discrimination [72.2414939419588]
Consider a dynamic model discrimination setting where we wish to choose: (i) the best mechanistic, time-varying model and (ii) the best model parameter estimates.
For rival mechanistic models where we have access to gradient information, we extend existing methods to incorporate a wider range of problem uncertainty.
We replace these black-box models with Gaussian process surrogate models and thereby extend the model discrimination setting to additionally incorporate rival black-box models.
arXiv Detail & Related papers (2021-02-07T11:34:39Z)
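A hedged sketch of replacing rival black-box models with Gaussian process surrogates, as in the summary above, using scikit-learn; the disagreement-over-uncertainty selection criterion is a simplified illustration rather than the paper's design criterion.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def next_discriminating_input(black_box_a, black_box_b, X_probe, candidates):
    """Fit GP surrogates to two rival black-box models and pick the candidate
    input where their predictions disagree most relative to surrogate uncertainty."""
    kernel = RBF() + WhiteKernel()
    gp_a = GaussianProcessRegressor(kernel=kernel).fit(X_probe, black_box_a(X_probe))
    gp_b = GaussianProcessRegressor(kernel=kernel).fit(X_probe, black_box_b(X_probe))

    mu_a, sd_a = gp_a.predict(candidates, return_std=True)
    mu_b, sd_b = gp_b.predict(candidates, return_std=True)
    score = np.abs(mu_a - mu_b) / np.sqrt(sd_a**2 + sd_b**2 + 1e-12)
    return candidates[np.argmax(score)]      # most discriminating experiment
```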