A model-agnostic approach for generating Saliency Maps to explain
inferred decisions of Deep Learning Models
- URL: http://arxiv.org/abs/2209.08906v1
- Date: Mon, 19 Sep 2022 10:28:37 GMT
- Title: A model-agnostic approach for generating Saliency Maps to explain
inferred decisions of Deep Learning Models
- Authors: Savvas Karatsiolis, Andreas Kamilaris
- Abstract summary: We propose a model-agnostic method for generating saliency maps that has access only to the output of the model.
We use Differential Evolution to identify which image pixels are the most influential in a model's decision-making process.
- Score: 2.741266294612776
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The widespread use of black-box AI models has raised the need for algorithms
and methods that explain the decisions made by these models. In recent years,
the AI research community has become increasingly interested in model
explainability, as black-box models take over ever more complicated and
challenging tasks. Explainability becomes critical considering the dominance of deep
learning techniques for a wide range of applications, including but not limited
to computer vision. In the direction of understanding the inference process of
deep learning models, many methods that provide human comprehensible evidence
for the decisions of AI models have been developed, with the vast majority
relying on access to the internal architecture and
parameters of these models (e.g., the weights of neural networks). We propose a
model-agnostic method for generating saliency maps that has access only to the
output of the model and does not require additional information such as
gradients. We use Differential Evolution (DE) to identify which image pixels
are the most influential in a model's decision-making process and produce class
activation maps (CAMs) whose quality is comparable to the quality of CAMs
created with model-specific algorithms. DE-CAM achieves good performance
without requiring access to the internal details of the model's architecture,
at the cost of increased computational complexity.
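The black-box search described above can be sketched as follows. This is a minimal illustration only: the coarse mask grid, the toy scoring function, and the sparsity-weighted fitness are all assumptions for demonstration, not the paper's exact DE-CAM formulation.

```python
import numpy as np
from scipy.optimize import differential_evolution

GRID = 7      # coarse saliency grid, upsampled to image resolution (assumption)
SIZE = 28     # toy image side length
LAMBDA = 0.3  # assumed trade-off weight between score preservation and sparsity

def model_score(image):
    # Stand-in for the black-box model's output score; the method needs
    # nothing else. This toy "model" responds to brightness in the center.
    return float(image[SIZE // 4:3 * SIZE // 4, SIZE // 4:3 * SIZE // 4].mean())

def apply_mask(image, params):
    # Upsample the coarse GRID x GRID mask to image resolution and apply it.
    mask = np.kron(params.reshape(GRID, GRID),
                   np.ones((SIZE // GRID, SIZE // GRID)))
    return image * mask

def fitness(params, image):
    # DE minimizes this: keep the model's score high (first term) while
    # switching off as many pixels as possible (second term), so the
    # surviving mask weights mark the influential regions.
    return -model_score(apply_mask(image, params)) + LAMBDA * params.mean()

image = np.ones((SIZE, SIZE))            # toy input image
bounds = [(0.0, 1.0)] * (GRID * GRID)    # one mask weight per grid cell
result = differential_evolution(fitness, bounds, args=(image,),
                                maxiter=20, popsize=8, seed=0)
saliency = result.x.reshape(GRID, GRID)  # higher weight = more influential
```

In practice `model_score` would wrap the deployed model's probability for the target class, and the optimized mask, upsampled to image resolution, serves as the class activation map.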
Related papers
- Learning-based Models for Vulnerability Detection: An Extensive Study [3.1317409221921144]
We extensively and comprehensively investigate two types of state-of-the-art learning-based approaches.
We experimentally demonstrate the superiority of sequence-based models and the limited abilities of graph-based models.
arXiv Detail & Related papers (2024-08-14T13:01:30Z)
- Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute cost and performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z)
- Foundation models in brief: A historical, socio-technical focus [2.5991265608180396]
Foundation models can be disruptive for future AI development by scaling up deep learning.
Models achieve state-of-the-art performance on a variety of tasks in domains such as natural language processing and computer vision.
arXiv Detail & Related papers (2022-12-17T22:11:33Z)
- Interpreting Black-box Machine Learning Models for High Dimensional Datasets [40.09157165704895]
We train a black-box model on a high-dimensional dataset to learn the embeddings on which the classification is performed.
We then approximate the behavior of the black-box model by means of an interpretable surrogate model on the top-k feature space.
Our approach outperforms state-of-the-art methods like TabNet and XGBoost when tested on different datasets.
arXiv Detail & Related papers (2022-08-29T07:36:17Z)
- Quality Diversity Evolutionary Learning of Decision Trees [4.447467536572625]
We show that MAP-Elites can diversify hybrid models over a feature space that captures both the model complexity and its behavioral variability.
We apply our method on two well-known control problems from the OpenAI Gym library, on which we discuss the "illumination" patterns projected by MAP-Elites.
arXiv Detail & Related papers (2022-08-17T13:57:32Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- Towards Interpretable Deep Reinforcement Learning Models via Inverse Reinforcement Learning [27.841725567976315]
We propose a novel framework utilizing Adversarial Inverse Reinforcement Learning.
This framework provides global explanations for decisions made by a Reinforcement Learning model.
We capture intuitive tendencies that the model follows by summarizing the model's decision-making process.
arXiv Detail & Related papers (2022-03-30T17:01:59Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview over techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and to know which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters compared to the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Model Complexity of Deep Learning: A Survey [79.20117679251766]
We conduct a systematic overview of the latest studies on model complexity in deep learning.
We review the existing studies on those two categories along four important factors, including model framework, model size, optimization process and data complexity.
arXiv Detail & Related papers (2021-03-08T22:39:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.