Quality Diversity Evolutionary Learning of Decision Trees
- URL: http://arxiv.org/abs/2208.12758v1
- Date: Wed, 17 Aug 2022 13:57:32 GMT
- Title: Quality Diversity Evolutionary Learning of Decision Trees
- Authors: Andrea Ferigo, Leonardo Lucio Custode and Giovanni Iacca
- Abstract summary: We show that MAP-Elites can diversify hybrid models over a feature space that captures both the model complexity and its behavioral variability.
We apply our method on two well-known control problems from the OpenAI Gym library, on which we discuss the "illumination" patterns projected by MAP-Elites.
- Score: 4.447467536572625
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Addressing the need for explainable Machine Learning has emerged as one of
the most important research directions in modern Artificial Intelligence (AI).
While the current dominant paradigm in the field is based on black-box models,
typically in the form of (deep) neural networks, these models lack direct
interpretability for human users, i.e., their outcomes (and, even more so,
their inner workings) are opaque and hard to understand. This hinders the
adoption of AI in safety-critical applications, where high stakes are
involved. In such applications, models that are explainable by design, such as
decision trees, may be more suitable, as they provide interpretability. Recent works
have proposed the hybridization of decision trees and Reinforcement Learning,
to combine the advantages of the two approaches. So far, however, these works
have focused on the optimization of those hybrid models. Here, we apply
MAP-Elites for diversifying hybrid models over a feature space that captures
both the model complexity and its behavioral variability. We apply our method
on two well-known control problems from the OpenAI Gym library, on which we
discuss the "illumination" patterns projected by MAP-Elites, comparing its
results against existing similar approaches.
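The MAP-Elites loop described above can be illustrated with a minimal sketch. The archive is a grid over two behavior descriptors standing in for model complexity and behavioral variability; the genome encoding, fitness, and descriptor functions below are toy assumptions made for the example, not the paper's actual setup.

```python
# Minimal MAP-Elites sketch (illustrative assumptions, not the paper's code).
import random

random.seed(0)

GRID = (5, 5)          # bins per descriptor dimension
ITERATIONS = 500

def evaluate(genome):
    """Toy stand-in for the episode return of a decision-tree policy."""
    return -sum((g - 0.5) ** 2 for g in genome)

def descriptors(genome):
    """Map a genome to (complexity, variability) bins -- illustrative only."""
    return (int(genome[0] * GRID[0]) % GRID[0],
            int(genome[1] * GRID[1]) % GRID[1])

archive = {}           # (bin_x, bin_y) -> (fitness, genome)

for _ in range(ITERATIONS):
    if archive:
        # Select a random elite and mutate it.
        _, parent = random.choice(list(archive.values()))
        child = [min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in parent]
    else:
        child = [random.random() for _ in range(4)]
    fit = evaluate(child)
    cell = descriptors(child)
    # Keep the child only if its cell is empty or it beats the incumbent.
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, child)

print(len(archive))    # number of "illuminated" cells
```

The final count of filled cells is what the paper's "illumination" plots visualize: coverage of the feature space rather than a single best solution.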
Related papers
- SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction [15.832975722301011]
We propose a novel method to enhance explainability with minimal accuracy loss.
We have developed novel methods for estimating nodes by leveraging AI techniques.
Our findings highlight the critical role that statistical methodologies can play in advancing explainable AI.
arXiv Detail & Related papers (2024-06-16T14:43:01Z) - Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning [51.52387511006586]
We propose Hierarchical Opponent modeling and Planning (HOP), a novel multi-agent decision-making algorithm.
HOP is hierarchically composed of two modules: an opponent modeling module that infers others' goals and learns corresponding goal-conditioned policies, and a planning module.
HOP exhibits superior few-shot adaptation capabilities when interacting with various unseen agents, and excels in self-play scenarios.
arXiv Detail & Related papers (2024-06-12T08:48:06Z) - Unified Explanations in Machine Learning Models: A Perturbation Approach [0.0]
Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches.
We propose a systematic, perturbation-based analysis of a popular, model-agnostic XAI method, SHapley Additive exPlanations (SHAP).
We devise algorithms that generate relative feature importances under dynamic inference across a suite of popular machine learning and deep learning methods, along with metrics that quantify how well explanations generated in the static case hold up.
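A minimal sketch of perturbation-based feature importance, the general idea behind the analysis described above: perturb one feature at a time and record the average change in a model's output. The toy model, input, and perturbation scale are assumptions made for the example, not the paper's algorithm.

```python
# Hedged sketch: one-feature-at-a-time perturbation importance.
import random

random.seed(1)

def model(x):
    # Toy linear model: feature 0 dominates, feature 2 is ignored.
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def perturbation_importance(model, x, trials=200, scale=1.0):
    """Average absolute output change when each feature is perturbed alone."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        deltas = []
        for _ in range(trials):
            xp = list(x)
            xp[i] += random.uniform(-scale, scale)
            deltas.append(abs(model(xp) - base))
        scores.append(sum(deltas) / trials)
    return scores

scores = perturbation_importance(model, [0.5, 0.5, 0.5])
print(scores)  # feature 0 ranks highest, feature 2 lowest
```

For the toy model the ranking recovers the coefficients exactly; for real models, comparing such scores against SHAP values is the kind of consistency check the paper's metrics formalize.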
arXiv Detail & Related papers (2024-05-30T16:04:35Z) - Solving the enigma: Deriving optimal explanations of deep networks [3.9584068556746246]
We propose a novel framework designed to enhance the explainability of deep networks.
Our framework integrates explanations from established XAI methods and combines them to construct an optimal explanation.
Our results suggest that optimal explanations based on specific criteria are derivable.
arXiv Detail & Related papers (2024-05-16T11:49:08Z) - A model-agnostic approach for generating Saliency Maps to explain
inferred decisions of Deep Learning Models [2.741266294612776]
We propose a model-agnostic method for generating saliency maps that has access only to the output of the model.
We use Differential Evolution to identify which image pixels are the most influential in a model's decision-making process.
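A minimal sketch of the idea, under toy assumptions (the image, the black-box score, and all DE settings are invented for illustration, and are not the paper's method): Differential Evolution searches pixel coordinates for the occlusion that most changes the model's output, using only the model's output.

```python
# Illustrative sketch: DE searches for the most influential pixel.
import random

random.seed(2)

W = H = 8
# Toy grayscale "image" with a brightness gradient toward (7, 7).
image = [[float(x + y) for x in range(W)] for y in range(H)]

def black_box(img):
    # Stand-in classifier score: only this output is observable.
    return sum(sum(row) for row in img)

def occlusion_effect(px, py):
    """Change in the score when pixel (px, py) is zeroed out."""
    occluded = [row[:] for row in image]
    occluded[py][px] = 0.0
    return abs(black_box(image) - black_box(occluded))

def fitness(ind):
    return occlusion_effect(int(round(ind[0])), int(round(ind[1])))

# Simplified DE/rand/1 over continuous (x, y) pixel coordinates.
POP, GENS, F, CR = 12, 40, 0.8, 0.9
pop = [[random.uniform(0, W - 1), random.uniform(0, H - 1)] for _ in range(POP)]

for _ in range(GENS):
    for i in range(POP):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = [pop[i][d] if random.random() > CR
                 else min(max(a[d] + F * (b[d] - c[d]), 0.0), W - 1.0)
                 for d in range(2)]
        if fitness(trial) >= fitness(pop[i]):
            pop[i] = trial

best = max(pop, key=fitness)
print(int(round(best[0])), int(round(best[1])))  # most influential pixel
```

Scoring every pixel this way yields a saliency map without any access to gradients or internals, which is what makes the approach model-agnostic.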
arXiv Detail & Related papers (2022-09-19T10:28:37Z) - Beyond Explaining: Opportunities and Challenges of XAI-Based Model
Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview over techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled
Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z) - Deep Learning Reproducibility and Explainable AI (XAI) [9.13755431537592]
The nondeterminism of Deep Learning (DL) training algorithms and its influence on the explainability of neural network (NN) models are investigated.
To discuss the issue, two convolutional neural networks (CNN) have been trained and their results compared.
arXiv Detail & Related papers (2022-02-23T12:06:20Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - UnitedQA: A Hybrid Approach for Open Domain Question Answering [70.54286377610953]
We apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models.
Our approach outperforms previous state-of-the-art models by 3.3 and 2.7 points in exact match on NaturalQuestions and TriviaQA respectively.
arXiv Detail & Related papers (2021-01-01T06:36:16Z) - Efficient Model-Based Reinforcement Learning through Optimistic Policy
Search and Planning [93.1435980666675]
We show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms.
Our experiments demonstrate that optimistic exploration significantly speeds up learning when there are penalties on actions.
arXiv Detail & Related papers (2020-06-15T18:37:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.