Interpretable pipelines with evolutionarily optimized modules for RL tasks with visual inputs
- URL: http://arxiv.org/abs/2202.04943v1
- Date: Thu, 10 Feb 2022 10:33:44 GMT
- Title: Interpretable pipelines with evolutionarily optimized modules for RL tasks with visual inputs
- Authors: Leonardo Lucio Custode and Giovanni Iacca
- Abstract summary: We propose end-to-end pipelines composed of multiple interpretable models co-optimized by means of evolutionary algorithms.
We test our approach in reinforcement learning environments from the Atari benchmark.
- Score: 5.254093731341154
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The importance of explainability in AI has become a pressing concern, for which several explainable AI (XAI) approaches have recently been proposed. However, most of the available XAI techniques are post-hoc methods, which may be only partially reliable, as they do not reflect exactly the state of the original models. Thus, a more direct way of achieving XAI is through interpretable (also called glass-box) models. These models have been shown to obtain comparable (and, in some cases, better) performance with respect to black-box models in various tasks such as classification and reinforcement learning. However, they struggle when working with raw data, especially when the input dimensionality increases and the raw inputs alone do not give valuable insight into the decision-making process. Here, we propose to use end-to-end pipelines composed of multiple interpretable models co-optimized by means of evolutionary algorithms, which allows us to decompose the decision-making process into two parts: computing high-level features from raw data, and reasoning on the extracted high-level features. We test our approach in reinforcement learning environments from the Atari benchmark, where we obtain comparable results (with respect to black-box approaches) in settings without stochastic frame-skipping, while performance degrades in frame-skipping settings.
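To make the two-stage decomposition concrete, here is a minimal, self-contained sketch of the general recipe: a population of (feature extractor, policy) pairs is evolved jointly, with the extractor mapping raw pixels to a handful of interpretable features and the policy acting on those features. Everything below (the toy environment, the pixel-probe extractor, the linear policy, and names such as `evolve`) is a hypothetical illustration of the idea, not the authors' implementation, which targets Atari environments and its own interpretable models.

```python
import random

# Toy stand-in for a visual RL task (hypothetical, NOT Atari): the agent
# sees a 4x4 "image" and must move its cursor under a target column.
class ToyEnv:
    def reset(self):
        self.target = random.randrange(4)
        self.pos = random.randrange(4)
        self.steps = 0
        return self._obs()

    def _obs(self):
        img = [[0.0] * 4 for _ in range(4)]
        img[0][self.target] = 1.0  # target marker on the top row
        img[3][self.pos] = 1.0     # agent marker on the bottom row
        return img

    def step(self, action):  # 0: left, 1: stay, 2: right
        self.pos = max(0, min(3, self.pos + action - 1))
        self.steps += 1
        return self._obs(), float(self.pos == self.target), self.steps >= 10

# Stage 1 -- interpretable feature extractor: each gene (row, col) is a
# named pixel probe; the feature is 1 when that pixel is lit.
def extract_features(genes, img):
    return [1.0 if img[r][c] > 0.5 else 0.0 for r, c in genes]

# Stage 2 -- interpretable policy: a readable linear score per action.
def act(weights, feats):
    scores = [sum(w * f for w, f in zip(row, feats)) for row in weights]
    return max(range(len(scores)), key=scores.__getitem__)

def fitness(genes, weights, env, episodes=20):
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            obs, reward, done = env.step(act(weights, extract_features(genes, obs)))
            total += reward
    return total / episodes

# Co-optimization: one genome carries BOTH modules, mutated jointly, so
# the extractor evolves features the policy can actually exploit.
def mutate(genes, weights):
    genes = [(random.randrange(4), random.randrange(4)) if random.random() < 0.2 else g
             for g in genes]
    weights = [[w + random.gauss(0, 0.3) if random.random() < 0.2 else w for w in row]
               for row in weights]
    return genes, weights

def evolve(n_feats=4, pop_size=20, generations=40):
    env = ToyEnv()
    pop = [([(random.randrange(4), random.randrange(4)) for _ in range(n_feats)],
            [[random.gauss(0, 1.0) for _ in range(n_feats)] for _ in range(3)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: -fitness(ind[0], ind[1], env))
        elite = pop[:pop_size // 4]
        pop = elite + [mutate(*random.choice(elite)) for _ in range(pop_size - len(elite))]
    return pop[0]

best_genes, best_weights = evolve()
print("pixel probes:", best_genes)  # which pixels the evolved extractor reads
```

The appeal of this setup is that both evolved modules stay human-readable: the probe list states which pixels matter, and the policy weights state how features map to actions.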
Related papers
- From Abstract to Actionable: Pairwise Shapley Values for Explainable AI [0.8192907805418583]
We propose Pairwise Shapley Values, a novel framework that grounds feature attributions in explicit, human-relatable comparisons.
Our method introduces pairwise reference selection combined with single-value imputation to deliver intuitive, model-agnostic explanations (see the sketch after this entry).
We demonstrate that Pairwise Shapley Values enhance interpretability across diverse regression and classification scenarios.
arXiv Detail & Related papers (2025-02-18T04:20:18Z)
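From this summary, the core idea appears to be attributing the difference between an instance and an explicitly chosen reference example, imputing "absent" features with the reference's values. Below is a minimal Monte Carlo sketch of that reading (hypothetical, not the paper's code; `model`, `x`, and `reference` are placeholders):

```python
import random

def pairwise_shapley(model, x, reference, n_samples=200):
    """Estimate per-feature attributions of model(x) relative to one
    reference instance: 'absent' features take the reference's values
    (single-value imputation), as the summary suggests."""
    n = len(x)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = random.sample(range(n), n)  # random feature permutation
        z = list(reference)                 # start from the reference
        prev = model(z)
        for i in order:
            z[i] = x[i]                     # flip feature i to x's value
            curr = model(z)
            phi[i] += curr - prev           # marginal contribution
            prev = curr
    return [p / n_samples for p in phi]

# Usage: attributions sum to model(x) - model(reference).
model = lambda v: 3.0 * v[0] + v[1] * v[2]
print(pairwise_shapley(model, [1.0, 2.0, 4.0], [0.0, 0.0, 0.0]))
```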
- BRiTE: Bootstrapping Reinforced Thinking Process to Enhance Language Model Reasoning [78.63421517563056]
Large Language Models (LLMs) have demonstrated remarkable capabilities in complex reasoning tasks.
We present a unified probabilistic framework that formalizes LLM reasoning through a novel graphical model.
We introduce the Bootstrapping Reinforced Thinking Process (BRiTE) algorithm, which works in two steps.
arXiv Detail & Related papers (2025-01-31T02:39:07Z)
- F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI [15.314388210699443]
Fine-tuned Fidelity (F-Fidelity) is a robust evaluation framework for XAI.
We show that F-Fidelity significantly improves upon prior evaluation metrics in recovering the ground-truth ranking of explainers.
We also show that, given a faithful explainer, the F-Fidelity metric can be used to compute the sparsity of influential input components (see the sketch after this entry).
arXiv Detail & Related papers (2024-10-03T20:23:06Z)
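The summary does not spell out the metric, but the removal-based fidelity notion that F-Fidelity appears to build on and harden is easy to sketch: mask the features an explainer ranks as most influential and measure the drop in the model's output. The helper below is a hypothetical illustration of that baseline notion only, not of F-Fidelity itself:

```python
def fidelity(model, x, ranking, baseline, ks=(1, 2, 4)):
    """Removal-based fidelity: mask the top-k features named by an
    explainer's ranking and record how much the output drops. A
    faithful explainer should yield large drops already at small k.
    `model` and `baseline` are placeholders."""
    full = model(x)
    drops = []
    for k in ks:
        z = list(x)
        for i in ranking[:k]:   # remove the k most influential features
            z[i] = baseline[i]  # impute with a baseline value
        drops.append(full - model(z))
    return drops

model = lambda v: 3.0 * v[0] + v[1]
print(fidelity(model, [1.0, 2.0, 0.5], ranking=[0, 1, 2],
               baseline=[0.0] * 3, ks=(1, 2)))
```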
- Explainable AI for Enhancing Efficiency of DL-based Channel Estimation [1.0136215038345013]
Support for artificial intelligence-based decision-making is a key element in future 6G networks.
In such applications, using AI as a black-box model is risky and challenging.
We propose a novel XAI-CHEST framework oriented toward channel estimation in wireless communications.
arXiv Detail & Related papers (2024-07-09T16:24:21Z)
- Spatio-Temporal Side Tuning Pre-trained Foundation Models for Video-based Pedestrian Attribute Recognition [58.79807861739438]
Existing pedestrian attribute recognition (PAR) algorithms are mainly developed based on static images.
We propose to understand human attributes using video frames, which fully exploit temporal information.
arXiv Detail & Related papers (2024-04-27T14:43:32Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- Optimizing Explanations by Network Canonization and Hyperparameter Search [74.76732413972005]
Rule-based and modified backpropagation XAI approaches often face challenges when being applied to modern model architectures.
Model canonization is the process of re-structuring a model to disregard problematic components without changing the underlying function (illustrated after this entry).
In this work, we propose canonizations for currently relevant model blocks applicable to popular deep neural network architectures.
arXiv Detail & Related papers (2022-11-30T17:17:55Z)
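A classic example of canonization (not necessarily one of the new canonizations this paper proposes) is folding a BatchNorm layer into the preceding affine layer: the network's function is unchanged, but rule-based attribution methods now see a single clean linear map. A minimal NumPy sketch, shown for a dense layer as the analogue of the usual conv+BN fusion:

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fuse y = gamma * ((Wx + b) - mean) / sqrt(var + eps) + beta
    into a single affine layer y = W'x + b'. The function computed by
    the network is identical; only its structure changes."""
    scale = gamma / np.sqrt(var + eps)  # per-output-channel factor
    W_fused = W * scale[:, None]        # rescale each output row
    b_fused = (b - mean) * scale + beta
    return W_fused, b_fused

# Sanity check: fused layer matches linear-then-batchnorm exactly.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mean, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)
x = rng.normal(size=3)
bn_out = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
Wf, bf = fold_batchnorm(W, b, gamma, beta, mean, var)
assert np.allclose(bn_out, Wf @ x + bf)
```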
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Evaluating Explainable Artificial Intelligence Methods for Multi-label Deep Learning Classification Tasks in Remote Sensing [0.0]
We develop deep learning models with state-of-the-art performance on benchmark datasets.
Ten XAI methods were employed to understand and interpret the models' predictions.
Occlusion, Grad-CAM and LIME were found to be the most interpretable and reliable XAI methods (see the occlusion sketch below).
arXiv Detail & Related papers (2021-04-03T11:13:14Z)
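Occlusion, one of the methods this study found most reliable, is also the simplest to sketch: slide a patch over the input, replace it with a fill value, and record how much the class score drops. A minimal, hypothetical version (the `score` model below is a placeholder):

```python
import numpy as np

def occlusion_map(model, image, patch=4, fill=0.0):
    """Occlusion attribution: mask one patch at a time and measure the
    drop in the model's scalar class score. `model` maps an HxW array
    to a float; large heatmap values mark regions the prediction
    depends on."""
    h, w = image.shape
    base = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# Usage with a toy "model" that only looks at the top-left region.
img = np.random.default_rng(1).uniform(size=(8, 8))
score = lambda im: float(im[:4, :4].sum())
print(occlusion_map(score, img, patch=4))
```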