Toward Explainable AI for Regression Models
- URL: http://arxiv.org/abs/2112.11407v1
- Date: Tue, 21 Dec 2021 18:09:42 GMT
- Title: Toward Explainable AI for Regression Models
- Authors: Simon Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek,
Klaus-Robert Müller, and Gregoire Montavon
- Abstract summary: Explainable AI (XAI) techniques have reached significant popularity for classifiers, but so far little attention has been devoted to XAI for regression models (XAIR).
In this review, we clarify the fundamental conceptual differences of XAI for regression and classification tasks, establish novel theoretical insights and analysis for XAIR, and discuss the challenges remaining for the field.
- Score: 9.580887668756692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In addition to the impressive predictive power of machine learning (ML)
models, more recently, explanation methods have emerged that enable an
interpretation of complex non-linear learning models such as deep neural
networks. Gaining a better understanding is especially important, e.g., for
safety-critical ML applications or medical diagnostics. While such
Explainable AI (XAI) techniques have reached significant popularity for
classifiers, so far little attention has been devoted to XAI for regression
models (XAIR). In this review, we clarify the fundamental conceptual
differences of XAI for regression and classification tasks, establish novel
theoretical insights and analysis for XAIR, provide demonstrations of XAIR on
genuine practical regression problems, and finally discuss the challenges
remaining for the field.
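For intuition (a minimal sketch, not code from the paper), the example below uses a linear scikit-learn regressor and gradient x (input - reference) attributions: unlike in classification, where explanations are typically anchored to a decision boundary, the regression attributions here are expressed relative to a reference prediction and sum exactly to the gap between the prediction and that reference.

```python
# Minimal sketch (not code from the paper): explaining a regression prediction
# relative to a reference value. For a linear model, gradient x (input - reference)
# attributions sum exactly to the gap between prediction and reference prediction.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
model = Ridge().fit(X, y)

x = X[0]                 # instance to explain
x_ref = X.mean(axis=0)   # reference point, e.g. an "average" input

attribution = model.coef_ * (x - x_ref)   # gradient x (input - reference)

print("prediction:          ", model.predict(x[None])[0])
print("reference prediction:", model.predict(x_ref[None])[0])
print("attributions:        ", attribution, "sum:", attribution.sum())
```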
Related papers
- Introducing δ-XAI: a novel sensitivity-based method for local AI explanations [42.06878765569675]
High-performing AI/ML models often lack interpretability, hampering clinicians' trust in their predictions.
To address this, XAI techniques are being developed to describe AI/ML predictions in human-understandable terms.
Here, we introduce a novel delta-XAI method that provides local explanations of ML model predictions by extending the delta index.
arXiv Detail & Related papers (2024-07-25T19:07:49Z)
- Unified Explanations in Machine Learning Models: A Perturbation Approach [0.0]
Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches.
We propose a systematic, perturbation-based analysis of a popular, model-agnostic method in XAI, SHapley Additive exPlanations (SHAP).
We devise algorithms that generate relative feature importance under dynamic inference across a suite of popular machine learning and deep learning methods, along with metrics that quantify how well explanations generated in the static case hold up.
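A minimal sketch of one way such a perturbation check could look (an assumption for illustration, not the authors' exact algorithms or metrics), using the shap package and a scikit-learn model:

```python
# Hypothetical sketch (not the authors' exact protocol): measure how much SHAP
# attributions change when a single input feature is slightly perturbed.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

x = X[:1].copy()
x_pert = x.copy()
x_pert[0, 2] += 0.05 * X[:, 2].std()          # small perturbation of one feature

phi = explainer.shap_values(x)[0]             # attributions for the original input
phi_pert = explainer.shap_values(x_pert)[0]   # attributions for the perturbed input

# One simple stability score: cosine similarity of the two attribution vectors.
stability = np.dot(phi, phi_pert) / (np.linalg.norm(phi) * np.linalg.norm(phi_pert))
print("attribution stability under perturbation:", stability)
```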
arXiv Detail & Related papers (2024-05-30T16:04:35Z)
- XpertAI: uncovering model strategies for sub-manifolds [1.2874569408514918]
In regression, explanations need to be precisely formulated to address specific user queries.
We introduce XpertAI, a framework that disentangles the prediction strategy into multiple range-specific sub-strategies.
arXiv Detail & Related papers (2024-03-12T10:21:31Z)
- X Hacking: The Threat of Misguided AutoML [2.3011205420794574]
This paper introduces the concept of X-hacking, a form of p-hacking applied to XAI metrics such as SHAP values.
We show how an automated machine learning pipeline can be used to search for 'defensible' models that produce a desired explanation while maintaining superior performance to a common baseline.
arXiv Detail & Related papers (2024-01-16T17:21:33Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Optimizing Explanations by Network Canonization and Hyperparameter Search [74.76732413972005]
Rule-based and modified backpropagation XAI approaches often face challenges when applied to modern model architectures.
Model canonization is the process of re-structuring the model to disregard problematic components without changing the underlying function.
In this work, we propose canonizations for currently relevant model blocks applicable to popular deep neural network architectures.
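One standard canonization step, used here only as an illustrative example rather than the specific canonizations proposed in the paper, is folding a BatchNorm layer into the preceding affine layer; the resulting network computes the same function but exposes a single affine map to rule-based attribution:

```python
# Hypothetical sketch: fold a BatchNorm1d layer into the preceding Linear layer.
# The fused layer computes exactly the same function as bn(linear(x)) in eval mode,
# but presents a single affine map to rule-based attribution methods.
import torch
import torch.nn as nn

def fold_bn_into_linear(linear: nn.Linear, bn: nn.BatchNorm1d) -> nn.Linear:
    fused = nn.Linear(linear.in_features, linear.out_features)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    with torch.no_grad():
        fused.weight.copy_(linear.weight * scale[:, None])
        fused.bias.copy_((linear.bias - bn.running_mean) * scale + bn.bias)
    return fused

linear, bn = nn.Linear(4, 3), nn.BatchNorm1d(3)
with torch.no_grad():                          # populate non-trivial running statistics
    for _ in range(10):
        bn(linear(torch.randn(32, 4)))
bn.eval()

x = torch.randn(8, 4)
print(torch.allclose(bn(linear(x)), fold_bn_into_linear(linear, bn)(x), atol=1e-5))  # True
```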
arXiv Detail & Related papers (2022-11-30T17:17:55Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically, through experiments in both toy and realistic settings, how explanations can help improve properties such as model generalization ability or reasoning.
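As a toy illustration of the general idea (with scikit-learn's permutation importance standing in for the attribution methods surveyed in the paper), one can drop features that the explanation marks as irrelevant and retrain:

```python
# Toy illustration: drop features that an explanation marks as irrelevant and
# retrain (permutation importance stands in for the attribution methods
# surveyed in the paper).
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, random_state=0).importances_mean

keep = imp > 0                        # keep only features the explanation deems relevant
pruned = RandomForestRegressor(random_state=0).fit(X_tr[:, keep], y_tr)
print("full model R^2:  ", model.score(X_te, y_te))
print("pruned model R^2:", pruned.score(X_te[:, keep], y_te))
```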
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Tree-based local explanations of machine learning model predictions, AraucanaXAI [2.9660372210786563]
A tradeoff between performance and intelligibility often has to be faced, especially in high-stakes applications like medicine.
We propose a novel methodological approach for generating explanations of the predictions of a generic ML model.
arXiv Detail & Related papers (2021-10-15T17:39:19Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
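A hypothetical PyTorch sketch of the general recipe, with toy stand-in networks rather than the paper's disentangled generative model: several latent perturbations are optimised so that each pushes the prediction toward a target class while a diversity-enforcing loss keeps them mutually distinct.

```python
# Hypothetical sketch (toy stand-ins, not the paper's model): optimise several
# latent perturbations that push the prediction toward a target class while a
# diversity-enforcing loss keeps the perturbations mutually distinct.
import torch
import torch.nn as nn
import torch.nn.functional as F

def diverse_counterfactuals(decoder, classifier, z0, target, n=4, steps=200, lam=0.1):
    deltas = torch.zeros(n, z0.shape[-1], requires_grad=True)  # n candidate perturbations
    opt = torch.optim.Adam([deltas], lr=0.05)
    for _ in range(steps):
        logits = classifier(decoder(z0 + deltas))              # predictions for decoded inputs
        flip = F.cross_entropy(logits, target.expand(n))       # push toward the target class
        dists = torch.cdist(deltas, deltas) + torch.eye(n)     # pairwise distances (eye avoids /0)
        diversity = (1.0 / dists).triu(1).sum()                # penalise perturbations that collapse
        loss = flip + lam * diversity
        opt.zero_grad(); loss.backward(); opt.step()
    return z0 + deltas.detach()

# Toy stand-ins for a pretrained decoder and classifier (assumptions for the demo).
decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 10))
classifier = nn.Linear(10, 2)
z0, target = torch.randn(1, 8), torch.tensor([1])
print(diverse_counterfactuals(decoder, classifier, z0, target).shape)  # torch.Size([4, 8])
```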
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at achieving Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.