"Even if ..." -- Diverse Semifactual Explanations of Reject
- URL: http://arxiv.org/abs/2207.01898v1
- Date: Tue, 5 Jul 2022 08:53:08 GMT
- Title: "Even if ..." -- Diverse Semifactual Explanations of Reject
- Authors: André Artelt, Barbara Hammer
- Abstract summary: We propose a conceptual modeling of semifactual explanations for arbitrary reject options.
We empirically evaluate a specific implementation on a conformal prediction based reject option.
- Score: 8.132423340684568
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning based decision making systems applied in safety critical
areas require reliable high certainty predictions. For this purpose, the system
can be extended by a reject option which allows the system to reject inputs
where only a prediction with an unacceptably low certainty would be possible.
While being able to reject uncertain samples is important, it is also
important to be able to explain why a particular sample was rejected. With the
ongoing rise of eXplainable AI (XAI), a lot of explanation methodologies for
machine learning based systems have been developed -- explaining reject
options, however, is still a novel field where only very little prior work
exists.
In this work, we propose to explain rejects by semifactual explanations, an
instance of example-based explanation methods, which themselves have not been
widely considered in the XAI community yet. We propose a conceptual modeling of
semifactual explanations for arbitrary reject options and empirically evaluate
a specific implementation on a conformal prediction based reject option.
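To make the two ingredients concrete, the sketch below combines a simple conformal-prediction-based reject option on top of an off-the-shelf probabilistic classifier with a naive line search for a semifactual of a reject ("even if the input were changed this far, it would still be rejected"). This is a minimal illustrative sketch, not the authors' implementation: the nonconformity score, the threshold `eps`, the fixed search direction, and all function names are placeholder assumptions.

```python
# Illustrative sketch only: the reject rule, threshold, and semifactual search
# below are assumptions for demonstration, not the paper's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def conformal_p_values(clf, X_calib, y_calib, x):
    """Conformal p-value per candidate label, using 1 - P(true class) as nonconformity."""
    calib_probs = clf.predict_proba(X_calib)
    calib_scores = 1.0 - calib_probs[np.arange(len(y_calib)), y_calib]
    probs = clf.predict_proba(x.reshape(1, -1))[0]
    return np.array([
        (np.sum(calib_scores >= 1.0 - probs[c]) + 1) / (len(calib_scores) + 1)
        for c in range(len(probs))
    ])

def is_rejected(p_values, eps=0.1):
    """Reject unless exactly one label is plausible at level eps,
    i.e. the conformal prediction set is empty or ambiguous."""
    return np.sum(p_values > eps) != 1

def semifactual_of_reject(clf, X_calib, y_calib, x, direction, step=0.05, max_steps=50):
    """Walk from x along `direction` and return the farthest point that is
    still rejected: an 'even if ...' example for the reject."""
    x_sf = x.copy()
    for _ in range(max_steps):
        candidate = x_sf + step * direction
        if is_rejected(conformal_p_values(clf, X_calib, y_calib, candidate)):
            x_sf = candidate
        else:
            break
    return x_sf

if __name__ == "__main__":
    # Toy data: two well-separated classes, so only boundary points get rejected.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    clf = LogisticRegression().fit(X[:200], y[:200])
    X_calib, y_calib = X[200:], y[200:]

    x = np.array([0.02, -0.01])  # near the decision boundary -> likely rejected
    if is_rejected(conformal_p_values(clf, X_calib, y_calib, x)):
        x_sf = semifactual_of_reject(clf, X_calib, y_calib, x,
                                     direction=np.array([1.0, 0.0]))
        print("Even if the input were", x_sf, "it would still be rejected.")
```

Under this reading, the diversity in the paper's title would come from running such a search under several different directions or constraints and reporting mutually distinct semifactuals to the user.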
Related papers
- On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios [46.752418052725126]
We propose a novel framework for generating probabilistic monolithic explanations and model reconciling explanations.
For monolithic explanations, our approach integrates uncertainty by utilizing probabilistic logic to increase the probability of the explanandum.
For model reconciling explanations, we propose a framework that extends the logic-based variant of the model reconciliation problem to account for probabilistic human models.
arXiv Detail & Related papers (2024-05-29T16:07:31Z)
- Logic-based Explanations for Linear Support Vector Classifiers with Reject Option [0.0]
The Support Vector Classifier (SVC) is a well-known Machine Learning (ML) model for linear classification problems.
We propose a logic-based approach with formal guarantees on the correctness and minimality of explanations for linear SVCs with reject option.
arXiv Detail & Related papers (2024-03-24T15:14:44Z)
- Identifying Drivers of Predictive Aleatoric Uncertainty [2.5311562666866494]
We present a simple approach to explain predictive aleatoric uncertainties.
We estimate uncertainty as predictive variance by adapting a neural network with a Gaussian output distribution.
We quantify our findings with a nuanced benchmark analysis that includes real-world datasets.
arXiv Detail & Related papers (2023-12-12T13:28:53Z)
- Sound Explanation for Trustworthy Machine Learning [11.779125616468194]
We argue against the practice of interpreting black-box models via attributing scores to input components.
We then formalize the concept of sound explanation, which has been informally adopted in prior work.
We present the application of feature selection as a sound explanation for cancer prediction models to cultivate trust among clinicians.
arXiv Detail & Related papers (2023-06-08T19:58:30Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Model Agnostic Local Explanations of Reject [6.883906273999368]
The application of machine learning based decision making systems in safety critical areas requires reliable high certainty predictions.
Reject options are a common way of ensuring a sufficiently high certainty of predictions made by the system.
We propose a model agnostic method for locally explaining arbitrary reject options by means of interpretable models and counterfactual explanations.
arXiv Detail & Related papers (2022-05-16T12:42:34Z)
- Explaining Reject Options of Learning Vector Quantization Classifiers [6.125017875330933]
We propose to use counterfactual explanations for explaining rejects in machine learning models.
We investigate how to efficiently compute counterfactual explanations of different reject options for an important class of models.
arXiv Detail & Related papers (2022-02-15T08:16:10Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)