Finding Regions of Counterfactual Explanations via Robust Optimization
- URL: http://arxiv.org/abs/2301.11113v3
- Date: Thu, 26 Oct 2023 12:49:51 GMT
- Title: Finding Regions of Counterfactual Explanations via Robust Optimization
- Authors: Donato Maragno, Jannis Kurtz, Tabea E. Röber, Rob Goedhart, Ş. Ilker Birbil, Dick den Hertog
- Abstract summary: A counterfactual explanation (CE) is a minimally perturbed data point for which the decision of the model changes.
Most of the existing methods can only provide one CE, which may not be achievable for the user.
We derive an iterative method to calculate robust CEs that remain valid even after the features are slightly perturbed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanations play an important role in detecting bias and
improving the explainability of data-driven classification models. A
counterfactual explanation (CE) is a minimally perturbed data point for which the
decision of the model changes. Most of the existing methods can only provide
one CE, which may not be achievable for the user. In this work we derive an
iterative method to calculate robust CEs, i.e. CEs that remain valid even after
the features are slightly perturbed. To this end, our method provides a whole
region of CEs allowing the user to choose a suitable recourse to obtain a
desired outcome. We use algorithmic ideas from robust optimization and prove
convergence results for the most common machine learning methods including
logistic regression, decision trees, random forests, and neural networks. Our
experiments show that our method can efficiently generate globally optimal
robust CEs for a variety of common data sets and classification models.
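The abstract describes a cutting-plane style loop: solve for a candidate CE, search for a worst-case feature perturbation, and add a constraint whenever that perturbation invalidates the candidate. Below is a minimal sketch of that idea for a linear (logistic-regression) decision boundary; the function name, the use of scipy.optimize.minimize, and the squared-distance cost are illustrative assumptions, not the authors' implementation.

```python
# Sketch: iterative (cutting-plane) search for a robust CE, i.e. a point x
# that keeps w.x + b >= 0 for every perturbation d with ||d|| <= eps.
# Illustrative only; the paper also covers trees, forests, and neural nets.
import numpy as np
from scipy.optimize import minimize

def robust_ce(x0, w, b, eps=0.1, tol=1e-6, max_iter=50):
    scenarios = [np.zeros_like(x0)]          # start with the nominal scenario d = 0
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        # Master problem: closest point to x0 valid for all scenarios found so far.
        cons = [{"type": "ineq", "fun": lambda z, d=d: w @ (z + d) + b}
                for d in scenarios]
        x = minimize(lambda z: np.sum((z - x0) ** 2), x, constraints=cons).x
        # Adversarial subproblem: worst-case perturbation of x. For a linear
        # model it has the closed form d* = -eps * w / ||w||.
        d_star = -eps * w / np.linalg.norm(w)
        if w @ (x + d_star) + b >= -tol:
            return x                          # the whole eps-ball around x is valid
        scenarios.append(d_star)              # add a cut and resolve
    return x

# Toy usage: flip the decision for x0 under classifier sign(w.x + b).
x_rob = robust_ce(np.array([0.0, 0.0]), w=np.array([1.0, 1.0]), b=-1.0)
```

For a linear model the adversarial step has the closed form used above, so the loop terminates after one added cut; for trees, forests, and neural networks the subproblem is itself an optimization, which is where the paper's convergence analysis comes in.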
Related papers
- Interval Abstractions for Robust Counterfactual Explanations [15.954944873701503]
Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research.
CEs produced by existing methods often become invalid when slight changes occur in the parameters of the model they were generated for.
We propose a novel interval abstraction technique for machine learning models, which allows us to obtain provable robustness guarantees.
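A minimal illustration of the interval idea for a linear scorer is below: if the score's lower bound over all weight vectors in an elementwise interval stays positive, the CE provably survives those parameter changes. The interval arithmetic here is generic, not the paper's abstraction technique.

```python
# Bound w.x + b over all weights within elementwise intervals [w_lo, w_hi].
# A positive lower bound certifies the CE for every model in the interval.
import numpy as np

def score_bounds(x, w_lo, w_hi, b_lo, b_hi):
    lo = np.minimum(w_lo * x, w_hi * x).sum() + b_lo
    hi = np.maximum(w_lo * x, w_hi * x).sum() + b_hi
    return lo, hi

# Toy usage: a CE x checked against all models within +/-0.05 of the weights.
w, x = np.array([1.0, 1.0]), np.array([0.8, 0.8])
lo, hi = score_bounds(x, w - 0.05, w + 0.05, -1.0, -1.0)
robust = lo > 0   # True: the prediction cannot flip under these parameter shifts
```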
arXiv Detail & Related papers (2024-04-21T18:24:34Z)
- Introducing User Feedback-based Counterfactual Explanations (UFCE) [49.1574468325115]
Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in XAI.
UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features.
UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility.
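As a toy illustration of restricting CE search to user-specified actionable features (the greedy single-feature scan below is an assumed stand-in, not the UFCE algorithm):

```python
# Only features the user marks as actionable may change; keep the smallest
# single-feature edit that flips the classifier. Illustrative stand-in only.
import numpy as np

def constrained_ce(x, predict, actionable, candidate_steps):
    base = predict(x)
    best, best_cost = None, np.inf
    for i in actionable:                      # immutable features are never touched
        for step in candidate_steps:
            x_new = x.copy()
            x_new[i] += step
            if predict(x_new) != base and abs(step) < best_cost:
                best, best_cost = x_new, abs(step)
    return best                               # None if no single edit flips the label

# Toy usage with a thresholded linear model; feature 1 is immutable.
predict = lambda x: int(x @ np.array([1.0, 2.0]) - 1.5 > 0)
ce = constrained_ce(np.array([0.5, 0.2]), predict, actionable=[0],
                    candidate_steps=np.linspace(-2, 2, 81))
```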
arXiv Detail & Related papers (2024-02-26T20:09:44Z)
- A PSO Based Method to Generate Actionable Counterfactuals for High Dimensional Data [3.0320603363468845]
We describe an efficient and actionable counterfactual (CF) generation method based on particle swarm optimization (PSO).
The proposed algorithm enables greater control over the proximity and sparsity properties of the generated CFs.
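A generic PSO loop for CF search might look as follows; the fitness weights, swarm size, and penalty constant are arbitrary assumptions rather than the paper's settings.

```python
# Particles move through feature space, scored by distance to the query plus
# a large penalty when the prediction has not flipped. Generic PSO sketch.
import numpy as np

rng = np.random.default_rng(0)

def pso_cf(x0, predict, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = x0 + rng.normal(scale=0.5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    def fitness(p):   # small distance is good; invalid CFs get a large penalty
        return np.linalg.norm(p - x0) + (0.0 if predict(p) != predict(x0) else 100.0)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

# Toy usage: find a CF for a linear classifier in 2-D.
predict = lambda p: int(p @ np.array([1.0, 1.0]) - 1.0 > 0)
cf = pso_cf(np.array([0.0, 0.0]), predict, dim=2)
```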
arXiv Detail & Related papers (2023-09-30T18:08:00Z)
- Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation [19.065904250532995]
We propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE).
We formulate an iterative algorithm to compute provably robust CEs and prove its convergence, soundness and completeness.
We show that PROPLACE achieves state-of-the-art performance on metrics covering three evaluation aspects.
arXiv Detail & Related papers (2023-09-22T00:12:09Z)
- Counterfactual Explanation via Search in Gaussian Mixture Distributed Latent Space [19.312306559210125]
Counterfactual Explanations (CEs) are an important tool in Algorithmic Recourse for addressing two questions.
Guiding the user's interaction with AI systems by proposing easy-to-understand explanations is essential for the trustworthy adoption and long-term acceptance of AI systems.
We introduce a new method to generate CEs for a pre-trained binary classifier by first shaping the latent space of an autoencoder to be a mixture of Gaussian distributions.
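Schematically, the search can encode the query, walk the latent code toward the mean of the target class's Gaussian component, and decode the first point that flips the classifier. In the sketch below the encoder and decoder are identity stand-ins for a trained autoencoder; all details are assumed.

```python
# Counterfactual search in a Gaussian-mixture latent space: interpolate the
# latent code toward the target component's mean and decode the first point
# that flips the classifier. encode/decode are stand-ins for a trained model.
import numpy as np

encode = lambda x: x                 # stand-in for a trained encoder
decode = lambda z: z                 # stand-in for the matching decoder

def latent_ce(x, predict, target_mean, steps=50):
    z = encode(x)
    for t in np.linspace(0.0, 1.0, steps):
        z_t = (1 - t) * z + t * target_mean   # walk toward the target component
        x_t = decode(z_t)
        if predict(x_t) == 1:                 # first decoded point that flips
            return x_t
    return None

# Toy usage: the target component mean lies on the positive side of the model.
predict = lambda x: int(x @ np.array([1.0, 1.0]) - 1.0 > 0)
ce = latent_ce(np.array([0.0, 0.0]), predict, target_mean=np.array([1.0, 1.0]))
```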
arXiv Detail & Related papers (2023-07-25T10:21:26Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
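The column-wise iterative idea resembles a MICE-style loop; in the sketch below a single LinearRegression stands in for HyperImpute's automatic per-column model selection.

```python
# MICE-style iterative imputation: repeatedly re-fit a per-column model on the
# currently-complete data and re-predict that column's missing cells.
import numpy as np
from sklearn.linear_model import LinearRegression

def iterative_impute(X, n_rounds=5):
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # mean-initialize gaps
    for _ in range(n_rounds):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            other = np.delete(X, j, axis=1)           # all other columns as inputs
            model = LinearRegression().fit(other[obs], X[obs, j])
            X[miss[:, j], j] = model.predict(other[miss[:, j]])
    return X

# Toy usage on a matrix with two missing entries.
X = np.array([[1.0, 2.0], [2.0, np.nan], [np.nan, 6.0], [4.0, 8.0]])
X_imp = iterative_impute(X)
```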
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel Model-Agnostic Counterfactual Explanation (MACE) framework.
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, showing better validity, sparsity, and proximity.
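One possible reading of the gradient-less proximity step is a bisection pull of a valid counterfactual back toward the query, as sketched below; this is an assumption, not MACE's actual procedure.

```python
# Gradient-free proximity refinement: pull a valid counterfactual x_cf toward
# the query x0 by bisection for as long as the prediction stays flipped.
import numpy as np

def refine_cf(x0, x_cf, predict, iters=30):
    lo, hi = 0.0, 1.0              # fraction of the way from x_cf back to x0
    for _ in range(iters):
        mid = (lo + hi) / 2
        cand = (1 - mid) * x_cf + mid * x0
        if predict(cand) != predict(x0):
            lo = mid               # still valid: move closer to x0
        else:
            hi = mid               # invalid: back off
    return (1 - lo) * x_cf + lo * x0

# Toy usage with a linear model and an overly distant counterfactual.
predict = lambda x: int(x @ np.array([1.0, 1.0]) - 1.0 > 0)
x_cf = refine_cf(np.array([0.0, 0.0]), np.array([3.0, 3.0]), predict)
```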
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
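A compact sketch of the thresholding idea: calibrate t on labeled source data so that the fraction of source confidences above t matches source accuracy, then apply t to the unlabeled target set. The helper names below are made up.

```python
# Average Thresholded Confidence (ATC) sketch: pick a confidence threshold t
# on labeled source data, then predict target accuracy as the fraction of
# unlabeled target points whose confidence exceeds t.
import numpy as np

def atc_threshold(conf_src, correct_src):
    """Choose t so that mean(conf_src > t) equals source accuracy."""
    acc = correct_src.mean()
    return np.quantile(conf_src, 1.0 - acc)   # (1 - acc)-quantile matches the rate

def atc_predict_accuracy(conf_tgt, t):
    return (conf_tgt > t).mean()

# Toy usage with synthetic confidences.
rng = np.random.default_rng(0)
conf_src = rng.uniform(0.5, 1.0, 1000)
correct_src = rng.random(1000) < conf_src      # calibrated-ish source model
t = atc_threshold(conf_src, correct_src)
est = atc_predict_accuracy(rng.uniform(0.4, 1.0, 1000), t)
```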
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
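With exact class-conditional densities in hand (from normalizing flows in the paper), the Bayes error E_x[min_y P(y|x)] can be estimated by Monte Carlo; two Gaussians stand in for flow densities below.

```python
# Monte Carlo estimate of the Bayes error when class-conditional densities are
# known exactly. Two 1-D Gaussians stand in for normalizing-flow densities.
import numpy as np
from scipy.stats import norm

p0, p1 = norm(loc=-1.0), norm(loc=1.0)    # stand-in class-conditional densities
prior = 0.5

rng = np.random.default_rng(0)
n = 100_000
labels = rng.random(n) < prior            # sample a class, then a point from it
x = np.where(labels, p1.rvs(n, random_state=rng), p0.rvs(n, random_state=rng))

d0, d1 = prior * p0.pdf(x), prior * p1.pdf(x)
posterior_min = np.minimum(d0, d1) / (d0 + d1)   # min_y P(y | x)
bayes_error = posterior_min.mean()               # ~0.159 for these two Gaussians
```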
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
- Probabilistic Case-based Reasoning for Open-World Knowledge Graph Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving cases that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
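A toy version of path replay over a dict-based KB is sketched below; the entities, relations, and the hard-coded "similar case" are all invented for illustration.

```python
# Toy case-based reasoning over a KB: to predict a relation for an entity,
# replay a relation path that answered the same relation for a similar entity.
kb = {
    ("alice", "works_at"): "acme", ("acme", "based_in"): "berlin",
    ("alice", "lives_in"): "berlin",
    ("bob", "works_at"): "globex", ("globex", "based_in"): "paris",
}

def predict_lives_in(entity):
    # "Case": alice, whose lives_in answer is reached via works_at -> based_in.
    path = ("works_at", "based_in")
    node = entity
    for rel in path:
        node = kb.get((node, rel))
        if node is None:
            return None
    return node

print(predict_lives_in("bob"))   # replays alice's path -> "paris"
```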
arXiv Detail & Related papers (2020-10-07T17:48:12Z)
- Automatic selection of basis-adaptive sparse polynomial chaos expansions for engineering applications [0.0]
We describe three state-of-the-art basis-adaptive approaches for sparse chaos expansions.
We conduct an extensive benchmark in terms of global approximation accuracy on a large set of computational models.
We introduce a novel solver and basis adaptivity selection scheme guided by cross-validation error.
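Cross-validation-guided basis adaptivity can be sketched generically: grow the polynomial basis and keep the size with the best CV error. sklearn's polynomial features stand in for the paper's sparse-PCE solvers and basis schemes.

```python
# Grow the polynomial basis degree by degree; keep the degree with the best
# cross-validation score. A generic stand-in for sparse-PCE basis adaptivity.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.05, size=200)

best_degree, best_score = None, -np.inf
for degree in range(1, 8):                              # candidate basis sizes
    model = make_pipeline(PolynomialFeatures(degree), Ridge(alpha=1e-3))
    score = cross_val_score(model, X, y, cv=5).mean()   # R^2, higher is better
    if score > best_score:
        best_degree, best_score = degree, score

final = make_pipeline(PolynomialFeatures(best_degree), Ridge(alpha=1e-3)).fit(X, y)
```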
arXiv Detail & Related papers (2020-09-10T12:13:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.