Ordered Counterfactual Explanation by Mixed-Integer Linear Optimization
- URL: http://arxiv.org/abs/2012.11782v2
- Date: Sun, 14 Mar 2021 05:47:43 GMT
- Title: Ordered Counterfactual Explanation by Mixed-Integer Linear Optimization
- Authors: Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, Yuichi Ike, Kento
Uemura, Hiroki Arimura
- Abstract summary: We propose a new framework called Ordered Counterfactual Explanation (OrdCE).
We introduce a new objective function that evaluates a pair of an action and an order based on feature interaction.
Numerical experiments on real datasets demonstrated the effectiveness of our OrdCE in comparison with unordered CE methods.
- Score: 10.209615216208888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Post-hoc explanation methods for machine learning models have been widely
used to support decision-making. One of the popular methods is Counterfactual
Explanation (CE), also known as Actionable Recourse, which provides a user with
a perturbation vector of features that alters the prediction result. Given a
perturbation vector, a user can interpret it as an "action" for obtaining one's
desired decision result. In practice, however, showing only a perturbation
vector is often insufficient for users to execute the action. The reason is
that if there is an asymmetric interaction among features, such as causality,
the total cost of the action is expected to depend on the order of changing
features. Therefore, practical CE methods are required to provide an
appropriate order of changing features in addition to a perturbation vector.
For this purpose, we propose a new framework called Ordered Counterfactual
Explanation (OrdCE). We introduce a new objective function that evaluates a
pair of an action and an order based on feature interaction. To extract an
optimal pair, we propose a mixed-integer linear optimization approach with our
objective function. Numerical experiments on real datasets demonstrated the
effectiveness of our OrdCE in comparison with unordered CE methods.
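The core idea of the abstract, that the total cost of an action depends on the order in which features are changed when interactions are asymmetric, can be illustrated with a toy sketch. All feature names, base costs, and interaction values below are invented for illustration; the paper extracts the optimal (action, order) pair with a mixed-integer linear optimization, whereas this sketch simply enumerates permutations, which is only feasible for a handful of features.

```python
from itertools import permutations

# Hypothetical numbers for illustration only (not from the paper).
# base_cost[f] is the cost of changing feature f on its own.
# interaction[(g, f)] adjusts feature f's cost when g was changed first
# (negative = discount); absent pairs mean no interaction.
base_cost = {"income": 4.0, "savings": 3.0, "debt": 2.0}
interaction = {
    ("income", "savings"): -1.5,  # raising income first makes saving cheaper
    ("income", "debt"): -1.0,     # ...and paying down debt cheaper
    ("savings", "debt"): -0.5,
}

def order_cost(order):
    """Total cost of changing the features in the given order."""
    total = 0.0
    changed = []
    for f in order:
        cost = base_cost[f]
        for g in changed:
            cost += interaction.get((g, f), 0.0)
        total += cost
        changed.append(f)
    return total

# Brute-force stand-in for the paper's MILP: try every order.
best = min(permutations(base_cost), key=order_cost)
```

Here changing income first makes the same perturbation vector cheaper overall, which is exactly why an unordered CE can understate or misstate the cost of recourse.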
Related papers
- Simplifying debiased inference via automatic differentiation and probabilistic programming [1.0152838128195467]
'Dimple' takes as input computer code representing a parameter of interest and outputs an efficient estimator.
We provide a proof-of-concept Python implementation and showcase through examples how it allows users to go from parameter specification to efficient estimation with just a few lines of code.
arXiv Detail & Related papers (2024-05-14T14:56:54Z)
- Introducing User Feedback-based Counterfactual Explanations (UFCE) [49.1574468325115]
Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in XAI.
UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features.
UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility.
arXiv Detail & Related papers (2024-02-26T20:09:44Z)
- Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback [31.115084475673793]
The ensemble method is a promising way to mitigate the overestimation issue in Q-learning.
It is known that the estimation bias hinges heavily on the ensemble size.
We devise an ensemble method with two key steps: (a) approximation error characterization which serves as the feedback for flexibly controlling the ensemble size, and (b) ensemble size adaptation tailored towards minimizing the estimation bias.
arXiv Detail & Related papers (2023-06-20T22:06:14Z)
- Finding Regions of Counterfactual Explanations via Robust Optimization [0.0]
A counterfactual explanation (CE) is a minimal perturbed data point for which the decision of the model changes.
Most of the existing methods can only provide one CE, which may not be achievable for the user.
We derive an iterative method to calculate robust CEs that remain valid even after the features are slightly perturbed.
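For a linear classifier, the CE definition above has a closed form, which makes the robustness idea easy to illustrate: moving the point just past the decision boundary, plus a margin, yields a CE that stays valid under small feature perturbations. This is a minimal sketch under that linear-model assumption, not the paper's iterative method; `linear_ce` and `eps` are hypothetical names.

```python
import numpy as np

def linear_ce(w, b, x, eps=0.0):
    """Minimal perturbation of x that flips f(x) = sign(w.x + b).

    For a linear model the closest point across the hyperplane lies
    along w; eps adds a robustness margin on the far side so the
    flipped decision survives small perturbations of the features.
    """
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    score = w @ x + b
    # Choose t so that w.(x + t*w) + b = -sign(score) * eps
    delta = -(score + np.sign(score) * eps) * w / (w @ w)
    return x + delta
```

With `eps=0` this returns the boundary projection (the textbook minimal CE); a positive `eps` trades a slightly larger perturbation for validity under noise.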
arXiv Detail & Related papers (2023-01-26T14:06:26Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Meta-Wrapper: Differentiable Wrapping Operator for User Interest Selection in CTR Prediction [97.99938802797377]
Click-through rate (CTR) prediction, whose goal is to predict the probability of the user to click on an item, has become increasingly significant in recommender systems.
Recent deep learning models that automatically extract user interest from user behavior have achieved great success.
We propose a novel approach under the framework of the wrapper method, which is named Meta-Wrapper.
arXiv Detail & Related papers (2022-06-28T03:28:15Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Extension of Dynamic Mode Decomposition for dynamic systems with incomplete information based on t-model of optimal prediction [69.81996031777717]
The Dynamic Mode Decomposition has proved to be a very efficient technique to study dynamic data.
The application of this approach becomes problematic if the available data is incomplete because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem and solve it with the gradient-based optimization method.
arXiv Detail & Related papers (2021-05-03T09:25:59Z)
- Causality-based Counterfactual Explanation for Classification Models [11.108866104714627]
We propose a prototype-based counterfactual explanation framework (ProCE).
ProCE is capable of preserving the causal relationship underlying the features of the counterfactual data.
In addition, we design a novel gradient-free optimization based on the multi-objective genetic algorithm that generates the counterfactual explanations.
arXiv Detail & Related papers (2021-05-03T09:25:59Z)
- Consequence-aware Sequential Counterfactual Generation [5.71097144710995]
We propose a model-agnostic method for sequential counterfactual generation.
Our approach generates less costly solutions, is more efficient, and provides the user with a diverse set of solutions to choose from.
arXiv Detail & Related papers (2021-04-12T16:10:03Z)
- Fast Rates for Contextual Linear Optimization [52.39202699484225]
We show that a naive plug-in approach achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance.
Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
arXiv Detail & Related papers (2020-11-05T18:43:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.