Amortized Generation of Sequential Counterfactual Explanations for
Black-box Models
- URL: http://arxiv.org/abs/2106.03962v1
- Date: Mon, 7 Jun 2021 20:54:48 GMT
- Title: Amortized Generation of Sequential Counterfactual Explanations for
Black-box Models
- Authors: Sahil Verma, Keegan Hines, John P. Dickerson
- Abstract summary: Counterfactual explanations (CFEs) provide ``what if'' feedback about a model's decisions.
Current CFE approaches are single shot -- that is, they assume $x$ can change to $x'$ in a single time period.
We propose a novel approach that generates sequential CFEs that allow $x$ to move across intermediate states to a final state $x'$.
- Score: 26.91950709495675
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explainable machine learning (ML) has gained traction in recent years due to
the increasing adoption of ML-based systems in many sectors. Counterfactual
explanations (CFEs) provide ``what if'' feedback of the form ``if an input
datapoint were $x'$ instead of $x$, then an ML-based system's output would be
$y'$ instead of $y$.'' CFEs are attractive due to their actionable feedback,
amenability to existing legal frameworks, and fidelity to the underlying ML
model. Yet, current CFE approaches are single shot -- that is, they assume $x$
can change to $x'$ in a single time period. We propose a novel
stochastic-control-based approach that generates sequential CFEs, that is, CFEs
that allow $x$ to move stochastically and sequentially across intermediate
states to a final state $x'$. Our approach is model agnostic and black box.
Furthermore, calculation of CFEs is amortized such that once trained, it
applies to multiple datapoints without the need for re-optimization. In
addition to these primary characteristics, our approach admits optional
desiderata such as adherence to the data manifold, respect for causal
relations, and sparsity -- identified by past research as desirable properties
of CFEs. We evaluate our approach using three real-world datasets and show
successful generation of sequential CFEs that respect other counterfactual
desiderata.
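The abstract's core idea, moving an input toward a counterfactual through a sequence of small intermediate states rather than one jump, can be illustrated with a minimal greedy loop. This is only an illustrative sketch against a synthetic stand-in black box: the paper's actual method is an amortized stochastic-control policy, and the `predict` function, feature proxy score, and step size below are all hypothetical.

```python
# Illustrative sketch of a *sequential* counterfactual search against a
# black-box model. This greedy loop is NOT the paper's amortized
# stochastic-control method; it only shows the notion of moving x to x'
# through intermediate states. The model and scores are synthetic.
import numpy as np

def predict(x):
    # Stand-in black-box classifier: positive class iff x1 + x2 > 1.
    return int(x[0] + x[1] > 1.0)

def sequential_cfe(x, step=0.25, max_steps=10):
    """Change one feature by a small amount per time period until the
    black-box prediction flips, recording each intermediate state."""
    path = [x.copy()]
    for _ in range(max_steps):
        if predict(x) == 1:  # desired outcome reached
            return path
        # Try each single-feature perturbation (sparsity: one feature
        # per step) and keep the one closest to the decision boundary.
        candidates = []
        for i in range(len(x)):
            x_new = x.copy()
            x_new[i] += step
            candidates.append((x_new[0] + x_new[1], x_new))  # proxy score
        _, x = max(candidates, key=lambda c: c[0])
        path.append(x.copy())
    return path

path = sequential_cfe(np.array([0.2, 0.3]))
print(len(path), predict(path[-1]))  # prints the path length and final label 1
```

In the paper's approach, the per-step transition would instead be sampled from a learned stochastic policy, so that once trained it applies to new datapoints without re-optimization.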
Related papers
- Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change [4.239829789304117]
Counterfactual explanations (CFEs) guide users on how to adjust inputs to machine learning models to achieve desired outputs.
Current methods addressing this issue often support only specific models or change types.
This paper proposes a novel approach for generating CFEs that provides probabilistic guarantees for any model and change type.
arXiv Detail & Related papers (2024-08-09T03:35:53Z) - Counterfactual Explanations for Multivariate Time-Series without Training Datasets [4.039558709616107]
We present CFWoT, a novel reinforcement-learning-based CFE method that generates CFEs when training datasets are unavailable.
We demonstrate the performance of CFWoT against four baselines on several datasets.
arXiv Detail & Related papers (2024-05-28T20:15:09Z) - Online non-parametric likelihood-ratio estimation by Pearson-divergence
functional minimization [55.98760097296213]
We introduce a new framework for online non-parametric LRE (OLRE) for the setting where pairs of iid observations $(x_t \sim p, x'_t \sim q)$ are observed over time.
We provide theoretical guarantees for the performance of the OLRE method along with empirical validation in synthetic experiments.
arXiv Detail & Related papers (2023-11-03T13:20:11Z) - Flexible and Robust Counterfactual Explanations with Minimal Satisfiable
Perturbations [56.941276017696076]
We propose a conceptually simple yet effective solution named Counterfactual Explanations with Minimal Satisfiable Perturbations (CEMSP)
CEMSP constrains changing values of abnormal features with the help of their semantically meaningful normal ranges.
Compared to existing methods, we conduct comprehensive experiments on both synthetic and real-world datasets to demonstrate that our method provides more robust explanations while preserving flexibility.
arXiv Detail & Related papers (2023-09-09T04:05:56Z) - CPPF++: Uncertainty-Aware Sim2Real Object Pose Estimation by Vote Aggregation [67.12857074801731]
We introduce a novel method, CPPF++, designed for sim-to-real pose estimation.
To address the challenge posed by vote collision, we propose a novel approach that involves modeling the voting uncertainty.
We incorporate several innovative modules, including noisy pair filtering, online alignment optimization, and a feature ensemble.
arXiv Detail & Related papers (2022-11-24T03:27:00Z) - FeDXL: Provable Federated Learning for Deep X-Risk Optimization [105.17383135458897]
We tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing algorithms are applicable.
The challenges for designing an FL algorithm for X-risks lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines.
arXiv Detail & Related papers (2022-10-26T00:23:36Z) - LIFE: Learning Individual Features for Multivariate Time Series
Prediction with Missing Values [71.52335136040664]
We propose a Learning Individual Features (LIFE) framework, which provides a new paradigm for MTS prediction with missing values.
LIFE generates reliable features for prediction by using the correlated dimensions as auxiliary information and suppressing the interference from uncorrelated dimensions with missing values.
Experiments on three real-world data sets verify the superiority of LIFE to existing state-of-the-art models.
arXiv Detail & Related papers (2021-09-30T04:53:24Z) - CounterNet: End-to-End Training of Prediction Aware Counterfactual
Explanations [12.313007847721215]
CounterNet is an end-to-end learning framework which integrates predictive model training and the generation of counterfactual (CF) explanations.
Unlike post-hoc methods, CounterNet enables the optimization of the CF explanation generation only once together with the predictive model.
Our experiments on multiple real-world datasets show that CounterNet generates high-quality predictions.
arXiv Detail & Related papers (2021-09-15T20:09:13Z) - Counterfactual Explanations for Machine Learning: Challenges Revisited [6.939768185086755]
Counterfactual explanations (CFEs) are an emerging technique under the umbrella of interpretability of machine learning (ML) models.
They provide ``what if'' feedback of the form ``if an input datapoint were $x'$ instead of $x$, then an ML model's output would be $y'$ instead of $y$.''
arXiv Detail & Related papers (2021-06-14T20:56:37Z) - Model-Augmented Q-learning [112.86795579978802]
We propose a MFRL framework that is augmented with the components of model-based RL.
Specifically, we propose to estimate not only the $Q$-values but also both the transition and the reward with a shared network.
We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution which is identical to the solution obtained by learning with true reward.
arXiv Detail & Related papers (2021-02-07T17:56:50Z) - Competition analysis on the over-the-counter credit default swap market [0.0]
We study the competition between central counterparties through collateral requirements.
We present models that successfully estimate the initial margin requirements.
Second, we model counterparty choice on the interdealer market using a novel semi-supervised predictive task.
arXiv Detail & Related papers (2020-12-03T13:02:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.