counterfactuals: An R Package for Counterfactual Explanation Methods
- URL: http://arxiv.org/abs/2304.06569v2
- Date: Fri, 15 Sep 2023 19:01:33 GMT
- Title: counterfactuals: An R Package for Counterfactual Explanation Methods
- Authors: Susanne Dandl, Andreas Hofheinz, Martin Binder, Bernd Bischl, Giuseppe
Casalicchio
- Abstract summary: We introduce the counterfactuals R package, which provides a modular and unified interface for counterfactual explanation methods.
We implement three existing counterfactual explanation methods and propose some optional methodological extensions.
We show how to integrate additional counterfactual explanation methods into the package.
- Score: 9.505961054570523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactual explanation methods provide information on how feature values
of individual observations must be changed to obtain a desired prediction.
Despite the increasing number of methods proposed in research, only a few
implementations exist, and their interfaces and requirements vary widely. In this
work, we introduce the counterfactuals R package, which provides a modular and
unified R6-based interface for counterfactual explanation methods. We
implement three existing counterfactual explanation methods and propose
optional methodological extensions to generalize these methods to different
scenarios and to make them more comparable. We explain the structure and
workflow of the package using real use cases and show how to integrate
additional counterfactual explanation methods into the package. In addition, we
compare the implemented methods across a variety of models and datasets with
regard to the quality of their counterfactual explanations and their runtime
behavior.
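The typical workflow described in the paper is: wrap a fitted model in an iml::Predictor, instantiate one of the implemented R6 method classes, and call find_counterfactuals() on an observation of interest. Below is a minimal sketch of that workflow, assuming the CRAN packages counterfactuals, iml, and randomForest; class and argument names follow the package documentation but should be verified against the installed version.

```r
library(counterfactuals)
library(iml)
library(randomForest)

# Fit a black-box classifier and wrap it in an iml::Predictor
set.seed(123)
rf <- randomForest(Species ~ ., data = iris)
predictor <- iml::Predictor$new(rf, data = iris[, -5L], y = iris$Species, type = "prob")

# Instantiate one of the implemented methods (here: WhatIf for classification)
wi <- WhatIfClassif$new(predictor, n_counterfactuals = 5L)

# Find counterfactuals for an observation of interest: which feature
# changes would push the prediction towards "versicolor"?
x_interest <- iris[150L, -5L]
cfactuals <- wi$find_counterfactuals(
  x_interest, desired_class = "versicolor", desired_prob = c(0.5, 1)
)
cfactuals$data       # candidate counterfactual feature values
cfactuals$evaluate() # quality measures, e.g. distance to x_interest
```

Because all methods share the same R6 interface, swapping in another method (e.g., NICEClassif or MOCClassif) only changes the class that is instantiated.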
Related papers
- Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations [16.678003262147346]
We show that popular explanation methods are instances of the local function approximation (LFA) framework.
We set forth a guiding principle based on the function approximation perspective, considering a method to be effective if it recovers the underlying model.
We empirically validate our theoretical results using various real world datasets, model classes, and prediction tasks.
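To make the local function approximation idea concrete, here is a hedged, self-contained R sketch of one well-known instance: a LIME-style surrogate that fits a proximity-weighted linear model around the point being explained. The black-box function, sampling scheme, and weighting kernel are illustrative choices, not the paper's exact framework.

```r
# LIME-style local function approximation (illustrative sketch)
set.seed(1)
f <- function(X) X[, 1]^2 + sin(X[, 2])   # stand-in for a black-box model
x0 <- c(1, 2)                             # observation to explain

# Sample perturbations around x0, one column per feature
Z <- cbind(rnorm(200L, mean = x0[1], sd = 0.3),
           rnorm(200L, mean = x0[2], sd = 0.3))

# Weight each sample by its proximity to x0
w <- exp(-rowSums((Z - matrix(x0, 200L, 2L, byrow = TRUE))^2))

# The explanation is the coefficient vector of a weighted linear surrogate
surrogate <- lm(y ~ ., data = data.frame(Z, y = f(Z)), weights = w)
coef(surrogate)  # local linear attributions for the two features
```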
arXiv Detail & Related papers (2022-06-02T19:09:30Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE)
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, showing better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms [6.133522864509327]
CARLA (Counterfactual And Recourse LibrAry) is a Python library for benchmarking counterfactual explanation methods.
We provide an extensive benchmark of 11 popular counterfactual explanation methods.
We also provide a benchmarking framework for research on future counterfactual explanation methods.
arXiv Detail & Related papers (2021-08-02T11:00:43Z)
- Multivariate Data Explanation by Jumping Emerging Patterns Visualization [78.6363825307044]
We present VAX (multiVariate dAta eXplanation), a new VA method to support the identification and visual interpretation of patterns in multivariate data sets.
Unlike similar existing approaches, VAX uses the concept of Jumping Emerging Patterns to identify and aggregate several diversified patterns, producing explanations through logical combinations of data variables.
arXiv Detail & Related papers (2021-06-21T13:49:44Z)
- Evaluating Explanations: How much do explanations from the teacher aid students? [103.05037537415811]
We formalize the value of explanations using a student-teacher paradigm that measures the extent to which explanations improve student models in learning.
Unlike many prior proposals to evaluate explanations, our approach cannot be easily gamed, enabling principled, scalable, and automatic evaluation of attributions.
arXiv Detail & Related papers (2020-12-01T23:40:21Z)
- Explaining by Removing: A Unified Framework for Model Explanation [14.50261153230204]
Removal-based explanations are based on the principle of simulating feature removal to quantify each feature's influence.
We develop a framework that characterizes each method along three dimensions: 1) how the method removes features, 2) what model behavior the method explains, and 3) how the method summarizes each feature's influence.
This newly understood class of explanation methods has rich connections that we examine using tools that have been largely overlooked by the explainability literature.
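As a toy illustration of the removal principle (not the paper's framework itself), the following hedged R sketch uses permutation as a simple stand-in for feature removal and quantifies each feature's influence as the resulting drop in training accuracy:

```r
# Removal-based explanation via permutation (illustrative sketch)
library(randomForest)
set.seed(42)
rf <- randomForest(Species ~ ., data = iris)
baseline <- mean(predict(rf, iris) == iris$Species)

# For each feature: permute it ("remove" its information) and measure
# how much the model's accuracy degrades
sapply(setdiff(names(iris), "Species"), function(feat) {
  permuted <- iris
  permuted[[feat]] <- sample(permuted[[feat]])
  baseline - mean(predict(rf, permuted) == iris$Species)
})
```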
arXiv Detail & Related papers (2020-11-21T00:47:48Z)
- Feature Removal Is a Unifying Principle for Model Explanation Methods [14.50261153230204]
We examine the literature and find that many methods are based on a shared principle of explaining by removing.
We develop a framework for removal-based explanations that characterizes each method along three dimensions.
Our framework unifies 26 existing methods, including several of the most widely used approaches.
arXiv Detail & Related papers (2020-11-06T22:37:55Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
- Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? [97.77183117452235]
We carry out human subject tests to isolate the effect of algorithmic explanations on model interpretability.
Clear evidence of method effectiveness is found in very few cases.
Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability.
arXiv Detail & Related papers (2020-05-04T20:35:17Z)
- Multi-Objective Counterfactual Explanations [0.7349727826230864]
We propose the Multi-Objective Counterfactuals (MOC) method, which translates the counterfactual search into a multi-objective optimization problem.
Our approach not only returns a diverse set of counterfactuals with different trade-offs between the proposed objectives, but also maintains diversity in feature space.
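MOC is one of the three methods implemented in the counterfactuals package described above. A hedged sketch of invoking it, reusing the predictor and x_interest objects from the earlier example (argument names are illustrative and should be checked against the package documentation):

```r
library(counterfactuals)

# Multi-objective search for counterfactuals; returns a set of candidates
# with different trade-offs between the objectives (e.g., distance to the
# desired prediction, proximity to x_interest, sparsity, plausibility)
moc <- MOCClassif$new(predictor, n_generations = 30L)
cfactuals <- moc$find_counterfactuals(
  x_interest, desired_class = "versicolor", desired_prob = c(0.5, 1)
)
cfactuals$evaluate()  # objective values for each candidate counterfactual
```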
arXiv Detail & Related papers (2020-04-23T13:56:39Z)
- There and Back Again: Revisiting Backpropagation Saliency Methods [87.40330595283969]
Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample.
A popular class of such methods is based on backpropagating a signal and analyzing the resulting gradient.
We propose a single framework under which several such methods can be unified.
arXiv Detail & Related papers (2020-04-06T17:58:08Z)