Feature-based Learning for Diverse and Privacy-Preserving Counterfactual
Explanations
- URL: http://arxiv.org/abs/2209.13446v5
- Date: Thu, 1 Jun 2023 03:08:00 GMT
- Title: Feature-based Learning for Diverse and Privacy-Preserving Counterfactual
Explanations
- Authors: Vy Vo, Trung Le, Van Nguyen, He Zhao, Edwin Bonilla, Gholamreza
Haffari, Dinh Phung
- Abstract summary: Interpretable machine learning seeks to understand the reasoning process of complex black-box systems.
One flourishing approach is through counterfactual explanations, which provide suggestions on what a user can do to alter an outcome.
- Score: 46.89706747651661
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpretable machine learning seeks to understand the reasoning process of
complex black-box systems that have long been notorious for their lack of explainability.
One flourishing approach is through counterfactual explanations, which provide
suggestions on what a user can do to alter an outcome. A counterfactual example must
not only counter the original prediction from the black-box classifier but also satisfy
various constraints for practical applications. Diversity is one of these critical
constraints, yet it remains less discussed. While diverse counterfactuals are ideal, it is
computationally challenging to address diversity simultaneously with the other
constraints. Furthermore, there is a growing privacy concern over released
counterfactual data. To this end, we propose a feature-based learning framework that
effectively handles the counterfactual constraints and contributes to the limited pool
of private explanation models. We demonstrate the flexibility and effectiveness of our
method in generating diverse counterfactuals that are actionable and plausible. Our
counterfactual engine is more efficient than counterparts of the same capacity while
yielding the lowest re-identification risk.
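
The abstract describes the general objective of counterfactual generation: perturb an input so that a black-box classifier's prediction flips, while keeping the counterfactuals close to the original input and mutually diverse. The sketch below is a minimal, generic illustration of that objective only; it is not the paper's feature-based framework, and the toy classifier, loss weights, and random-search procedure are assumptions made purely for demonstration.

```python
# Minimal, generic sketch of diverse counterfactual search (NOT the paper's
# feature-based framework). The toy classifier, loss weights, and random-search
# procedure are assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy "black-box": a fixed linear classifier over 5 features.
w, b = np.array([1.5, -2.0, 0.8, 0.0, 1.0]), -0.2

def predict_proba(X):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def loss(cfs, x, target=1.0, lam_prox=0.5, lam_div=0.3):
    """Validity + proximity - diversity for a set of candidate counterfactuals."""
    validity = np.mean((target - predict_proba(cfs)) ** 2)      # flip the outcome
    proximity = np.mean(np.linalg.norm(cfs - x, axis=1))        # stay close to x
    diffs = cfs[:, None, :] - cfs[None, :, :]                   # pairwise differences
    diversity = np.mean(np.linalg.norm(diffs, axis=-1))         # spread the set out
    return validity + lam_prox * proximity - lam_div * diversity

def search_counterfactuals(x, k=3, steps=2000, sigma=0.1):
    """Joint random search over k counterfactuals (simple, derivative-free)."""
    cfs = np.tile(x, (k, 1)) + rng.normal(0.0, sigma, (k, x.size))
    best = loss(cfs, x)
    for _ in range(steps):
        cand = cfs + rng.normal(0.0, sigma, cfs.shape)
        cand_loss = loss(cand, x)
        if cand_loss < best:
            cfs, best = cand, cand_loss
    return cfs

x = np.array([-1.0, 1.0, -0.5, 0.2, -0.3])       # instance predicted as class 0
cfs = search_counterfactuals(x)
print("original prob:", predict_proba(x[None])[0])
print("counterfactual probs:", predict_proba(cfs))
```

The negative pairwise-distance term is one simple way to reward diversity; the actionability, plausibility, and privacy constraints discussed in the abstract are not modeled in this sketch.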
Related papers
- Promoting Counterfactual Robustness through Diversity [10.223545393731115]
Counterfactual explainers may lack robustness in the sense that a minor change in the input can cause a major change in the explanation.
We propose an approximation algorithm that uses a diversity criterion to select a feasible number of most relevant explanations.
arXiv Detail & Related papers (2023-12-11T17:49:25Z)
- Flexible and Robust Counterfactual Explanations with Minimal Satisfiable Perturbations [56.941276017696076]
We propose a conceptually simple yet effective solution named Counterfactual Explanations with Minimal Satisfiable Perturbations (CEMSP).
CEMSP constrains the changed values of abnormal features to lie within their semantically meaningful normal ranges (a minimal illustration of such range constraints appears after this list).
Comprehensive experiments on both synthetic and real-world datasets demonstrate that, compared to existing methods, CEMSP provides more robust explanations while preserving flexibility.
arXiv Detail & Related papers (2023-09-09T04:05:56Z)
- Endogenous Macrodynamics in Algorithmic Recourse [52.87956177581998]
Existing work on Counterfactual Explanations (CE) and Algorithmic Recourse (AR) has largely focused on single individuals in a static environment.
We show that many of the existing methodologies can be collectively described by a generalized framework.
We then argue that the existing framework does not account for a hidden external cost of recourse that reveals itself only when the endogenous dynamics of recourse are studied at the group level.
arXiv Detail & Related papers (2023-08-16T07:36:58Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Generating robust counterfactual explanations [60.32214822437734]
The quality of a counterfactual depends on several criteria: realism, actionability, validity, robustness, etc.
In this paper, we are interested in the notion of robustness of a counterfactual. More precisely, we focus on robustness to counterfactual input changes.
We propose a new framework, CROCO, that generates robust counterfactuals while effectively managing this trade-off, and that guarantees the user a minimal level of robustness.
arXiv Detail & Related papers (2023-04-24T09:00:31Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, achieving better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Finding Counterfactual Explanations through Constraint Relaxations [6.961253535504979]
Interactive constraint systems often suffer from infeasibility (no solution) due to conflicting user constraints.
A common approach to recovering from infeasibility is to eliminate the constraints that cause the conflicts in the system.
We propose an iterative method based on conflict detection and maximal relaxations in over-constrained constraint satisfaction problems.
arXiv Detail & Related papers (2022-04-07T13:18:54Z)
- Efficient computation of contrastive explanations [8.132423340684568]
We study the relation of contrastive and counterfactual explanations.
We propose a 2-phase algorithm for efficiently computing (plausible) positives of many standard machine learning models.
arXiv Detail & Related papers (2020-10-06T11:50:28Z)
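
The CEMSP summary above mentions constraining changes of abnormal features to their semantically meaningful normal ranges. The snippet below is only a rough illustration of that idea under assumed feature names, ranges, and values; it is not the CEMSP algorithm itself.

```python
# Rough illustration of range-constrained feature changes (NOT the CEMSP
# algorithm itself). Feature names, normal ranges, and values are assumptions.
import numpy as np

normal_ranges = {            # assumed semantically meaningful normal ranges
    "glucose": (70.0, 100.0),
    "bmi": (18.5, 25.0),
    "age": (0.0, 120.0),
}
feature_names = list(normal_ranges.keys())

def constrain_counterfactual(x, proposed):
    """Keep normal features fixed; clip abnormal ones into their normal range."""
    cf = x.copy()
    for i, name in enumerate(feature_names):
        lo, hi = normal_ranges[name]
        if not (lo <= x[i] <= hi):                # feature is currently abnormal
            cf[i] = np.clip(proposed[i], lo, hi)  # change it, but stay in range
    return cf

x = np.array([150.0, 31.0, 45.0])         # glucose and bmi are out of range
proposed = np.array([60.0, 24.0, 30.0])   # raw suggestion from some optimizer
print(constrain_counterfactual(x, proposed))   # -> [70. 24. 45.]
```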