Generating collective counterfactual explanations in score-based
classification via mathematical optimization
- URL: http://arxiv.org/abs/2310.12822v1
- Date: Thu, 19 Oct 2023 15:18:42 GMT
- Title: Generating collective counterfactual explanations in score-based
classification via mathematical optimization
- Authors: Emilio Carrizosa, Jasone Ramírez-Ayerbe and Dolores Romero Morales
- Abstract summary: A counterfactual explanation of an instance indicates how this instance should be minimally modified so that the perturbed instance is classified in the desired class.
Most of the Counterfactual Analysis literature focuses on the single-instance single-counterfactual setting.
By means of novel Mathematical Optimization models, we provide a counterfactual explanation for each instance in a group of interest.
- Score: 4.281723404774889
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Due to the increasing use of Machine Learning models in high-stakes
decision-making settings, it has become crucial to have tools to
understand how models arrive at decisions. Assuming a trained Supervised
Classification model, explanations can be obtained via counterfactual analysis:
a counterfactual explanation of an instance indicates how this instance should
be minimally modified so that the perturbed instance is classified in the
desired class by the Machine Learning classification model. Most of the
Counterfactual Analysis literature focuses on the single-instance
single-counterfactual setting, in which the analysis is done for one single
instance to provide one single explanation. Taking a stakeholder's perspective,
in this paper we introduce the so-called collective counterfactual
explanations. By means of novel Mathematical Optimization models, we provide a
counterfactual explanation for each instance in a group of interest, so that
the total cost of the perturbations is minimized under some linking
constraints. Making the process of constructing counterfactuals collective
instead of individual enables us to detect the features that are critical to
the entire dataset to have the individuals classified in the desired class. Our
methodology allows for some instances to be treated individually, performing
the collective counterfactual analysis for a fraction of records of the group
of interest. This way, outliers are identified and handled appropriately. Under
some assumptions on the classifier and the space in which counterfactuals are
sought, finding collective counterfactuals is reduced to solving a convex
quadratic linearly constrained mixed integer optimization problem, which, for
datasets of moderate size, can be solved to optimality using existing solvers.
The performance of our approach is illustrated on real-world datasets,
demonstrating its usefulness.
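As a rough illustration of the reduction described in the abstract, the sketch below computes minimal-cost counterfactuals for a linear score-based classifier under a squared-L2 cost, where the problem has a closed-form solution per instance. This is an assumption-laden simplification, not the paper's model: the function name, the linear classifier, and the closed form are ours, and the paper's collective setting adds linking constraints across instances that generally require a mixed-integer quadratic solver.

```python
import numpy as np

def individual_counterfactual(x, w, b):
    """Minimal L2 counterfactual for a linear classifier score(x) = w @ x + b.

    If score(x) < 0, the closest point (in L2) on the desired side of the
    decision boundary is x + delta with delta = (-score / ||w||^2) * w.
    If x is already classified in the desired class, it is returned unchanged.
    """
    score = w @ x + b
    if score >= 0:                      # already in the desired class
        return x.copy()
    delta = (-score) / (w @ w) * w      # project onto the boundary
    return x + delta

# Toy example: instance with score 1*1 + 2*0.5 - 4 = -2 (undesired class).
w = np.array([1.0, 2.0])
b = -4.0
x = np.array([1.0, 0.5])
x_cf = individual_counterfactual(x, w, b)
print(x_cf)                             # perturbed instance on the boundary
print(round(float(w @ x_cf + b), 6))    # its score is now 0.0
```

Applying this independently to every instance in a group recovers the single-instance single-counterfactual baseline; the paper's collective models instead minimize the total perturbation cost jointly, with linking constraints (e.g. restricting which features may be perturbed across the whole group), which is where the mixed-integer structure arises.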
Related papers
- CLIMAX: An exploration of Classifier-Based Contrastive Explanations [5.381004207943597]
We propose a novel post-hoc model XAI technique that provides contrastive explanations justifying the classification of a black box.
Our method, which we refer to as CLIMAX, is based on local classifiers.
We show that we achieve better consistency as compared to baselines such as LIME, BayLIME, and SLIME.
arXiv Detail & Related papers (2023-07-02T22:52:58Z)
- VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods to make local explanations of machine learning decisions.
We present VCNet-Variational Counter Net, a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able to both generate predictions, and to generate counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z)
- Supervised Feature Compression based on Counterfactual Analysis [3.2458225810390284]
This work aims to leverage Counterfactual Explanations to detect the important decision boundaries of a pre-trained black-box model.
Using the discretized dataset, an optimal Decision Tree can be trained that resembles the black-box model, but that is interpretable and compact.
arXiv Detail & Related papers (2022-11-17T21:16:14Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Scalable Optimal Classifiers for Adversarial Settings under Uncertainty [10.90668635921398]
We consider the problem of finding optimal classifiers in an adversarial setting where the class-1 data is generated by an attacker whose objective is not known to the defender.
We show that this low-dimensional characterization enables us to develop a training method to compute provably approximately optimal classifiers in a scalable manner.
arXiv Detail & Related papers (2021-06-28T13:33:53Z)
- From Canonical Correlation Analysis to Self-supervised Graph Neural Networks [99.44881722969046]
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
We optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis.
Our method performs competitively on seven public graph datasets.
arXiv Detail & Related papers (2021-06-23T15:55:47Z)
- Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning [0.5729426778193398]
We propose a deep reinforcement learning approach that transforms the optimization procedure into an end-to-end learnable process.
Our experiments on real-world data show that our method is model-agnostic, relying only on feedback from model predictions.
arXiv Detail & Related papers (2021-06-04T16:54:36Z)
- Model-Agnostic Explanations using Minimal Forcing Subsets [11.420687735660097]
We propose a new model-agnostic algorithm to identify a minimal set of training samples that are indispensable for a given model's decision.
Our algorithm identifies such a set of "indispensable" samples iteratively by solving a constrained optimization problem.
Results show that our algorithm is an effective and easy-to-comprehend tool that helps to better understand local model behavior.
arXiv Detail & Related papers (2020-11-01T22:45:16Z)
- Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel framework of classifier with flexibility on the model and optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.