Counterfactual Explanations Using Optimization With Constraint Learning
- URL: http://arxiv.org/abs/2209.10997v1
- Date: Thu, 22 Sep 2022 13:27:21 GMT
- Title: Counterfactual Explanations Using Optimization With Constraint Learning
- Authors: Donato Maragno, Tabea E. Röber, Ilker Birbil
- Abstract summary: We propose a generic and flexible approach to counterfactual explanations using optimization with constraint learning (CE-OCL).
Specifically, we discuss how we can leverage an optimization with constraint learning framework for the generation of counterfactual explanations.
We also propose two novel modeling approaches to address data manifold closeness and diversity, which are two key criteria for practical counterfactual explanations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactual explanations embody one of the many interpretability
techniques that receive increasing attention from the machine learning
community. Their potential to make model predictions more intelligible to the user
is considered invaluable. To increase their adoption in practice, several
criteria that counterfactual explanations should adhere to have been put
forward in the literature. We propose counterfactual explanations using
optimization with constraint learning (CE-OCL), a generic and flexible approach
that addresses all these criteria and allows room for further extensions.
Specifically, we discuss how we can leverage an optimization with constraint
learning framework for the generation of counterfactual explanations, and how
components of this framework readily map to the criteria. We also propose two
novel modeling approaches to address data manifold closeness and diversity,
which are two key criteria for practical counterfactual explanations. We test
CE-OCL on several datasets and present our results in a case study. Compared
against the current state-of-the-art methods, CE-OCL allows for more
flexibility and has an overall superior performance in terms of several
evaluation metrics proposed in related work.
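For intuition, below is a minimal sketch of the core mechanism: a trained classifier is embedded in an optimization problem that searches for the closest instance receiving the desired prediction. The logistic-regression model, the L1 objective, and the soft-penalty constraint are illustrative assumptions; CE-OCL itself works through an optimization-with-constraint-learning framework (and additionally handles data manifold closeness and diversity), which this sketch does not reproduce.

```python
# Minimal sketch: embed a trained classifier in an optimization problem and
# search for the closest instance that receives the desired prediction.
# The model choice, L1 objective, and soft-penalty constraint are
# illustrative assumptions, not the CE-OCL formulation itself.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple classifier whose prediction we want to flip.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

x0 = X[0]                           # factual instance
target = 1 - clf.predict([x0])[0]   # desired (flipped) class

def objective(x):
    # Proximity to the factual instance, plus a penalty that pushes the
    # predicted probability of the target class above 0.5. The penalty
    # plays the role of the learned constraint a solver would enforce.
    proximity = np.linalg.norm(x - x0, ord=1)
    p_target = clf.predict_proba(x.reshape(1, -1))[0, target]
    return proximity + 10.0 * max(0.0, 0.501 - p_target)

res = minimize(objective, x0, method="Nelder-Mead")
x_cf = res.x
print("factual class:       ", clf.predict([x0])[0])
print("counterfactual class:", clf.predict([x_cf])[0])
print("L1 distance:         ", np.linalg.norm(x_cf - x0, ord=1))
```

Here the classifier's predicted probability stands in for the learned constraint that a mathematical-optimization solver would enforce as a hard constraint.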
Related papers
- Introducing User Feedback-based Counterfactual Explanations (UFCE) [49.1574468325115]
Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in XAI.
UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features.
UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility.
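As a rough illustration of the constraint mechanism described above (not the UFCE algorithm itself), the sketch below restricts a random counterfactual search to a user-chosen subset of actionable features with user-provided bounds; the sampling strategy and all names are illustrative assumptions.

```python
# Hedged sketch: restrict the counterfactual search to user-chosen actionable
# features with user-provided bounds, and keep the closest valid candidate.
# This does not reproduce the UFCE algorithm; it only illustrates the idea.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]
target = 1 - clf.predict([x0])[0]          # desired (flipped) class
actionable = [1, 3]                        # user: only these may change
bounds = {1: (-2.0, 2.0), 3: (-2.0, 2.0)}  # user-provided feasible ranges

cands = np.tile(x0, (2000, 1))             # candidates differ from x0 only
for j in actionable:                       # in the actionable features
    cands[:, j] = rng.uniform(*bounds[j], size=len(cands))

valid = cands[clf.predict(cands) == target]
if len(valid) > 0:
    dists = np.abs(valid - x0).sum(axis=1)  # L1 favors sparse, close changes
    print("closest valid counterfactual:", valid[np.argmin(dists)])
```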
arXiv Detail & Related papers (2024-02-26T20:09:44Z)
- Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals [67.64770842323966]
Causal explanations of predictions of NLP systems are essential to ensure safety and establish trust.
Existing methods often fall short of explaining model predictions effectively or efficiently.
We propose two approaches for counterfactual (CF) approximation.
arXiv Detail & Related papers (2023-10-01T07:31:04Z)
- Extended High Utility Pattern Mining: An Answer Set Programming Based Framework and Applications [0.0]
Rule-based languages like ASP seem well suited for specifying user-provided criteria to assess pattern utility.
We introduce a new framework that allows for new classes of utility criteria not considered in the previous literature.
We exploit it as a building block for the definition of an innovative method for predicting ICU admission for COVID-19 patients.
arXiv Detail & Related papers (2023-03-23T11:42:57Z)
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
- Task-Free Continual Learning via Online Discrepancy Distance Learning [11.540150938141034]
This paper develops a new theoretical analysis framework that provides generalization bounds based on the discrepancy distance between the visited samples and all the information made available for training the model.
Inspired by this theoretical model, we propose a new approach, Online Discrepancy Distance Learning (ODDL), enabled by a dynamic component expansion mechanism for a mixture model.
arXiv Detail & Related papers (2022-10-12T20:44:09Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
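The following is a hedged sketch of generalized iterative imputation: each column with missing values is repeatedly re-imputed by a model fit on the remaining columns. HyperImpute additionally selects the per-column model automatically; here a fixed random-forest regressor stands in for that selection step, so the code is illustrative rather than the library's API.

```python
# Hedged sketch of iterative, column-wise imputation. HyperImpute's automatic
# per-column model selection is replaced by a fixed regressor; everything
# here is an illustrative assumption, not the library's implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
mask = rng.random(X.shape) < 0.1        # knock out ~10% of entries
X_miss = X.copy()
X_miss[mask] = np.nan

# Initialize missing entries with column means, then refine iteratively.
X_imp = np.where(np.isnan(X_miss), np.nanmean(X_miss, axis=0), X_miss)

for _ in range(5):                      # a few rounds of column-wise refits
    for j in range(X.shape[1]):
        miss_j = np.isnan(X_miss[:, j])
        if not miss_j.any():
            continue
        others = np.delete(X_imp, j, axis=1)
        model = RandomForestRegressor(n_estimators=50, random_state=0)
        model.fit(others[~miss_j], X_imp[~miss_j, j])
        X_imp[miss_j, j] = model.predict(others[miss_j])

print("RMSE on imputed entries:",
      np.sqrt(np.mean((X_imp[mask] - X[mask]) ** 2)))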
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework, Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity.
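One ingredient mentioned above, gradient-less improvement of proximity, can be illustrated by bisecting along the segment between the factual point and an already-valid counterfactual. This simple stand-in is an assumption and does not reproduce MACE's RL-based candidate search or its actual descent method.

```python
# Hedged sketch: improve a valid counterfactual's proximity without gradients
# by bisecting toward the factual point while prediction validity holds.
# A stand-in for MACE's gradient-less descent step, not the method itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

x0 = X[0]
target = 1 - clf.predict([x0])[0]
# Assume some search already produced a valid but distant counterfactual:
cand = next(x for x in X if clf.predict([x])[0] == target)

lo, hi = 0.0, 1.0                      # fraction of the way from x0 to cand
for _ in range(30):                    # bisect toward x0 while staying valid
    mid = (lo + hi) / 2
    x_mid = x0 + mid * (cand - x0)
    if clf.predict([x_mid])[0] == target:
        hi = mid                       # still valid: move closer to x0
    else:
        lo = mid                       # invalid: back off toward cand

x_cf = x0 + hi * (cand - x0)
print("distance before:", np.linalg.norm(cand - x0))
print("distance after: ", np.linalg.norm(x_cf - x0))
```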
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Optimization with Constraint Learning: A Framework and Survey [0.0]
This paper provides a framework for Optimization with Constraint Learning (OCL).
This framework includes the following steps: (i) setup of the conceptual optimization model, (ii) data gathering and preprocessing, (iii) selection and training of predictive models, (iv) resolution of the optimization model, and (v) verification and improvement of the optimization model.
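A compact sketch of these five steps on an invented toy problem: a quality model learned from data (step iii) is embedded as a soft constraint when optimizing a cost over bounded decision variables (step iv), followed by a verification pass (step v). The setup and all names are illustrative assumptions, not an example from the survey.

```python
# Sketch of the five OCL steps on an invented toy problem; the setup and all
# names are illustrative assumptions, not an example from the survey.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression

# (i) conceptual model: minimize cost c.x s.t. quality(x) >= 1.5, 0 <= x <= 1,
#     where quality(.) has no closed form and must be learned from data.
# (ii) data gathering: past decisions with their observed quality outcomes.
rng = np.random.default_rng(0)
X_hist = rng.random((300, 3))
quality = X_hist @ np.array([1.0, 2.0, 0.5]) + rng.normal(0, 0.05, 300)

# (iii) select and train a predictive model of the quality constraint.
q_model = LinearRegression().fit(X_hist, quality)

# (iv) resolve the optimization model, with the learned constraint embedded
#      as a soft penalty on predicted-quality shortfall.
c = np.array([3.0, 5.0, 1.0])
def total_cost(x):
    shortfall = max(0.0, 1.5 - q_model.predict(x.reshape(1, -1))[0])
    return c @ x + 100.0 * shortfall

res = minimize(total_cost, x0=np.full(3, 0.5), bounds=[(0, 1)] * 3,
               method="Powell")

# (v) verify the optimal decision against the learned constraint.
print("decision:", np.round(res.x, 3),
      "predicted quality:", round(q_model.predict(res.x.reshape(1, -1))[0], 3))
```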
arXiv Detail & Related papers (2021-10-05T15:42:06Z)
- Joint Contrastive Learning with Infinite Possibilities [114.45811348666898]
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL).
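For background, the sketch below computes a standard InfoNCE-style contrastive loss, the family that JCL builds on; JCL's probabilistic treatment of infinitely many positive pairs is not reproduced here, and the shapes and temperature are illustrative assumptions.

```python
# Background sketch of a standard InfoNCE-style contrastive loss; JCL's
# derivation is not reproduced. Shapes and temperature are assumptions.
import numpy as np

rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 16))           # anchor embeddings (batch of 8)
z_p = z_a + rng.normal(0, 0.1, (8, 16))  # positives: augmented views

def l2_normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

z_a, z_p = l2_normalize(z_a), l2_normalize(z_p)
sim = z_a @ z_p.T / 0.1                  # cosine similarities / temperature

# InfoNCE: each anchor must pick out its own positive among the batch.
log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_softmax))
print("contrastive loss:", loss)
```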
arXiv Detail & Related papers (2020-09-30T16:24:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.