A PSO Based Method to Generate Actionable Counterfactuals for High
Dimensional Data
- URL: http://arxiv.org/abs/2311.12825v2
- Date: Thu, 30 Nov 2023 06:12:38 GMT
- Title: A PSO Based Method to Generate Actionable Counterfactuals for High
Dimensional Data
- Authors: Shashank Shekhar, Asif Salim, Adesh Bansode, Vivaswan Jinturkar,
Anirudha Nayak
- Abstract summary: We describe an efficient and actionable counterfactual (CF) generation method based on particle swarm optimization (PSO). An algorithm is proposed that incorporates these features and enables greater control over the proximity and sparsity of the generated CFs.
- Score: 3.0320603363468845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanations (CFE) are methods that explain a machine learning
model by giving an alternate class prediction of a data point with some minimal
changes to its features. It helps users identify the data attributes that
caused an undesirable prediction, such as a loan or credit card rejection. We
describe an efficient and actionable counterfactual (CF) generation method
based on particle swarm optimization (PSO). We propose a simple objective
function for the optimization of the instance-centric CF generation problem.
PSO brings flexibility in carrying out multi-objective optimization in large
dimensions, generating multiple CFs, and setting box constraints or marking
data attributes immutable. An algorithm is proposed that incorporates these
features and enables greater control over the proximity and sparsity of the
generated CFs. The proposed algorithm is evaluated with a set of
actionability metrics on real-world datasets, and the results are superior to
those of the state of the art.
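The ingredients the abstract names (an instance-centric objective, box constraints, immutable attributes, a particle swarm search) can be sketched as follows. This is a minimal illustration only, assuming a toy logistic model and a generic prediction-loss-plus-L1-proximity objective with weight `lam`; the function names, objective, and hyperparameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def predict_proba(x):
    # Toy stand-in for any black-box classifier: logistic of a linear score.
    return 1.0 / (1.0 + np.exp(-(x @ np.array([1.5, -2.0, 0.5]) - 0.5)))

def pso_counterfactual(x0, target=1, immutable=(), bounds=(0.0, 1.0),
                       n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                       lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    d, (lo, hi) = x0.size, bounds
    idx = list(immutable)
    pos = rng.uniform(lo, hi, size=(n_particles, d))
    pos[0] = x0                      # seed one particle at the query point
    pos[:, idx] = x0[idx]            # pin immutable attributes
    vel = np.zeros_like(pos)

    def objective(p):
        pred_loss = (target - predict_proba(p)) ** 2   # push toward the target class
        proximity = lam * np.abs(p - x0).sum(axis=1)   # L1 distance favors sparse changes
        return pred_loss + proximity

    pbest, pbest_f = pos.copy(), objective(pos)
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)               # enforce box constraints
        pos[:, idx] = x0[idx]                          # re-pin immutable attributes
        f = objective(pos)
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

x0 = np.array([0.2, 0.9, 0.5])                         # query point, predicted class 0
cf = pso_counterfactual(x0, target=1, immutable=(2,))  # feature 2 may not change
```

Note how immutability is handled by simply re-pinning the protected coordinates after every velocity update, and the box constraint by clipping; both are the kinds of hard constraints that are awkward for gradient-based CF methods but trivial for a population-based search like PSO.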
Related papers
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with much fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Finding Regions of Counterfactual Explanations via Robust Optimization [0.0]
A counterfactual explanation (CE) is a minimal perturbed data point for which the decision of the model changes.
Most of the existing methods can only provide one CE, which may not be achievable for the user.
We derive an iterative method to calculate robust CEs that remain valid even after the features are slightly perturbed.
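The validity-under-perturbation notion this entry targets can be illustrated with a simple sampling check: a CE counts as robust if perturbing its features within a small radius never flips the model's decision. This is a hedged sketch of the concept only, not the paper's iterative robust-optimization algorithm; `is_robust_ce` and the toy classifier are invented for illustration.

```python
import numpy as np

def is_robust_ce(predict, ce, target_class, eps=0.05, n_samples=500, seed=0):
    # Sample perturbations in an eps-box (L-infinity ball) around the CE and
    # require the model's decision to stay at the target class everywhere.
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=(n_samples, ce.size))
    preds = predict(ce + noise)
    return bool(np.all(preds == target_class))

# Toy threshold classifier on the first feature:
predict = lambda X: (np.atleast_2d(X)[:, 0] > 0.5).astype(int)
robust = is_robust_ce(predict, np.array([0.7, 0.1]), 1)    # margin 0.2 > eps
fragile = is_robust_ce(predict, np.array([0.52, 0.1]), 1)  # margin 0.02 < eps
```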
arXiv Detail & Related papers (2023-01-26T14:06:26Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- DualCF: Efficient Model Extraction Attack from Counterfactual Explanations [57.46134660974256]
Cloud service providers have launched Machine-Learning-as-a-Service platforms that allow users to access large-scale cloud-based models via APIs.
Such extra information inevitably causes the cloud models to be more vulnerable to extraction attacks.
We propose a novel simple yet efficient querying strategy to greatly enhance the querying efficiency to steal a classification model.
arXiv Detail & Related papers (2022-05-13T08:24:43Z)
- ReLACE: Reinforcement Learning Agent for Counterfactual Explanations of Arbitrary Predictive Models [6.939617874336667]
We introduce a model-agnostic algorithm to generate optimal counterfactual explanations.
Our method is easily applied to any black-box model, since the model itself serves as the environment that the DRL agent interacts with.
In addition, we develop an algorithm to extract explainable decision rules from the DRL agent's policy, so as to make the process of generating CFs itself transparent.
arXiv Detail & Related papers (2021-10-22T17:08:49Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Efficient Data-specific Model Search for Collaborative Filtering [56.60519991956558]
Collaborative filtering (CF) is a fundamental approach for recommender systems.
In this paper, motivated by the recent advances in automated machine learning (AutoML), we propose to design a data-specific CF model.
Key here is a new framework that unifies state-of-the-art (SOTA) CF methods and splits them into disjoint stages of input encoding, embedding function, interaction and prediction function.
arXiv Detail & Related papers (2021-06-14T14:30:32Z)
- Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning [0.5729426778193398]
We propose a deep reinforcement learning approach that transforms the optimization procedure into an end-to-end learnable process.
Our experiments on real-world data show that our method is model-agnostic, relying only on feedback from model predictions.
arXiv Detail & Related papers (2021-06-04T16:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.