CoGS: Model Agnostic Causality Constrained Counterfactual Explanations using goal-directed ASP
- URL: http://arxiv.org/abs/2410.22615v1
- Date: Wed, 30 Oct 2024 00:43:01 GMT
- Title: CoGS: Model Agnostic Causality Constrained Counterfactual Explanations using goal-directed ASP
- Authors: Sopam Dasgupta, Joaquín Arias, Elmer Salazar, Gopal Gupta
- Abstract summary: CoGS is a model-agnostic framework capable of generating counterfactual explanations for classification models.
CoGS offers interpretable and actionable explanations of the changes required to achieve the desired outcome.
- Score: 1.5749416770494706
- License:
- Abstract: Machine learning models are increasingly used in critical areas such as loan approvals and hiring, yet they often function as black boxes, obscuring their decision-making processes. Transparency is crucial, as individuals need explanations to understand decisions, particularly when the decisions result in an undesired outcome. Our work introduces CoGS (Counterfactual Generation with s(CASP)), a model-agnostic framework capable of generating counterfactual explanations for classification models. CoGS leverages the goal-directed Answer Set Programming system s(CASP) to compute realistic and causally consistent modifications to feature values, accounting for causal dependencies between them. By using rule-based machine learning (RBML) algorithms, notably FOLD-SE, CoGS extracts the underlying logic of a statistical model to generate counterfactual solutions. By tracing a step-by-step path from an undesired outcome to a desired one, CoGS offers interpretable and actionable explanations of the changes required to achieve the desired outcome. We present details of the CoGS framework along with its evaluation.
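The abstract describes CoGS as extracting rules from a statistical model with FOLD-SE and then using s(CASP) to search for causally consistent counterfactuals. As a rough illustration of that idea only (not the CoGS implementation, which encodes the rules and dependencies in goal-directed ASP), the Python sketch below brute-forces interventions over a toy loan-approval rule set while propagating a single assumed causal dependency; every feature name, rule, and dependency here is hypothetical.

```python
# Illustrative sketch only: a brute-force, causality-aware counterfactual search
# over a toy rule set. The rules, features, and causal link are hypothetical;
# CoGS itself performs this search with the goal-directed ASP system s(CASP).
from itertools import product

def approved(x):
    # Hypothetical decision rules of the kind FOLD-SE might extract.
    return x["credit_score"] >= 650 and x["debt_ratio"] <= 0.4

def apply_causal_effects(x):
    # Hypothetical causal dependency: debt_ratio is determined by debt and income,
    # so an intervention on income must update debt_ratio to stay consistent.
    x = dict(x)
    x["debt_ratio"] = round(x["debt"] / max(x["income"], 1), 2)
    return x

def counterfactuals(instance, interventions):
    """Enumerate interventions on mutable features, propagate causal effects,
    and keep only the causally consistent worlds that reach the desired outcome."""
    keys = list(interventions)
    for combo in product(*interventions.values()):
        world = dict(instance)
        world.update(zip(keys, combo))
        world = apply_causal_effects(world)
        if approved(world):
            yield world

original = {"credit_score": 600, "income": 40000, "debt": 20000, "debt_ratio": 0.50}
search_space = {"credit_score": [600, 660, 700], "income": [40000, 60000, 80000]}

for world in counterfactuals(original, search_space):
    changed = {k: v for k, v in world.items() if original.get(k) != v}
    print("counterfactual changes:", changed)
```

Unlike this sketch, CoGS also returns a step-by-step justification of how each change leads from the undesired to the desired outcome; the snippet only illustrates the search-plus-causal-consistency idea.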
Related papers
- Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments [50.310636905746975]
Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process.
Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their reason-agnostic nature.
We propose self-healing machine learning (SHML) to overcome these limitations.
arXiv Detail & Related papers (2024-10-31T20:05:51Z) - CoGS: Causality Constrained Counterfactual Explanations using goal-directed ASP [1.5749416770494706]
We present the CoGS (Counterfactual Generation with s(CASP)) framework to generate counterfactuals from rule-based machine learning models.
CoGS computes realistic and causally consistent changes to attribute values, taking causal dependencies between them into account.
It finds a path from an undesired outcome to a desired one using counterfactuals.
arXiv Detail & Related papers (2024-07-11T04:50:51Z) - CFGs: Causality Constrained Counterfactual Explanations using goal-directed ASP [1.5749416770494706]
We present the framework CFGs, CounterFactual Generation with s(CASP), which utilizes the goal-directed Answer Set Programming (ASP) system s(CASP) to automatically generate counterfactual explanations.
We show how CFGs navigates between these worlds, namely, how it goes from an initial state where we obtain an undesired outcome to the imagined goal state where we obtain the desired decision.
arXiv Detail & Related papers (2024-05-24T21:47:58Z) - Value-Distributional Model-Based Reinforcement Learning [59.758009422067]
Quantifying uncertainty about a policy's long-term performance is important for solving sequential decision-making tasks.
We study the problem from a model-based Bayesian reinforcement learning perspective.
We propose Epistemic Quantile-Regression (EQR), a model-based algorithm that learns a value distribution function.
arXiv Detail & Related papers (2023-08-12T14:59:19Z) - ReCOGS: How Incidental Details of a Logical Form Overshadow an Evaluation of Semantic Interpretation [63.33465936588327]
We propose a modified version of the compositional generalization benchmark COGS.
Our results reaffirm the importance of compositional generalization and careful benchmark task design.
arXiv Detail & Related papers (2023-03-24T00:01:24Z) - Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough to perform counterfactual reasoning.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z) - Adaptive Fine-Grained Predicates Learning for Scene Graph Generation [122.4588401267544]
General Scene Graph Generation (SGG) models tend to predict head predicates, while re-balancing strategies prefer tail categories.
We propose an Adaptive Fine-Grained Predicates Learning (FGPL-A) which aims at differentiating hard-to-distinguish predicates for SGG.
Our proposed model-agnostic strategy significantly boosts performance of benchmark models on VG-SGG and GQA-SGG datasets by up to 175% and 76% on Mean Recall@100, achieving new state-of-the-art performance.
arXiv Detail & Related papers (2022-07-11T03:37:57Z) - Towards Dynamic Consistency Checking in Goal-directed Predicate Answer Set Programming [2.3204178451683264]
We present a variation of the top-down evaluation strategy, termed Dynamic Consistency checking.
This makes it possible to determine when a literal is not compatible with the denials associated with the global constraints in the program.
We have experimentally observed speedups of up to 90x w.r.t. the standard versions of s(CASP).
arXiv Detail & Related papers (2021-10-22T20:38:48Z) - CARE: Coherent Actionable Recourse based on Sound Counterfactual Explanations [0.0]
This paper introduces CARE, a modular explanation framework that addresses the model- and user-level desiderata.
As a model-agnostic approach, CARE generates multiple, diverse explanations for any black-box model.
arXiv Detail & Related papers (2021-08-18T15:26:59Z) - Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
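The last entry above introduces generalized additive models (GAMs) into ranking tasks. As a generic, hedged sketch of the GAM idea in a ranking setting (not that paper's neural ranking model), the snippet below scores each document by summing per-feature shape functions, which is what makes each feature's contribution inspectable; the features and shape functions are invented for illustration.

```python
# Generic GAM-style ranking sketch (not the cited paper's model): the ranking
# score is a sum of one-dimensional shape functions, one per feature, so each
# feature's contribution can be read off directly. Features and shapes are
# hypothetical.
import numpy as np

def piecewise_shape(knots, values):
    """A toy piecewise-linear shape function f_j for a single feature."""
    return lambda x: float(np.interp(x, knots, values))

shape_fns = {
    "bm25":      piecewise_shape([0.0, 5.0, 10.0], [0.0, 1.2, 1.5]),
    "freshness": piecewise_shape([0.0, 0.5, 1.0],  [0.0, 0.3, 0.8]),
}

def gam_score(doc):
    # Additive structure: score(x) = sum_j f_j(x_j).
    return sum(fn(doc[name]) for name, fn in shape_fns.items())

docs = [{"bm25": 7.0, "freshness": 0.2}, {"bm25": 4.0, "freshness": 0.9}]
for doc in sorted(docs, key=gam_score, reverse=True):
    print(doc, "score:", round(gam_score(doc), 3))
```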
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.