Beyond Individualized Recourse: Interpretable and Interactive Summaries
of Actionable Recourses
- URL: http://arxiv.org/abs/2009.07165v3
- Date: Wed, 28 Oct 2020 19:22:25 GMT
- Title: Beyond Individualized Recourse: Interpretable and Interactive Summaries
of Actionable Recourses
- Authors: Kaivalya Rawal, Himabindu Lakkaraju
- Abstract summary: We propose a novel model-agnostic framework called Actionable Recourse Summaries (AReS) to construct global counterfactual explanations.
We formulate a novel objective which simultaneously optimizes for correctness of the recourses and interpretability of the explanations.
Our framework can provide decision makers with a comprehensive overview of recourses corresponding to any black box model.
- Score: 14.626432428431594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As predictive models are increasingly being deployed in high-stakes
decision-making, there has been a lot of interest in developing algorithms
which can provide recourses to affected individuals. While developing such
tools is important, it is even more critical to analyse and interpret a
predictive model, and vet it thoroughly to ensure that the recourses it offers
are meaningful and non-discriminatory before it is deployed in the real world.
To this end, we propose a novel model-agnostic framework called Actionable
Recourse Summaries (AReS) to construct global counterfactual explanations which
provide an interpretable and accurate summary of recourses for the entire
population. We formulate a novel objective which simultaneously optimizes for
correctness of the recourses and interpretability of the explanations, while
minimizing overall recourse costs across the entire population. More
specifically, our objective enables us to learn, with optimality guarantees on
recourse correctness, a small number of compact rule sets, each of which captures
recourses for well-defined subpopulations within the data. We also demonstrate
theoretically that several of the prior approaches proposed to generate
recourses for individuals are special cases of our framework. Experimental
evaluation with real-world datasets and user studies demonstrates that our
framework can provide decision makers with a comprehensive overview of
recourses corresponding to any black box model, and consequently help detect
undesirable model biases and discrimination.
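The abstract describes learning a small number of compact rule sets, each pairing a subpopulation condition with a recourse, and measuring how often the recourses actually flip the model's decision. A minimal sketch of that idea follows; it is not the authors' implementation, and the rule structure, feature names, and toy model below are all illustrative assumptions.

```python
def applies(conditions, person):
    """Check whether every feature condition in a rule holds for a person."""
    return all(person.get(feat) == val for feat, val in conditions.items())

# A global recourse summary: each rule pairs a subpopulation condition ("if")
# with a recourse ("then") describing feature changes meant to flip the outcome.
summary = [
    {"if": {"employed": False}, "then": {"employed": True}},
    {"if": {"employed": True, "has_collateral": False},
     "then": {"has_collateral": True}},
]

def recourse_for(person, summary):
    """Return the recommended feature changes from the first matching rule."""
    for rule in summary:
        if applies(rule["if"], person):
            return rule["then"]
    return None  # no rule covers this person

def correctness(summary, population, model):
    """Fraction of covered individuals whose recourse flips the model to 1."""
    covered, flipped = 0, 0
    for person in population:
        change = recourse_for(person, summary)
        if change is None:
            continue
        covered += 1
        if model({**person, **change}) == 1:
            flipped += 1
    return flipped / covered if covered else 0.0
```

The actual AReS objective additionally trades this correctness term off against interpretability (rule-set size) and total recourse cost, with optimality guarantees; this sketch only shows the rule representation and the correctness measurement.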
Related papers
- Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse [7.730963708373791]
Consumer protection rules mandate that we provide a list of "principal reasons" to consumers who receive adverse decisions.
In practice, lenders and employers identify principal reasons by returning the top-scoring features from a feature attribution method.
We show that standard attribution methods can mislead individuals by highlighting reasons without recourse.
We propose to address these issues by scoring features on the basis of responsiveness.
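The responsiveness idea above can be illustrated with a toy sketch: score a feature by the fraction of its attainable alternative values that would flip an adverse decision. This is not the paper's scoring method; the function, feature names, and toy model are hypothetical.

```python
def responsiveness(model, person, feature, values):
    """Fraction of alternative values of `feature` that flip the decision to 1."""
    alternatives = [v for v in values if v != person[feature]]
    if not alternatives:
        return 0.0  # no attainable change, hence no recourse via this feature
    flips = sum(1 for v in alternatives if model({**person, feature: v}) == 1)
    return flips / len(alternatives)
```

A feature with a high attribution score but zero responsiveness is exactly the misleading "reason without recourse" the summary warns about.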
arXiv Detail & Related papers (2024-10-29T23:37:49Z)
- Enhancement of Approximation Spaces by the Use of Primals and Neighborhood [0.0]
We introduce four new generalized rough set models that draw inspiration from "neighborhoods and primals"
We claim that the current models can preserve nearly all significant aspects associated with the rough set model.
We also demonstrate that the new strategy we define for our everyday health-related problem yields more accurate findings.
arXiv Detail & Related papers (2024-10-23T18:49:13Z)
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Fair Multivariate Adaptive Regression Splines for Ensuring Equity and Transparency [1.124958340749622]
We propose a fair predictive model based on MARS that incorporates fairness measures in the learning process.
MARS is a non-parametric regression model that performs feature selection, handles non-linear relationships, generates interpretable decision rules, and derives optimal splitting criteria on the variables.
We apply our fairMARS model to real-world data and demonstrate its effectiveness in terms of accuracy and equity.
arXiv Detail & Related papers (2024-02-23T19:02:24Z)
- When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have uses in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- BRIO: Bringing Order to Abstractive Summarization [107.97378285293507]
We propose a novel training paradigm which assumes a non-deterministic distribution over candidate summaries.
Our method achieves a new state-of-the-art result on the CNN/DailyMail (47.78 ROUGE-1) and XSum (49.07 ROUGE-1) datasets.
arXiv Detail & Related papers (2022-03-31T05:19:38Z)
- Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse [34.39887495671287]
We propose an objective function which simultaneously minimizes the gap between the achieved (resulting) and desired recourse invalidation rates.
We develop novel theoretical results to characterize the recourse invalidation rates corresponding to any given instance.
Experimental evaluation with multiple real-world datasets demonstrates the efficacy of the proposed framework.
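The invalidation rate this summary refers to can be estimated empirically: sample small perturbations around a proposed recourse point and measure how often the model rejects the perturbed point. A minimal Monte Carlo sketch follows; it is not the paper's method (which derives theoretical characterizations), and the Gaussian noise model and function name are assumptions.

```python
import random

def invalidation_rate(model, recourse_point, sigma=0.1, n=1000, seed=0):
    """Estimate the probability that small Gaussian perturbations of a
    recourse point are rejected (classified 0) by the model."""
    rng = random.Random(seed)
    invalid = 0
    for _ in range(n):
        perturbed = [x + rng.gauss(0.0, sigma) for x in recourse_point]
        if model(perturbed) == 0:
            invalid += 1
    return invalid / n
```

A recourse far inside the favorable region yields a rate near zero, while one sitting on the decision boundary is invalidated about half the time; the paper's objective trades this robustness off against recourse cost.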
arXiv Detail & Related papers (2022-03-13T21:39:24Z)
- Fairness-aware Summarization for Justified Decision-Making [16.47665757950391]
We focus on the problem of (un)fairness in the justification of the text-based neural models.
We propose a fairness-aware summarization mechanism to detect and counteract the bias in such models.
arXiv Detail & Related papers (2021-07-13T17:04:10Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.