Relevance-aware Algorithmic Recourse
- URL: http://arxiv.org/abs/2405.19072v1
- Date: Wed, 29 May 2024 13:25:49 GMT
- Title: Relevance-aware Algorithmic Recourse
- Authors: Dongwhi Kim, Nuno Moniz
- Abstract summary: Algorithmic recourse emerges as a tool for clarifying decisions made by predictive models.
Current algorithmic recourse methods treat all domain values equally, which is unrealistic in real-world settings.
We propose a novel framework, Relevance-Aware Algorithmic Recourse (RAAR), that leverages the concept of relevance in applying algorithmic recourse to regression tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning continues to gain prominence, transparency and explainability are increasingly critical. Without an understanding of these models, they can replicate and worsen human bias, adversely affecting marginalized communities. Algorithmic recourse emerges as a tool for clarifying decisions made by predictive models, providing actionable insights to alter outcomes. Recourses answer the question 'What do I have to change?' to achieve the desired result. Despite their importance, current algorithmic recourse methods treat all domain values equally, which is unrealistic in real-world settings. In this paper, we propose a novel framework, Relevance-Aware Algorithmic Recourse (RAAR), that leverages the concept of relevance in applying algorithmic recourse to regression tasks. We conducted multiple experiments on 15 datasets to examine how relevance influences recourses. Results show that incorporating relevance yields recourses comparable to well-known baselines, with greater efficiency and lower relative costs.
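As a rough sketch of the underlying idea, the snippet below searches for a recourse on a regression model by gradient descent, charging a relevance-weighted quadratic cost for each feature change. The cost form, the `relevance` weights, and the toy linear model are illustrative assumptions, not the paper's RAAR formulation.

```python
import numpy as np

def recourse(model, x0, target, relevance, lr=0.05, steps=500, tol=1e-2):
    """Find x near x0 whose prediction reaches `target`, penalizing each
    feature change by a relevance weight (higher = costlier to move)."""
    x = x0.astype(float).copy()
    eps = 1e-4
    for _ in range(steps):
        pred = model(x)
        if abs(pred - target) < tol:
            break
        grad = np.zeros_like(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += eps
            # finite-difference gradient of the squared prediction gap
            grad[i] = ((model(xp) - target) ** 2 - (pred - target) ** 2) / eps
        grad += 2.0 * relevance * (x - x0)  # relevance-weighted move cost
        x -= lr * grad
    return x

# Toy usage: the third feature is marked most costly to change.
model = lambda v: v @ np.array([1.0, 2.0, 0.5])
x_cf = recourse(model, np.zeros(3), target=1.0,
                relevance=np.array([1.0, 1.0, 10.0]))
```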
Related papers
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
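As a hedged illustration of the reduced-rank idea, the sketch below collapses many observed outcomes into a low-rank index via SVD before any policy learning; the use of plain PCA here is an assumption, not necessarily the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 12))      # 500 units x 12 observed outcomes
Yc = Y - Y.mean(axis=0)             # center before factorizing
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
k = 2                               # retained rank (a modeling choice)
index = Yc @ Vt[:k].T               # rank-k outcome summary per unit
# A policy learner would now target index[:, 0] instead of 12 raw outcomes.
```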
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
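The regret analysis itself is beyond a snippet, but for intuition, a streaming active learner must decide point by point whether to pay for a label. The margin rule below is a generic uncertainty heuristic, not the paper's regret-based criterion.

```python
import numpy as np

def should_query(probs, threshold=0.2):
    """Query a label when the top-two class probabilities are close
    (a small margin signals an uncertain prediction)."""
    top2 = np.sort(probs)[-2:]
    return (top2[1] - top2[0]) < threshold

print(should_query(np.array([0.45, 0.40, 0.15])))  # True: ambiguous point
print(should_query(np.array([0.90, 0.07, 0.03])))  # False: confident point
```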
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
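A minimal sketch of the edge-weakening idea on a linear structural model follows; the variable names, coefficients, and the subtraction trick are toy assumptions, not D-BIAS's actual simulation method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)           # protected attribute
skill = rng.normal(size=n)
beta_group = 0.8                        # biased causal edge: group -> salary
salary = 1.2 * skill + beta_group * group + rng.normal(scale=0.3, size=n)

weaken = 0.1                            # user drags the edge weight to 10%
salary_debiased = salary - beta_group * (1 - weaken) * group
```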
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
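One common way to fold a user constraint into the optimization is as a penalty term. The demographic-parity-style penalty and the weight `lam` below are generic assumptions, not FAIRLEARN's exact procedure.

```python
import torch

def fair_loss(logits, y, group, lam=1.0):
    """Classification loss plus a penalty on the mean-score gap
    between the two groups (a demographic-parity-style constraint)."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    p = torch.sigmoid(logits)
    gap = p[group == 1].mean() - p[group == 0].mean()
    return bce + lam * gap.abs()

# Toy usage: a linear classifier trained under the penalized objective.
X, y = torch.randn(200, 5), torch.randint(0, 2, (200,)).float()
g = torch.randint(0, 2, (200,))
w = torch.zeros(5, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    fair_loss(X @ w, y, g).backward()
    opt.step()
```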
arXiv Detail & Related papers (2021-11-17T03:07:18Z)
- Explaining Algorithmic Fairness Through Fairness-Aware Causal Path Decomposition [37.823248189626014]
We propose to study the problem of identifying the source of model disparities.
Unlike existing interpretation methods which typically learn feature importance, we consider the causal relationships among feature variables.
Our framework is also model agnostic and applicable to a variety of quantitative disparity measures.
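For intuition, in a fully linear SEM a disparity decomposes into per-path products of edge coefficients. The three-node example below is a toy assumption; the framework itself handles general graphs and disparity measures.

```python
# Toy linear SEM: A -> Y directly, and A -> M -> Y through a mediator M.
a_to_m, m_to_y, a_to_y = 0.5, 0.8, 0.3
direct = a_to_y               # contribution of the path A -> Y
indirect = a_to_m * m_to_y    # contribution of the path A -> M -> Y
total = direct + indirect
print(f"direct {direct:.2f} + indirect {indirect:.2f} = total {total:.2f}")
```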
arXiv Detail & Related papers (2021-08-11T17:23:47Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
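Gradient reversal itself is a standard building block; a common PyTorch implementation looks like the sketch below (how it interacts with BALD is beyond this snippet).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the
    backward pass, so an adversary predicting the protected attribute
    pushes the encoder to discard that information."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```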
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Coping with Mistreatment in Fair Algorithms [1.2183405753834557]
We study algorithmic fairness in a supervised learning setting and examine the effect of optimizing a classifier for the Equal Opportunity metric.
We propose a conceptually simple method to mitigate this bias.
We rigorously analyze the proposed method and evaluate it on several real-world datasets, demonstrating its efficacy.
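The summary does not spell out the mitigation itself, but the metric being optimized is standard: Equal Opportunity compares true-positive rates across groups, as in the sketch below.

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute TPR difference between the two groups, computed over
    the truly positive examples only (y_pred is assumed binary)."""
    tprs = []
    for g in (0, 1):
        pos = (y_true == 1) & (group == g)
        tprs.append(y_pred[pos].mean())
    return abs(tprs[0] - tprs[1])
```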
arXiv Detail & Related papers (2021-02-22T03:26:06Z)
- Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
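A hedged sketch of that last point: a small classifier trained with cross-entropy approximates the conditional distribution over influence-source states given a local history. The dimensions and data below are invented for illustration.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 4))  # 4 influence-source states
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
hist = torch.randn(256, 8)                # encoded local histories
src = torch.randint(0, 4, (256,))         # sampled source states
for _ in range(200):
    opt.zero_grad()
    torch.nn.functional.cross_entropy(net(hist), src).backward()
    opt.step()
# softmax(net(h)) now approximates P(source state | local history h).
```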
arXiv Detail & Related papers (2020-11-03T15:33:10Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is the most comprehensive of its kind.
It considers fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
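A minimal sketch of the kind of per-pipeline audit such a comparison needs; the two metrics below are a small subset of the dimensions the study actually covers.

```python
import numpy as np

def audit(y_true, y_pred, group):
    """Score one pipeline on accuracy and a demographic-parity gap
    (difference in positive-prediction rates between groups)."""
    acc = (y_true == y_pred).mean()
    dp_gap = abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())
    return {"accuracy": acc, "dp_gap": dp_gap}
```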
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- Learning Unbiased Representations via Rényi Minimization [13.61565693336172]
We propose an adversarial algorithm to learn unbiased representations via the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation coefficient.
We empirically evaluate and compare our approach and demonstrate significant improvements over existing works in the field.
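A sketch of a neural HGR-style penalty: two small networks are trained to maximize the correlation between transformed representations and the sensitive attribute, and the main model then minimizes that estimate. Network sizes and dimensions are assumptions.

```python
import torch

f = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU(),
                        torch.nn.Linear(8, 1))   # acts on representations
g = torch.nn.Sequential(torch.nn.Linear(1, 8), torch.nn.ReLU(),
                        torch.nn.Linear(8, 1))   # acts on sensitive attr

def hgr_estimate(z, s):
    """Correlation between f(z) and g(s) after standardization;
    maximized over f and g, minimized by the encoder producing z."""
    fz, gs = f(z).squeeze(-1), g(s).squeeze(-1)
    fz = (fz - fz.mean()) / (fz.std() + 1e-8)
    gs = (gs - gs.mean()) / (gs.std() + 1e-8)
    return (fz * gs).mean()
```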
arXiv Detail & Related papers (2020-09-07T15:48:24Z)
- Offline Contextual Bandits with Overparameterized Models [52.788628474552276]
We ask whether the benign behavior of overparameterized models observed in supervised learning also occurs for offline contextual bandits.
We show that the resulting discrepancy between value-based and policy-based objectives is due to the action-stability of the former.
In experiments with large neural networks, this gap between action-stable value-based objectives and unstable policy-based objectives leads to significant performance differences.
arXiv Detail & Related papers (2020-06-27T13:52:07Z)
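To make the contrast concrete, the sketch below writes down a value-based regression objective (action-stable: it scores the logged action directly) next to an importance-weighted policy objective (not action-stable: its value shifts with which action was logged); all tensors are toy stand-ins.

```python
import torch

ctx = torch.randn(64, 5)                       # contexts
act = torch.randint(0, 2, (64, 1)).float()     # logged actions
rew = torch.randn(64)                          # observed rewards
prop = torch.full((64,), 0.5)                  # logging propensities

# Value-based: regress reward on (context, action); action-stable.
q = torch.nn.Linear(6, 1)
value_loss = ((q(torch.cat([ctx, act], 1)).squeeze(-1) - rew) ** 2).mean()

# Policy-based: importance-weighted reward; not action-stable.
pi = torch.sigmoid(torch.randn(64, requires_grad=True))  # P(act=1 | ctx)
w = torch.where(act.squeeze(-1) == 1, pi, 1 - pi) / prop
policy_loss = -(w * rew).mean()
```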