Fairness Aware Counterfactuals for Subgroups
- URL: http://arxiv.org/abs/2306.14978v1
- Date: Mon, 26 Jun 2023 18:03:56 GMT
- Title: Fairness Aware Counterfactuals for Subgroups
- Authors: Loukas Kavouras, Konstantinos Tsopelas, Giorgos Giannopoulos, Dimitris
Sacharidis, Eleni Psaroudaki, Nikolaos Theologitis, Dimitrios Rontogiannis,
Dimitris Fotakis, Ioannis Emiris
- Abstract summary: We present Fairness Aware Counterfactuals for Subgroups (FACTS), a framework for auditing subgroup fairness through counterfactual explanations.
We aim to formulate different aspects of the difficulty that individuals in certain subgroups face in achieving recourse.
We introduce notions of subgroup fairness that are robust, if not totally oblivious, to the cost of achieving recourse.
- Score: 8.593488857185678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present Fairness Aware Counterfactuals for Subgroups
(FACTS), a framework for auditing subgroup fairness through counterfactual
explanations. We start by revisiting (and generalizing) existing notions and
introducing new, more refined notions of subgroup fairness. We aim to (a)
formulate different aspects of the difficulty that individuals in certain
subgroups face in achieving recourse, i.e., receiving the desired outcome,
either at the micro level, considering members of the subgroup individually,
or at the macro level, considering the subgroup as a whole, and (b) introduce
notions of subgroup fairness that are robust, if not totally oblivious, to
the cost of achieving recourse. We accompany these notions with an efficient,
model-agnostic, highly parameterizable, and explainable framework for
evaluating subgroup fairness. We demonstrate the advantages, the wide
applicability, and the efficiency of our approach through a thorough
experimental evaluation on different benchmark datasets.
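To make the micro-level idea concrete, here is a minimal sketch of a cost-oblivious effectiveness metric: the fraction of negatively classified subgroup members for whom at least one candidate action achieves recourse. This is an illustration in the spirit of the abstract, not the authors' FACTS implementation; the model interface, subgroup mask, and action set are hypothetical.

```python
import numpy as np

def recourse_effectiveness(model, X, subgroup_mask, actions):
    """Fraction of negatively classified subgroup members for whom at
    least one candidate action flips the model's decision to positive.

    model         : any classifier with predict(X) -> array of {0, 1}
                    (hypothetical interface, not part of FACTS)
    X             : (n, d) feature matrix
    subgroup_mask : boolean array of length n selecting the subgroup
    actions       : list of callables mapping one feature row to a
                    modified row (hypothetical candidate actions)
    """
    needs_recourse = subgroup_mask & (model.predict(X) == 0)
    affected = X[needs_recourse]
    if len(affected) == 0:
        return 1.0  # nobody in the subgroup needs recourse
    flipped = 0
    for x in affected:
        candidates = np.array([act(x) for act in actions])
        # Cost-oblivious: we only ask whether *some* action works,
        # not how expensive it is.
        if (model.predict(candidates) == 1).any():
            flipped += 1
    return flipped / len(affected)
```

Comparing this rate between a protected subgroup and its complement yields a micro-level audit that ignores how expensive the successful actions are, in the spirit of the cost-oblivious notions described above.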
Related papers
- FGCE: Feasible Group Counterfactual Explanations for Auditing Fairness [4.749824105387293]
This paper introduces the first graph-based framework for generating group counterfactual explanations to audit model fairness.
Our framework, named Feasible Group Counterfactual Explanations (FGCEs), captures real-world feasibility constraints and constructs subgroups with similar counterfactuals.
It also addresses key trade-offs in counterfactual generation, including the balance between the number of counterfactuals, their associated costs, and the breadth of coverage achieved.
arXiv Detail & Related papers (2024-10-29T23:10:01Z)
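As a rough illustration of the graph-based idea in the FGCE paper above (the feasibility predicate, cost function, and coverage score are hypothetical placeholders, not the paper's actual constructions):

```python
import networkx as nx

def group_counterfactual_candidates(X, y_pred, feasible, cost, max_cost):
    """Build a graph of feasible low-cost transitions between instances
    and score positively classified nodes by how many negatively
    classified nodes can reach them.

    feasible(a, b) -> bool and cost(a, b) -> float are hypothetical
    stand-ins for domain-specific feasibility constraints.
    """
    n = len(X)
    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(n):
            if i != j and feasible(X[i], X[j]) and cost(X[i], X[j]) <= max_cost:
                G.add_edge(i, j, weight=cost(X[i], X[j]))
    negatives = [i for i in range(n) if y_pred[i] == 0]
    # Coverage: a crude proxy for "one counterfactual serving many".
    coverage = {
        j: sum(nx.has_path(G, i, j) for i in negatives)
        for j in range(n) if y_pred[j] == 1
    }
    return G, coverage
```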
- How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance [64.1656365676171]
Group imbalance has been a known problem in empirical risk minimization.
This paper quantifies the impact of individual groups on the sample complexity, the convergence rate, and the average and group-level testing performance.
arXiv Detail & Related papers (2024-03-12T04:38:05Z)
- Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z)
- A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups.
arXiv Detail & Related papers (2024-01-26T14:21:45Z)
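A hedged sketch of the pooling idea behind the structured-regression paper above; the additive logistic model over attribute indicators is an illustrative choice, not the paper's exact estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

def smoothed_subgroup_accuracy(correct, attrs):
    """Per-example smoothed accuracy estimates via regression.

    correct : (n,) array of 0/1, 1 where the audited system was right
    attrs   : (n, k) array of categorical subgroup attributes
              (e.g., columns for gender, age bucket, region)
    """
    Z = OneHotEncoder(sparse_output=False).fit_transform(attrs)
    # An additive model over attribute indicators: tiny intersectional
    # cells inherit structure from the marginal attribute effects
    # instead of relying on a handful of raw observations.
    reg = LogisticRegression(max_iter=1000).fit(Z, correct)
    return reg.predict_proba(Z)[:, 1]
```

Averaging these per-example estimates within an intersectional cell is typically more stable than the raw cell mean when the cell contains only a few examples.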
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods for ensuring different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Fair Without Leveling Down: A New Intersectional Fairness Definition [1.0958014189747356]
We propose a new definition, $\alpha$-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups.
We benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline.
arXiv Detail & Related papers (2023-05-21T16:15:12Z)
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
- On Learning Fairness and Accuracy on Multiple Subgroups [9.789933013990966]
We present a principled method for learning a fair predictor for all subgroups by formulating the problem as a bilevel objective.
Specifically, the subgroup-specific predictors are learned at the lower level from a small amount of data, anchored to the fair predictor.
At the upper level, the fair predictor is updated to stay close to all subgroup-specific predictors (see the sketch after this entry).
arXiv Detail & Related papers (2022-10-19T18:59:56Z)
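The two-level structure described in the entry above might look like the following sketch; the plain gradient steps and squared-distance coupling are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def bilevel_fair_training(w_fair, subgroups, grad_loss,
                          lr=0.1, lam=1.0, steps=100):
    """Alternate lower-level subgroup fits with an upper-level pull.

    w_fair    : initial parameter vector of the shared fair predictor
    subgroups : list of (X_g, y_g) datasets, one per subgroup
    grad_loss : grad_loss(w, X, y) -> gradient of the task loss
                (all three are hypothetical interfaces)
    """
    ws = [w_fair.copy() for _ in subgroups]
    for _ in range(steps):
        # Lower level: each subgroup predictor fits its own (small)
        # dataset while being anchored to the shared fair predictor.
        for g, (Xg, yg) in enumerate(subgroups):
            ws[g] = ws[g] - lr * (grad_loss(ws[g], Xg, yg)
                                  + lam * (ws[g] - w_fair))
        # Upper level: the fair predictor moves toward the subgroup
        # predictors, staying close to all of them at once.
        w_fair = w_fair - lr * lam * (w_fair - np.mean(ws, axis=0))
    return w_fair, ws
```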
- Focus on the Common Good: Group Distributional Robustness Follows [47.62596240492509]
This paper proposes a new and simple algorithm that explicitly encourages learning of features that are shared across various groups.
While Group-DRO focuses on the groups with the worst regularized loss, focusing instead on groups that enable better performance even on other groups can lead to learning shared/common features (see the sketch after this entry).
arXiv Detail & Related papers (2021-10-06T09:47:41Z)
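The contrast drawn in the entry above can be sketched as a change in how group weights are chosen; the exponential weighting and the transfer matrix are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def group_weights(per_group_losses, transfer=None, temp=1.0):
    """Two ways to weight groups when forming the training objective.

    per_group_losses : (G,) current average loss for each group
    transfer         : optional (G, G) matrix where transfer[g, h]
                       estimates how much a gradient step on group g
                       reduces group h's loss (an illustrative proxy
                       for inter-group benefit, not the paper's exact
                       quantity)
    """
    if transfer is None:
        # Group-DRO style: exponential weights concentrate on the
        # group with the worst current loss.
        w = np.exp(temp * np.asarray(per_group_losses))
    else:
        # "Common good" style: favor groups whose updates also help
        # the other groups, encouraging shared/common features.
        w = np.exp(temp * np.asarray(transfer).sum(axis=1))
    return w / w.sum()
```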
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.