Fair Recourse for All: Ensuring Individual and Group Fairness in Counterfactual Explanations
- URL: http://arxiv.org/abs/2601.20449v1
- Date: Wed, 28 Jan 2026 10:13:12 GMT
- Title: Fair Recourse for All: Ensuring Individual and Group Fairness in Counterfactual Explanations
- Authors: Fatima Ezzeddine, Obaida Ammar, Silvia Giordano, Omran Ayoub
- Abstract summary: We propose a novel model-agnostic, reinforcement-learning-based approach to generate CFs that satisfy fairness constraints at both the individual and group levels. We evaluate our approach on three benchmark datasets, showing that it effectively ensures individual and group fairness.
- Score: 0.5249805590164901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) is becoming increasingly essential for enhancing the transparency of machine learning (ML) models. Among the various XAI techniques, counterfactual explanations (CFs) hold a pivotal role due to their ability to illustrate how changes in input features can alter an ML model's decision, thereby offering actionable recourse to users. Ensuring that individuals with comparable attributes, as well as those belonging to different protected groups (e.g., demographic groups), receive similar and actionable recourse options is essential for trustworthy and fair decision-making. In this work, we address this challenge directly by focusing on the generation of fair CFs. Specifically, we start by defining and formulating fairness at three levels: 1) individual fairness, ensuring that similar individuals receive similar CFs; 2) group fairness, ensuring equitable CFs across different protected groups; and 3) hybrid fairness, which accounts for both individual and broader group-level fairness. We formulate the problem as an optimization task and propose a novel model-agnostic, reinforcement-learning-based approach to generate CFs that satisfy fairness constraints at both the individual and group levels, two objectives that are usually treated as orthogonal. As fairness metrics, we extend existing metrics commonly used for auditing ML models, such as equal choice of recourse and equal effectiveness, to both individuals and groups. We evaluate our approach on three benchmark datasets, showing that it effectively ensures individual and group fairness while preserving the quality of the generated CFs in terms of proximity and plausibility, and we quantify the cost of fairness at each level separately. Our work opens a broader discussion on hybrid fairness and its role and implications for XAI, beyond CFs.
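To make the two fairness levels concrete, below is a minimal, hypothetical sketch in the spirit of the metrics the abstract names (similar recourse for similar individuals, and equal effectiveness across protected groups). It is not the authors' implementation: the function names `individual_cf_fairness` and `group_equal_effectiveness`, the toy black-box model, and the distance threshold are all assumptions for illustration.

```python
# Hypothetical CF fairness audit sketch (assumed names and thresholds,
# not the paper's method): individual fairness is approximated as the gap
# in recourse cost between similar individuals; group fairness as the
# per-group fraction of CFs that actually flip the model's decision.
import numpy as np

def individual_cf_fairness(X, CFs, eps=2.0):
    """Max gap in recourse cost (L2 distance to the CF) over all pairs of
    individuals closer than `eps`; 0 means similar individuals get
    similarly costly recourse."""
    costs = np.linalg.norm(CFs - X, axis=1)
    gaps = [abs(costs[i] - costs[j])
            for i in range(len(X)) for j in range(i + 1, len(X))
            if np.linalg.norm(X[i] - X[j]) < eps]
    return max(gaps) if gaps else 0.0

def group_equal_effectiveness(CFs, groups, predict):
    """Fraction of CFs that flip the model to the favourable outcome (1),
    reported per protected group; equal effectiveness asks these to match."""
    flipped = predict(CFs) == 1
    return {g: flipped[groups == g].mean() for g in np.unique(groups)}

# Toy usage with a stand-in linear model as the black box.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                  # rejected individuals
CFs = X + rng.normal(0.5, 0.2, size=X.shape)  # their counterfactuals
groups = rng.integers(0, 2, size=20)          # protected attribute (0/1)
predict = lambda Z: (Z.sum(axis=1) > 0).astype(int)

print(individual_cf_fairness(X, CFs, eps=2.0))
print(group_equal_effectiveness(CFs, groups, predict))
```

In the paper's framing, quantities like these would enter the optimization as fairness constraints on the CF generator; the exact formulation and the reinforcement-learning machinery are given in the paper itself.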
Related papers
- Equalized Generative Treatment: Matching f-divergences for Fairness in Generative Models [49.094293060279675]
We introduce a new fairness definition for generative models, termed equalized generative treatment (EGT). EGT requires comparable generation quality across all sensitive groups, with quality measured via a reference f-divergence. We show that min-max methods consistently achieve fairer outcomes compared to other approaches from the literature.
arXiv Detail & Related papers (2026-02-09T13:52:36Z) - Individual Fairness In Strategic Classification [7.903744240517792]
Strategic classification, where individuals modify their features to influence machine learning (ML) decisions, presents critical fairness challenges. We analyze threshold-based classifiers and prove that deterministic thresholds violate individual fairness. We introduce conditions under which a randomized classifier ensures individual fairness and leverage these conditions to find an optimal, individually fair randomized classifier. Experiments on real-world datasets confirm that our method effectively mitigates unfairness and improves the fairness-accuracy trade-off.
arXiv Detail & Related papers (2026-02-04T22:09:18Z) - FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning [23.38141950440522]
We propose a controllable federated group-fairness calibration framework, named FedFACT. FedFACT identifies the Bayes-optimal classifiers under both global and local fairness constraints. We show that FedFACT consistently outperforms baselines in balancing accuracy and global-local fairness.
arXiv Detail & Related papers (2025-06-04T09:39:57Z) - Mitigating Group-Level Fairness Disparities in Federated Visual Language Models [115.16940773660104]
This paper introduces FVL-FP, a novel framework that combines FL with fair prompt tuning techniques. We focus on mitigating demographic biases while preserving model performance. Our approach reduces demographic disparity by an average of 45% compared to standard FL approaches.
arXiv Detail & Related papers (2025-05-03T16:09:52Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
arXiv Detail & Related papers (2023-06-28T20:42:04Z) - DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z) - FACT: A Diagnostic for Group Fairness Trade-offs [23.358566041117083]
Group fairness is a class of fairness notions that measure how differently groups of individuals are treated according to their protected attributes.
We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness.
arXiv Detail & Related papers (2020-04-07T14:15:51Z)