Transparency and Proportionality in Post-Processing Algorithmic Bias Correction
- URL: http://arxiv.org/abs/2505.17525v1
- Date: Fri, 23 May 2025 06:33:31 GMT
- Title: Transparency and Proportionality in Post-Processing Algorithmic Bias Correction
- Authors: Juliett Suárez Ferreira, Marija Slavkovik, Jorge Casillas
- Abstract summary: We focus on post-processing techniques that modify algorithmic predictions to achieve fairness in classification tasks. We develop measures that quantify the disparity in the prediction flips applied during the post-processing stage. The proposed measures help practitioners: (1) assess the proportionality of the debiasing strategy used, (2) transparently explain the effects of the strategy on each group, and (3) based on those results, evaluate whether other approaches to bias mitigation, or to the problem itself, would be more appropriate.
- Score: 0.7783262415147651
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Algorithmic decision-making systems sometimes produce errors or predictions skewed toward a particular group, leading to unfair results. Debiasing practices, applied at different stages of the development of such systems, occasionally introduce new forms of unfairness or exacerbate existing inequalities. We focus on post-processing techniques that modify algorithmic predictions to achieve fairness in classification tasks, examining the unintended consequences of these interventions. To address this challenge, we develop a set of measures that quantify the disparity in the prediction flips applied during the post-processing stage. The proposed measures help practitioners: (1) assess the proportionality of the debiasing strategy used, (2) transparently explain the effects of the strategy on each group, and (3) based on those results, evaluate whether other approaches to bias mitigation, or to the problem itself, would be more appropriate. We introduce a methodology for applying the proposed metrics during the post-processing stage and illustrate its practical application through an example. This example demonstrates how analyzing the proportionality of the debiasing strategy complements traditional fairness metrics, providing a deeper perspective to ensure fairer outcomes across all groups.
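The abstract does not reproduce the measures themselves, but the idea lends itself to a small illustration. Below is a minimal sketch of what a per-group flip-rate accounting and a simple disparity score could look like; the function names and the max-gap aggregation are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def flip_rates_by_group(y_base, y_post, groups):
    """Per-group rates at which post-processing flips the base predictions,
    split by flip direction (illustrative names, not the paper's)."""
    stats = {}
    for g in np.unique(groups):
        m = groups == g
        base, post = y_base[m], y_post[m]
        stats[g] = {
            "flip_rate": (base != post).mean(),
            "pos_to_neg": ((base == 1) & (post == 0)).mean(),
            "neg_to_pos": ((base == 0) & (post == 1)).mean(),
        }
    return stats

def flip_disparity(stats, key="flip_rate"):
    """One possible disparity score: the largest cross-group gap in a flip
    statistic; 0 means the strategy flips proportionally across groups."""
    vals = [s[key] for s in stats.values()]
    return max(vals) - min(vals)

# Example: one flip in each group of three, so the flip disparity is 0.
y_base = np.array([1, 0, 1, 1, 0, 0])
y_post = np.array([0, 0, 1, 1, 1, 0])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(flip_disparity(flip_rates_by_group(y_base, y_post, groups)))  # 0.0
```

Splitting flips by direction matters because a strategy can have equal overall flip rates while concentrating the harmful positive-to-negative flips in one group.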
Related papers
- On the Interconnections of Calibration, Quantification, and Classifier Accuracy Prediction under Dataset Shift [58.91436551466064]
This paper investigates the interconnections among three fundamental problems under dataset shift conditions: calibration, quantification, and classifier accuracy prediction. We show that access to an oracle for any one of these tasks enables the resolution of the other two. We propose new methods for each problem based on direct adaptations of well-established methods borrowed from the other disciplines.
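As a side note on why these tasks interlock, here is a small illustration (my example, not the paper's method): if a classifier's confidences are calibrated on the shifted target data, its accuracy there can be estimated without any target labels.

```python
import numpy as np

def accuracy_from_calibration(probs):
    """For calibrated class probabilities, the expected accuracy of argmax
    prediction equals the mean confidence of the predicted class, so an
    oracle for calibration also solves accuracy prediction."""
    return np.max(probs, axis=1).mean()
```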
arXiv Detail & Related papers (2025-05-16T15:42:55Z)
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework leverages adversarial debiasing and is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
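The summary names the technique but not the setup; a generic adversarial-debiasing sketch (architecture and hyperparameters are illustrative assumptions, written in PyTorch) looks like this: a predictor fits the label while an adversary tries, and is made to fail, to recover the protected attribute from the prediction.

```python
import torch
import torch.nn as nn

def adversarial_debias(X, y, s, epochs=200, lam=1.0):
    """X: (n, d) float tensor; y, s: (n, 1) float tensors holding the label
    and the protected attribute. Alternates adversary/predictor updates."""
    pred = nn.Sequential(nn.Linear(X.shape[1], 16), nn.ReLU(), nn.Linear(16, 1))
    adv = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
    opt_p = torch.optim.Adam(pred.parameters(), lr=1e-2)
    opt_a = torch.optim.Adam(adv.parameters(), lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        # 1) The adversary learns to recover s from the (frozen) predictions.
        opt_a.zero_grad()
        bce(adv(torch.sigmoid(pred(X)).detach()), s).backward()
        opt_a.step()
        # 2) The predictor fits y while maximizing the adversary's loss.
        opt_p.zero_grad()
        out = pred(X)
        (bce(out, y) - lam * bce(adv(torch.sigmoid(out)), s)).backward()
        opt_p.step()
    return pred
```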
arXiv Detail & Related papers (2024-10-03T15:56:03Z)
- Predictive Inference in Multi-environment Scenarios [18.324321417099394]
We address the challenge of constructing valid confidence intervals and sets in problems of prediction across multiple environments.
We extend the jackknife and split-conformal methods to show how to obtain distribution-free coverage in non-traditional, potentially hierarchical data-generating scenarios.
Our contributions also include extensions for settings with non-real-valued responses, a theory of consistency for predictive inference in these general problems, and insights on the limits of conditional coverage.
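For orientation, the standard split-conformal construction that the paper generalizes can be stated in a few lines (a textbook sketch, not the paper's multi-environment extension; `predict` is a hypothetical fitted-model callable):

```python
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Textbook split-conformal interval for regression: calibrate on held-out
    residuals, then widen the point prediction by their conformal quantile."""
    resid = np.abs(y_cal - predict(X_cal))
    n = len(resid)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(resid, level, method="higher")
    mu = predict(X_test)
    return mu - q, mu + q  # marginal coverage >= 1 - alpha under exchangeability
```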
arXiv Detail & Related papers (2024-03-25T00:21:34Z)
- FRAPPE: A Group Fairness Framework for Post-Processing Everything [48.57876348370417]
We propose a framework that turns any regularized in-processing method into a post-processing approach.
We show theoretically and through experiments that our framework preserves the good fairness-error trade-offs achieved with in-processing.
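A toy instance of this idea, under assumptions of mine rather than the paper's (a frozen scorer, a per-group additive transform, and a demographic-parity-style regularizer), optimizes the in-processing objective over the post-hoc transform only:

```python
import numpy as np

def fit_posthoc_offsets(scores, groups, lam=2.0, lr=0.2, steps=500):
    """Minimize 0.5*mean((adjusted - scores)^2) + 0.5*lam*sum(gap^2), where
    gap is each group's deviation of mean adjusted score from the overall
    mean, over one additive offset per group (the base model stays frozen)."""
    gs = np.unique(groups)
    idx = np.searchsorted(gs, groups)
    n_g = np.bincount(idx).astype(float)
    base_means = np.bincount(idx, weights=scores) / n_g
    t = np.zeros(len(gs))
    for _ in range(steps):
        gap = (base_means + t) - (base_means + t).mean()
        grad = (n_g / len(scores)) * t + lam * gap  # exact gradient of the objective
        t -= lr * grad
    return dict(zip(gs, t))  # offset to add to each group's scores
```

Because the base model stays frozen and only the small transform is trained, this is what makes post-processing cheap relative to retraining.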
arXiv Detail & Related papers (2023-12-05T09:09:21Z)
- Investigating the Effects of Fairness Interventions Using Pointwise Representational Similarity [12.879768345296718]
We introduce Pointwise Normalized Kernel Alignment (PNKA), a pointwise representational similarity measure. PNKA reveals previously unknown insights by measuring how debiasing interventions affect the intermediate representations of individuals. We show that by evaluating representations using PNKA, we can reliably predict the behavior of ML models trained on these representations.
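A sketch in the spirit of PNKA (the paper's exact normalization may differ): score each individual by how similarly it relates to all other points under the two representations being compared, e.g., before and after a debiasing intervention.

```python
import numpy as np

def pointwise_similarity(A, B):
    """Per-point similarity of two representation matrices A, B of shape
    (n, d): compare each point's cosine-similarity profile to all points
    under A versus under B. Returns one score per individual (max 1.0)."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    Ka, Kb = A @ A.T, B @ B.T  # pairwise similarity profiles
    num = (Ka * Kb).sum(axis=1)
    den = np.linalg.norm(Ka, axis=1) * np.linalg.norm(Kb, axis=1)
    return num / den
```

Low scores flag the individuals whose representations the intervention moved the most.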
arXiv Detail & Related papers (2023-05-30T09:40:08Z)
- Auditing Fairness by Betting [43.515287900510934]
We provide practical, efficient, and nonparametric methods for auditing the fairness of deployed classification and regression models. Our methods are sequential and allow for the continuous monitoring of incoming data. We demonstrate the efficacy of our approach on three benchmark fairness datasets.
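A toy version of auditing by betting, assuming a stream of bounded per-round fairness gaps in [-1, 1] and a fixed one-sided bet (the paper's betting strategies are adaptive):

```python
import numpy as np

def betting_audit(stream, alpha=0.05, lam=0.1):
    """Sequential audit by betting: under the null 'mean zero', wealth is a
    nonnegative martingale, so by Ville's inequality stopping when wealth
    exceeds 1/alpha keeps the false-alarm probability below alpha."""
    wealth = 1.0
    for t, x in enumerate(stream, 1):
        wealth *= 1.0 + lam * float(np.clip(x, -1.0, 1.0))
        if wealth >= 1.0 / alpha:
            return t   # audit flags evidence of unfairness at time t
    return None        # no rejection on this stream
```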
arXiv Detail & Related papers (2023-05-27T20:14:11Z)
- Better Understanding Differences in Attribution Methods via Systematic Evaluations [57.35035463793008]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models.
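The three schemes are not spelled out in this summary; as a representative example of a faithfulness test of this general kind (a common deletion test, not necessarily one of the paper's), `score_fn` below is a hypothetical callable returning the model's score for an input:

```python
import numpy as np

def deletion_curve(score_fn, x, attribution, steps=20, baseline=0.0):
    """Zero out the most-attributed features first and track how the model's
    score falls; a faithful attribution yields a fast, steep drop."""
    order = np.argsort(attribution.ravel())[::-1]  # most important first
    xs = x.ravel().copy()
    scores = [score_fn(xs.reshape(x.shape))]
    for chunk in np.array_split(order, steps):
        xs[chunk] = baseline
        scores.append(score_fn(xs.reshape(x.shape)))
    return np.array(scores)  # lower area under this curve = more faithful
```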
arXiv Detail & Related papers (2023-03-21T14:24:58Z)
- When mitigating bias is unfair: multiplicity and arbitrariness in algorithmic group fairness [8.367620276482056]
We introduce the FRAME (FaiRness Arbitrariness and Multiplicity Evaluation) framework, which evaluates bias mitigation through five dimensions.
Applying FRAME to various bias mitigation approaches across key datasets reveals significant differences in the behavior of debiasing methods.
These findings highlight the limitations of current fairness criteria and the inherent arbitrariness in the debiasing process.
arXiv Detail & Related papers (2023-02-14T16:53:52Z)
- Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness [8.958956425857878]
We argue that machine learning risks reinforcing biases present in data, and in what is absent from data. The way we address missingness in healthcare can have detrimental impacts on algorithmic fairness. We propose a framework for empirically guiding imputation choices, and an accompanying reporting framework.
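A minimal sketch of the kind of empirical comparison such a framework could guide (function names and metric choice are mine, not the paper's): impute under several strategies, train, and report how a group fairness metric shifts.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression

def tpr_gap_by_imputation(X, y, groups,
                          strategies=("mean", "median", "most_frequent")):
    """For each imputation strategy, train on the imputed data and report the
    cross-group gap in true-positive rates (in-sample, for illustration)."""
    gaps = {}
    for s in strategies:
        Xi = SimpleImputer(strategy=s).fit_transform(X)
        pred = LogisticRegression(max_iter=1000).fit(Xi, y).predict(Xi)
        tprs = [(pred[(groups == g) & (y == 1)] == 1).mean()
                for g in np.unique(groups)]
        gaps[s] = max(tprs) - min(tprs)
    return gaps
```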
arXiv Detail & Related papers (2022-08-13T13:34:05Z)
- Towards Better Understanding Attribution Methods [77.1487219861185]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We also propose a post-processing smoothing step that significantly improves the performance of some attribution methods.
arXiv Detail & Related papers (2022-05-20T20:50:17Z)
- A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
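For contrast, the classic two-step baseline that the paper's one-step approach replaces can be sketched as follows (a standard density-ratio estimate via a train-vs-test domain classifier, then a separate weighted fit):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def covariate_shift_weights(X_train, X_test):
    """Two-step baseline: estimate w(x) = p_test(x) / p_train(x) from the
    class probabilities of a train-vs-test domain classifier."""
    X = np.vstack([X_train, X_test])
    d = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]
    p = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X_train)[:, 1]
    return (p / (1 - p)) * (len(X_train) / len(X_test))

# The weighted model is then fit in a second, separate step, e.g.:
# LogisticRegression().fit(X_train, y_train,
#                          sample_weight=covariate_shift_weights(X_train, X_test))
```

The paper's point is that estimating the weights and the predictor separately propagates estimation errors; learning both in one optimization avoids that.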
arXiv Detail & Related papers (2020-07-08T11:35:47Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data, but some of them cannot be used by the prediction model at runtime (runtime confounding).
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
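A sketch of a doubly-robust construction in this spirit (the specific models and names are illustrative, not the paper's exact procedure): pseudo-outcomes are built with outcome and propensity models trained on the full historical features X, then regressed on the runtime-available features V.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def dr_counterfactual_model(X, V, A, Y, a=1):
    """Doubly-robust sketch: X = full historical features, V = features
    available at runtime, A = treatment, Y = outcome. Returns a model that
    predicts the counterfactual outcome under treatment a from V alone."""
    mu = LinearRegression().fit(X[A == a], Y[A == a])      # outcome model
    pi = LogisticRegression(max_iter=1000).fit(X, A)        # propensity model
    p_a = pi.predict_proba(X)[:, list(pi.classes_).index(a)]
    phi = mu.predict(X) + (A == a) / p_a * (Y - mu.predict(X))  # DR pseudo-outcome
    return LinearRegression().fit(V, phi)                   # runtime model on V
```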
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.