FairPOT: Balancing AUC Performance and Fairness with Proportional Optimal Transport
- URL: http://arxiv.org/abs/2508.03940v1
- Date: Tue, 05 Aug 2025 22:13:08 GMT
- Title: FairPOT: Balancing AUC Performance and Fairness with Proportional Optimal Transport
- Authors: Pengxi Liu, Yi Shen, Matthew M. Engelhard, Benjamin A. Goldstein, Michael J. Pencina, Nicoleta J. Economou-Zavlanos, Michael M. Zavlanos
- Abstract summary: In high-stakes domains such as healthcare, fairness is often evaluated over risk scores rather than binary outcomes. We propose Fair Proportional Optimal Transport (FairPOT), a novel, model-agnostic post-processing framework. We show that FairPOT consistently outperforms existing post-processing techniques in both global and partial AUC scenarios.
- Score: 9.47500733579604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness metrics utilizing the area under the receiver operating characteristic curve (AUC) have gained increasing attention in high-stakes domains such as healthcare, finance, and criminal justice. In these domains, fairness is often evaluated over risk scores rather than binary outcomes, and a common challenge is that enforcing strict fairness can significantly degrade AUC performance. To address this challenge, we propose Fair Proportional Optimal Transport (FairPOT), a novel, model-agnostic post-processing framework that strategically aligns risk score distributions across different groups using optimal transport, but does so selectively by transforming a controllable proportion, i.e., the top-lambda quantile, of scores within the disadvantaged group. By varying lambda, our method allows for a tunable trade-off between reducing AUC disparities and maintaining overall AUC performance. Furthermore, we extend FairPOT to the partial AUC setting, enabling fairness interventions to concentrate on the highest-risk regions. Extensive experiments on synthetic, public, and clinical datasets show that FairPOT consistently outperforms existing post-processing techniques in both global and partial AUC scenarios, often achieving improved fairness with only slight AUC degradation, or even gains in utility. The computational efficiency and practical adaptability of FairPOT make it a promising solution for real-world deployment.
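As a rough illustration of the idea described in the abstract, the sketch below applies one-dimensional optimal transport (monotone quantile matching) to only the top-lambda fraction of the disadvantaged group's risk scores, pulling them toward the advantaged group's distribution. All names (`fairpot_adjust`, `lam`, the beta-distributed toy scores) are illustrative assumptions; the authors' actual FairPOT procedure and its partial-AUC extension may differ in detail.

```python
import numpy as np

def fairpot_adjust(scores_disadv, scores_adv, lam=0.5):
    """Sketch: map the top-lam fraction of the disadvantaged group's risk
    scores onto the advantaged group's score distribution via 1-D optimal
    transport (monotone quantile matching). Not the reference implementation."""
    scores_disadv = np.asarray(scores_disadv, dtype=float)
    scores_adv = np.asarray(scores_adv, dtype=float)

    adjusted = scores_disadv.copy()
    cutoff = np.quantile(scores_disadv, 1.0 - lam)   # top-lambda threshold
    top = scores_disadv >= cutoff
    if not np.any(top):
        return adjusted

    # Empirical CDF value (rank / n) of each selected score within the
    # disadvantaged group's own distribution.
    u = np.searchsorted(np.sort(scores_disadv), scores_disadv[top],
                        side="right") / scores_disadv.size
    # 1-D OT map: push each quantile level onto the advantaged group's
    # empirical quantile function.
    adjusted[top] = np.quantile(scores_adv, u)
    return adjusted

# Toy usage: pull the upper tail of group B toward group A's distribution.
rng = np.random.default_rng(0)
scores_a = rng.beta(2, 5, size=1000)   # advantaged group
scores_b = rng.beta(2, 8, size=1000)   # disadvantaged group
scores_b_fair = fairpot_adjust(scores_b, scores_a, lam=0.3)
```

Setting `lam=1.0` would align the full disadvantaged distribution (strict alignment), while smaller values touch only the upper tail, which is what gives the tunable fairness-utility trade-off described above.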
Related papers
- Optimal Transport Learning: Balancing Value Optimization and Fairness in Individualized Treatment Rules [7.231498702561808]
We propose a flexible approach based on optimal transport theory to transform any optimal ITR into a fair ITR. We demonstrate the performance of the proposed method through extensive simulation studies and an application to the Next 36 entrepreneurial program dataset.
arXiv Detail & Related papers (2025-07-31T08:56:03Z) - FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning [13.575259448363557]
We propose a controllable group-fairness calibration framework, named FedFACT. FedFACT identifies the Bayes-optimal classifiers under both global and local fairness constraints. Experiments on multiple datasets demonstrate that FedFACT consistently outperforms baselines in balancing accuracy and global-local fairness.
arXiv Detail & Related papers (2025-06-04T09:39:57Z) - pFedFair: Towards Optimal Group Fairness-Accuracy Trade-off in Heterogeneous Federated Learning [17.879602968559198]
Federated learning algorithms aim to maximize clients' accuracy by training a model on their collective data. Group fairness constraints can be incorporated into the objective function of the FL optimization problem. We show that such an approach would lead to suboptimal classification accuracy in an FL setting with heterogeneous client distributions.
arXiv Detail & Related papers (2025-03-19T06:15:31Z) - Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models. Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z) - Fairness-aware Federated Minimax Optimization with Convergence Guarantee [10.727328530242461]
Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature.
The lack of freedom in managing user data can lead to group fairness issues, where models are biased towards sensitive factors such as race or gender.
This paper proposes a novel algorithm, fair federated averaging with augmented Lagrangian method (FFALM), designed explicitly to address group fairness issues in FL.
arXiv Detail & Related papers (2023-07-10T08:45:58Z) - Mimicking Better by Matching the Approximate Action Distribution [48.95048003354255]
We introduce MAAD, a novel, sample-efficient on-policy algorithm for Imitation Learning from Observations.
We show that it requires considerably fewer interactions to achieve expert performance, outperforming current state-of-the-art on-policy methods.
arXiv Detail & Related papers (2023-06-16T12:43:47Z) - Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
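For context, the simplest AUC-based fairness quantity such constraints target is the gap between group-wise AUCs. Below is a minimal sketch of that quantity, assuming binary labels and a group attribute; the constrained formulations in the paper use their own, more refined variants (e.g., pairwise cross-group AUCs).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_fairness_gap(y_true, scores, group):
    """Absolute gap between the AUCs computed within each group.
    Illustrative fairness quantity only; assumes every group contains
    both positive and negative examples."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    group = np.asarray(group)
    group_aucs = {
        g: roc_auc_score(y_true[group == g], scores[group == g])
        for g in np.unique(group)
    }
    vals = list(group_aucs.values())
    return max(vals) - min(vals), group_aucs
```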
arXiv Detail & Related papers (2022-12-23T22:29:08Z) - Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
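In one dimension the Wasserstein barycenter of the group-wise score distributions has a simple quantile-function form, which the sketch below uses to post-process scores toward demographic parity. This is a simplified illustration, assuming empirical group proportions as barycenter weights; it is not the paper's exact optimal-classifier construction.

```python
import numpy as np

def barycenter_scores(scores, group):
    """Map each score to the 1-D Wasserstein barycenter of the group-wise
    score distributions: evaluate a weighted average of all groups'
    quantile functions at the individual's within-group rank. Thresholding
    the result gives an (approximately) demographic-parity-fair classifier."""
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    groups, counts = np.unique(group, return_counts=True)
    weights = counts / counts.sum()          # empirical group proportions
    fair = np.empty_like(scores)
    for g in groups:
        mask = group == g
        s_g = scores[mask]
        # Within-group empirical CDF value of each score.
        u = np.searchsorted(np.sort(s_g), s_g, side="right") / s_g.size
        # Barycenter quantile function = weighted mix of group quantiles.
        fair[mask] = sum(w * np.quantile(scores[group == h], u)
                         for h, w in zip(groups, weights))
    return fair
```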
arXiv Detail & Related papers (2022-11-03T00:04:04Z) - Optimizing Two-way Partial AUC with an End-to-end Framework [154.47590401735323]
Area Under the ROC Curve (AUC) is a crucial metric for machine learning.
Recent work shows that the two-way partial AUC (TPAUC) is essentially inconsistent with existing partial AUC metrics.
We present the first trial in this paper to optimize this new metric.
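Geometrically, the TPAUC is the area of the region under the ROC curve restricted to the box FPR <= beta and TPR >= alpha. The numeric sketch below assumes the common normalization by beta * (1 - alpha); the paper defines and optimizes its own end-to-end estimator.

```python
import numpy as np
from sklearn.metrics import roc_curve

def two_way_partial_auc(y_true, scores, alpha=0.8, beta=0.2):
    """Area under the empirical ROC curve restricted to FPR <= beta and
    TPR >= alpha, normalized by the box area. Numeric illustration only."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    grid = np.linspace(0.0, beta, 1001)            # dense FPR grid on [0, beta]
    tpr_on_grid = np.interp(grid, fpr, tpr)        # piecewise-linear ROC curve
    height = np.clip(tpr_on_grid - alpha, 0.0, None)   # part above TPR = alpha
    area = np.sum(0.5 * (height[1:] + height[:-1]) * np.diff(grid))
    return area / (beta * (1.0 - alpha))
```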
arXiv Detail & Related papers (2022-06-23T12:21:30Z) - Fairness for AUC via Feature Augmentation [25.819342066717002]
We study fairness in the context of classification where the performance is measured by the area under the curve (AUC) of the receiver operating characteristic.
We develop a novel approach, fairAUC, based on feature augmentation (adding features) to mitigate bias between identifiable groups.
arXiv Detail & Related papers (2021-11-24T22:32:19Z) - Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose the Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by over 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.