Preserving AUC Fairness in Learning with Noisy Protected Groups
- URL: http://arxiv.org/abs/2505.18532v1
- Date: Sat, 24 May 2025 05:50:44 GMT
- Title: Preserving AUC Fairness in Learning with Noisy Protected Groups
- Authors: Mingyang Wu, Li Lin, Wenbin Zhang, Xin Wang, Zhenhuan Yang, Shu Hu
- Abstract summary: Area Under the ROC Curve (AUC) is a key metric for classification, especially under class imbalance. We propose the first robust AUC fairness approach under noisy protected groups with fairness theoretical guarantees. Our method outperforms state-of-the-art approaches in preserving AUC fairness.
- Score: 15.761922928093732
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Area Under the ROC Curve (AUC) is a key metric for classification, especially under class imbalance, and research increasingly focuses on optimizing AUC rather than accuracy in applications such as medical image analysis and deepfake detection. As a result, fairness in AUC optimization has become crucial, since biased models can harm protected groups. While various fairness mitigation techniques exist, fairness considerations in AUC optimization remain in their early stages, with most research assuming clean protected groups. These studies often overlook the impact of noisy protected group labels, leading to fairness violations in practice. To address this, we propose the first robust AUC fairness approach under noisy protected groups with theoretical fairness guarantees, using distributionally robust optimization. Extensive experiments on tabular and image datasets show that our method outperforms state-of-the-art approaches in preserving AUC fairness. The code is available at https://github.com/Purdue-M2/AUC_Fairness_with_Noisy_Groups.
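For orientation, here is a minimal sketch (not the authors' implementation) of the quantity at stake: the inter-group AUC gap, plus a crude Monte Carlo probe of how much group-label noise can distort it. The paper replaces such brute-force probing with a distributionally robust formulation that carries theoretical guarantees; the flip fraction `eps` is this sketch's assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_gap(scores, labels, groups):
    """Absolute gap between per-group AUCs (assumes every group
    contains both classes)."""
    aucs = [roc_auc_score(labels[groups == g], scores[groups == g])
            for g in np.unique(groups)]
    return max(aucs) - min(aucs)

def worst_case_auc_gap(scores, labels, groups, eps=0.1, trials=200, seed=0):
    """Crude Monte Carlo probe: how large can the gap get if up to an
    eps-fraction of the binary group labels are wrong?"""
    rng = np.random.default_rng(seed)
    n_flip = int(eps * len(groups))
    worst = auc_gap(scores, labels, groups)
    for _ in range(trials):
        noisy = groups.copy()
        idx = rng.choice(len(noisy), size=n_flip, replace=False)
        noisy[idx] = 1 - noisy[idx]        # flip group membership
        worst = max(worst, auc_gap(scores, labels, noisy))
    return worst
```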
Related papers
- Continuous Fair SMOTE -- Fairness-Aware Stream Learning from Imbalanced Data [4.248022697109535]
We propose CFSMOTE, a fairness-aware, continuous SMOTE variant. Unlike other fairness-aware stream learners, CFSMOTE does not optimize for only one specific fairness metric. Our experiments show significant improvement on several common group fairness metrics in comparison to vanilla C-SMOTE.
arXiv Detail & Related papers (2025-05-19T13:46:47Z)
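The core mechanic is easy to picture with a batch-mode sketch (hypothetical; CFSMOTE itself is a streaming method and its balancing rule may differ): oversample per (group, class) cell rather than per class, so that minority groups are not left imbalanced.

```python
import numpy as np

def smote_cell(X, n_new, k=5, rng=None):
    """Generate n_new synthetic points, each by interpolating a random
    sample with one of its k nearest neighbours (the plain SMOTE step)."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        dist = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]       # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()
        out.append(X[i] + lam * (X[j] - X[i]))
    return np.array(out)

def fair_oversample(X, y, g):
    """Bring every (group, class) cell up to the size of the largest cell,
    restoring class balance per protected group, not just overall."""
    cells = {(gv, yv): np.where((g == gv) & (y == yv))[0]
             for gv in np.unique(g) for yv in np.unique(y)}
    target = max(len(ix) for ix in cells.values())
    Xs, ys, gs = [X], [y], [g]
    for (gv, yv), ix in cells.items():
        if 1 < len(ix) < target:
            new = smote_cell(X[ix], target - len(ix))
            Xs.append(new)
            ys.append(np.full(len(new), yv))
            gs.append(np.full(len(new), gv))
    return np.vstack(Xs), np.concatenate(ys), np.concatenate(gs)
```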
- DRAUC: An Instance-wise Distributionally Robust AUC Optimization Framework [133.26230331320963]
Area Under the ROC Curve (AUC) is a widely employed metric in long-tailed classification scenarios.
We propose an instance-wise surrogate loss of Distributionally Robust AUC (DRAUC) and build our optimization framework on top of it.
arXiv Detail & Related papers (2023-11-06T12:15:57Z)
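The flavor of an instance-wise DRO surrogate can be sketched as follows (illustrative only; DRAUC's actual surrogate and optimization scheme follow the paper). With a KL-divergence ambiguity set, the worst-case expected loss has the closed-form dual `lam * log E[exp(loss / lam)]`, which automatically up-weights hard pairs:

```python
import numpy as np

def pairwise_auc_losses(scores, labels, margin=1.0):
    """Squared-hinge surrogate on every (positive, negative) score pair."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]              # s_pos - s_neg
    return np.maximum(0.0, margin - diffs).ravel() ** 2

def kl_dro_auc_loss(scores, labels, lam=0.5):
    """Dual form of KL-constrained DRO over the pair losses."""
    ell = pairwise_auc_losses(scores, labels)
    m = ell.max() / lam                              # stabilized log-mean-exp
    return lam * (m + np.log(np.mean(np.exp(ell / lam - m))))
```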
- Weakly Supervised AUC Optimization: A Unified Partial AUC Approach [53.59993683627623]
We present WSAUC, a unified framework for weakly supervised AUC optimization problems.
We first frame the AUC optimization problems in various weakly supervised scenarios as a common formulation of minimizing the AUC risk on contaminated sets.
We then introduce a new type of partial AUC, specifically, the reversed partial AUC (rpAUC), which serves as a robust training objective for AUC in the presence of contaminated labels.
arXiv Detail & Related papers (2023-05-23T17:11:33Z)
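As background for the rpAUC, a minimal empirical partial AUC over an FPR band looks like this (a sketch; which portion of the ranking rpAUC trusts or "reverses" under contamination is defined in the paper, not here):

```python
import numpy as np

def partial_auc(scores, labels, fpr_lo=0.0, fpr_hi=0.3):
    """Empirical AUC restricted to the negatives that fall inside the
    FPR band [fpr_lo, fpr_hi], i.e. the top-scored negatives."""
    pos = scores[labels == 1]
    neg = np.sort(scores[labels == 0])[::-1]   # descending: hardest first
    n = len(neg)
    lo, hi = int(np.floor(fpr_lo * n)), int(np.ceil(fpr_hi * n))
    band = neg[lo:hi]                          # negatives inside the band
    wins = (pos[:, None] > band[None, :]).mean()
    ties = 0.5 * (pos[:, None] == band[None, :]).mean()
    return wins + ties
```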
- Minimax AUC Fairness: Efficient Algorithm with Provable Convergence [35.045187964671335]
We propose a minimax learning and bias mitigation framework that incorporates both intra-group and inter-group AUCs while maintaining utility.
Based on this framework, we design an efficient optimization algorithm and prove its convergence to the minimum group-level AUC.
arXiv Detail & Related papers (2022-08-22T17:11:45Z)
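The objects involved can be sketched directly (assuming every group contains both classes): the group-level AUCs rank positives from one group against negatives from another, and a minimax learner pushes up the smallest of them. The paper's contribution is an efficient algorithm with convergence guarantees, not this naive evaluation:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def group_level_aucs(scores, labels, groups):
    """Intra-group AUCs (ga == gb) and inter-group AUCs (ga != gb)."""
    aucs = {}
    for ga in np.unique(groups):
        for gb in np.unique(groups):
            pos = (labels == 1) & (groups == ga)
            neg = (labels == 0) & (groups == gb)
            s = np.concatenate([scores[pos], scores[neg]])
            y = np.concatenate([np.ones(pos.sum()), np.zeros(neg.sum())])
            aucs[(ga, gb)] = roc_auc_score(y, s)
    return aucs

def minimax_objective(scores, labels, groups):
    """The quantity a minimax-fair learner maximizes: the worst group-level AUC."""
    return min(group_level_aucs(scores, labels, groups).values())
```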
- Balanced Self-Paced Learning for AUC Maximization [88.53174245457268]
Existing self-paced methods are limited to pointwise AUC.
Our algorithm converges to a stationary point on the basis of closed-form solutions.
arXiv Detail & Related papers (2022-07-08T02:09:32Z)
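For context, the classic self-paced weighting step has a closed-form solution, shown below in its pointwise form (the paper extends the idea to pairwise AUC losses with a balance constraint):

```python
import numpy as np

def self_paced_weights(losses, age):
    """Closed-form minimizer of sum_i (w_i * losses_i - age * w_i) over
    w in [0,1]^n: keep exactly the currently 'easy' samples (loss < age)."""
    return (losses < age).astype(float)

# Typical use: alternate (1) fit the model on weighted samples,
# (2) recompute losses, (3) w = self_paced_weights(losses, age),
# (4) grow `age` so harder samples are admitted over time.
```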
- Optimizing Two-way Partial AUC with an End-to-end Framework [154.47590401735323]
Area Under the ROC Curve (AUC) is a crucial metric for machine learning.
Recent work shows that the Two-way Partial AUC (TPAUC) is essentially inconsistent with existing partial AUC metrics.
In this paper, we present the first attempt to optimize this new metric.
arXiv Detail & Related papers (2022-06-23T12:21:30Z)
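An empirical two-way partial AUC restricts attention to the hardest examples on both sides; the sketch below treats `alpha` and `beta` as fractions of positives and negatives, a simplification of the paper's exact estimator:

```python
import numpy as np

def tpauc(scores, labels, alpha=0.5, beta=0.5):
    """Rank the lowest-scored positives against the highest-scored negatives."""
    pos = np.sort(scores[labels == 1])                 # ascending
    neg = np.sort(scores[labels == 0])[::-1]           # descending
    hard_pos = pos[: max(1, int(alpha * len(pos)))]    # hardest positives
    hard_neg = neg[: max(1, int(beta * len(neg)))]     # hardest negatives
    wins = (hard_pos[:, None] > hard_neg[None, :]).mean()
    ties = 0.5 * (hard_pos[:, None] == hard_neg[None, :]).mean()
    return wins + ties
```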
- Fairness for AUC via Feature Augmentation [25.819342066717002]
We study fairness in the context of classification where the performance is measured by the area under the curve (AUC) of the receiver operating characteristic.
We develop a novel approach, fairAUC, based on feature augmentation (adding features) to mitigate bias between identifiable groups.
arXiv Detail & Related papers (2021-11-24T22:32:19Z)
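A toy version of the feature-augmentation recipe (the accept rule and the 0.01 tolerance below are this sketch's assumptions, not fairAUC's actual criteria): greedily add candidate features, keeping one only if it shrinks the between-group AUC gap without hurting overall AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def gap_and_auc(X, y, g):
    """Fit a simple scorer, return (between-group AUC gap, overall AUC)."""
    s = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    per_group = [roc_auc_score(y[g == v], s[g == v]) for v in np.unique(g)]
    return max(per_group) - min(per_group), roc_auc_score(y, s)

def greedy_augment(X, y, g, candidates):
    """candidates: list of 1-D arrays, each a candidate feature column."""
    gap, auc = gap_and_auc(X, y, g)
    for col in candidates:
        X_try = np.column_stack([X, col])
        gap_try, auc_try = gap_and_auc(X_try, y, g)
        if gap_try < gap and auc_try >= auc - 0.01:    # tolerance is arbitrary
            X, gap, auc = X_try, gap_try, auc_try
    return X
```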
- Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt at learning multiclass scoring functions by optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
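The standard reductions of multiclass AUC to averaged binary AUCs are already exposed by scikit-learn, shown here for orientation (the paper studies the theory and direct optimization of such metrics):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = load_iris(return_X_y=True)
proba = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)
print(roc_auc_score(y, proba, multi_class="ovo"))  # all class pairs, averaged
print(roc_auc_score(y, proba, multi_class="ovr"))  # each class vs. the rest
```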
- Fairness without Demographics through Adversarially Reweighted Learning [20.803276801890657]
We train an ML model to improve fairness when we do not even know the protected group memberships.
In particular, we hypothesize that non-protected features and task labels are valuable for identifying fairness issues.
Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets.
arXiv Detail & Related papers (2020-06-23T16:06:52Z)
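The ARL game in miniature (a toy linear version; ARL itself uses neural networks for both players): an adversary that sees only features and labels learns to up-weight examples where the learner errs, and the learner minimizes the reweighted loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def arl_train(X, y, steps=500, lr=0.1):
    n, d = X.shape
    w_learner = np.zeros(d)   # main classifier
    w_adv = np.zeros(d)       # adversary producing example weights
    for _ in range(steps):
        p = sigmoid(X @ w_learner)
        losses = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        a = sigmoid(X @ w_adv)
        S = a.sum()
        lam = 1.0 + n * a / S                  # normalized example weights
        # learner: gradient *descent* on the lambda-weighted logistic loss
        w_learner -= lr * X.T @ (lam * (p - y)) / n
        # adversary: gradient *ascent* on the same objective;
        # d(sum_i lam_i*loss_i)/d a_j = n * (loss_j - a@losses/S) / S
        g_a = n * (losses - a @ losses / S) / S
        w_adv += lr * X.T @ (g_a * a * (1 - a)) / n
    return w_learner, w_adv
```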
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
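One ingredient of this line of work, soft group assignments, is easy to sketch: if an estimate of P(true group | noisy group) is available (the 2x2 matrix below is hypothetical), group-conditional statistics can be computed in expectation over the true groups rather than by trusting the noisy labels outright.

```python
import numpy as np

def soft_group_rates(metric_per_example, noisy_group, posterior):
    """posterior[j, k] = P(true group = k | noisy group = j).
    Returns the metric (e.g., per-example positive prediction) averaged
    under each *true* group, weighting every example by its membership
    posterior instead of its noisy group label."""
    weights = posterior[noisy_group]        # (n, num_groups) soft memberships
    totals = weights.sum(axis=0)
    return (weights * metric_per_example[:, None]).sum(axis=0) / totals

# example: 10% symmetric group-label noise
posterior = np.array([[0.9, 0.1],
                      [0.1, 0.9]])
```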