Fairness for AUC via Feature Augmentation
- URL: http://arxiv.org/abs/2111.12823v1
- Date: Wed, 24 Nov 2021 22:32:19 GMT
- Title: Fairness for AUC via Feature Augmentation
- Authors: Hortense Fong and Vineet Kumar and Anay Mehrotra and Nisheeth K.
Vishnoi
- Abstract summary: We study fairness in the context of classification where the performance is measured by the area under the curve (AUC) of the receiver operating characteristic.
We develop a novel approach, fairAUC, based on feature augmentation (adding features) to mitigate bias between identifiable groups.
- Score: 25.819342066717002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study fairness in the context of classification where the performance is
measured by the area under the curve (AUC) of the receiver operating
characteristic. AUC is commonly used when both Type I (false positive) and Type
II (false negative) errors are important. However, the same classifier can have
significantly varying AUCs for different protected groups and, in real-world
applications, it is often desirable to reduce such cross-group differences. We
address the problem of selecting the additional features that most improve
AUC for the disadvantaged group. Our results establish that the
unconditional variance of features does not inform us about AUC fairness but
class-conditional variance does. Using this connection, we develop a novel
approach, fairAUC, based on feature augmentation (adding features) to mitigate
bias between identifiable groups. We evaluate fairAUC on synthetic and
real-world (COMPAS) datasets and find that it significantly improves AUC for
the disadvantaged group relative to benchmarks maximizing overall AUC and
minimizing bias between groups.
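The core idea of the abstract, measuring AUC per protected group and augmenting features to lift the disadvantaged group, can be sketched numerically. This is not the paper's fairAUC algorithm, only an illustrative greedy augmentation step on synthetic data; all names (`group_aucs`, `extra_signal`, the noise levels) are our own assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Two groups; group 1's base feature is noisier, so a classifier
# built on it alone has a lower AUC there (the disadvantaged group).
group = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)
base = y + rng.normal(0.0, np.where(group == 0, 0.5, 2.0))

extra_noise = rng.normal(0.0, 1.0, size=n)    # uninformative candidate
extra_signal = y + rng.normal(0.0, 0.8, size=n)  # informative candidate

def group_aucs(X, y, group):
    """Fit on all data, then report AUC separately for each group."""
    scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    return {g: roc_auc_score(y[group == g], scores[group == g]) for g in (0, 1)}

aucs = group_aucs(base.reshape(-1, 1), y, group)
disadvantaged = min(aucs, key=aucs.get)
auc_gap = max(aucs.values()) - min(aucs.values())

# Greedy augmentation step: add whichever candidate feature most
# improves AUC for the disadvantaged group.
candidates = {"noise": extra_noise, "signal": extra_signal}
best_name = max(
    candidates,
    key=lambda name: group_aucs(
        np.column_stack([base, candidates[name]]), y, group
    )[disadvantaged],
)
```

On this synthetic setup the noisier group comes out disadvantaged and the informative candidate feature is the one selected, narrowing the cross-group AUC gap.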
Related papers
- Fairness Hub Technical Briefs: AUC Gap [0.6827423171182154]
To measure bias, we encourage teams to consider using AUC Gap: the absolute difference between the highest and lowest test AUC for subgroups.
It is agnostic to the AI/ML algorithm used and it captures the disparity in model performance for any number of subgroups.
The teams use a wide range of AI/ML models in pursuit of a common goal of doubling math achievement in low-income middle schools.
arXiv Detail & Related papers (2023-09-20T19:53:04Z) - Balanced Classification: A Unified Framework for Long-Tailed Object
Detection [74.94216414011326]
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories.
We introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution.
BACL consistently achieves performance improvements across various datasets with different backbones and architectures.
arXiv Detail & Related papers (2023-08-04T09:11:07Z) - Weakly Supervised AUC Optimization: A Unified Partial AUC Approach [53.59993683627623]
We present WSAUC, a unified framework for weakly supervised AUC optimization problems.
We first frame the AUC optimization problems in various weakly supervised scenarios as a common formulation of minimizing the AUC risk on contaminated sets.
We then introduce a new type of partial AUC, specifically, the reversed partial AUC (rpAUC), which serves as a robust training objective for AUC in the presence of contaminated labels.
arXiv Detail & Related papers (2023-05-23T17:11:33Z) - Enhancing Personalized Ranking With Differentiable Group AUC
Optimization [10.192514219354651]
We propose the PDAOM loss, a personalized and differentiable AUC Optimization method with Maximum violation.
The proposed PDAOM loss not only improves the AUC and GAUC metrics in the offline evaluation, but also reduces the complexity of the training objective.
Online evaluation of the PDAOM loss on the 'Guess What You Like' feed recommendation application in Meituan shows a 1.40% increase in click count and a 0.65% increase in order count.
arXiv Detail & Related papers (2023-04-17T09:39:40Z) - Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z) - Minimax AUC Fairness: Efficient Algorithm with Provable Convergence [35.045187964671335]
We propose a minimax learning and bias mitigation framework that incorporates both intra-group and inter-group AUCs while maintaining utility.
Based on this framework, we design an efficient optimization algorithm and prove its convergence to the minimum group-level AUC.
arXiv Detail & Related papers (2022-08-22T17:11:45Z) - Attributing AUC-ROC to Analyze Binary Classifier Performance [13.192005156790302]
We discuss techniques to segment the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) along human-interpretable dimensions.
AUC-ROC is not an additive/linear function over the data samples; therefore, segmenting the overall AUC-ROC differs from tabulating the AUC-ROC of individual data segments.
arXiv Detail & Related papers (2022-05-24T04:42:52Z) - Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt at the problem of learning multiclass scoring functions by optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z) - Feature Selection for Imbalanced Data with Deep Sparse Autoencoders
Ensemble [0.5352699766206808]
Class imbalance is a common issue in many domain applications of learning algorithms.
We propose a filter feature-selection (FS) algorithm that ranks feature importance based on the reconstruction error of a Deep Sparse AutoEncoders Ensemble.
We empirically demonstrate the efficacy of our algorithm in several experiments on high-dimensional datasets of varying sample size.
arXiv Detail & Related papers (2021-03-22T09:17:08Z) - Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose a Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by over 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z) - Unsupervised Feature Learning by Cross-Level Instance-Group
Discrimination [68.83098015578874]
We integrate between-instance similarity into contrastive learning, not directly by instance grouping, but through cross-level discrimination (CLD).
CLD effectively brings unsupervised learning closer to natural data and real-world applications.
CLD sets a new state of the art on self-supervision, semi-supervision, and transfer learning benchmarks, beating MoCo v2 and SimCLR on every reported metric.
arXiv Detail & Related papers (2020-08-09T21:13:13Z)
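The non-additivity of AUC-ROC noted in the "Attributing AUC-ROC" entry above can be checked with a tiny numeric example: each segment can rank its own samples perfectly while the pooled ranking does not. The scores below are arbitrary illustrative values, not from any of the papers.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Two data segments, each ranked perfectly within itself.
y1, s1 = np.array([0, 1]), np.array([0.20, 0.90])  # segment 1
y2, s2 = np.array([0, 1]), np.array([0.95, 0.97])  # segment 2

auc1 = roc_auc_score(y1, s1)  # 1.0
auc2 = roc_auc_score(y2, s2)  # 1.0
auc_pooled = roc_auc_score(np.concatenate([y1, y2]),
                           np.concatenate([s1, s2]))
# Pooled positives (0.90, 0.97) vs negatives (0.20, 0.95): the pair
# (0.90, 0.95) is mis-ranked, so 3 of 4 pairs are correct -> AUC = 0.75.
```

Both segments score a perfect 1.0, yet the pooled AUC is 0.75, which is why per-segment AUC tabulation cannot substitute for attributing the overall AUC.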
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.