To the Fairness Frontier and Beyond: Identifying, Quantifying, and
Optimizing the Fairness-Accuracy Pareto Frontier
- URL: http://arxiv.org/abs/2206.00074v1
- Date: Tue, 31 May 2022 19:35:53 GMT
- Title: To the Fairness Frontier and Beyond: Identifying, Quantifying, and
Optimizing the Fairness-Accuracy Pareto Frontier
- Authors: Camille Olivia Little and Michael Weylandt and Genevera I Allen
- Abstract summary: Algorithmic fairness has emerged as an important consideration when using machine learning to make high-stakes societal decisions.
Yet, improved fairness often comes at the expense of model accuracy.
We seek to identify, quantify, and optimize the empirical Pareto frontier of the fairness-accuracy tradeoff.
- Score: 1.5293427903448022
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Algorithmic fairness has emerged as an important consideration when using
machine learning to make high-stakes societal decisions. Yet, improved fairness
often comes at the expense of model accuracy. While aspects of the
fairness-accuracy tradeoff have been studied, most work reports the fairness
and accuracy of various models separately; this makes model comparisons nearly
impossible without a model-agnostic metric that reflects the balance of the two
desiderata. We seek to identify, quantify, and optimize the empirical Pareto
frontier of the fairness-accuracy tradeoff. Specifically, we identify and
outline the empirical Pareto frontier through
Tradeoff-between-Fairness-and-Accuracy (TAF) Curves; we then develop a metric
to quantify this Pareto frontier through the weighted area under the TAF Curve
which we term the Fairness-Area-Under-the-Curve (FAUC). TAF Curves provide the
first empirical, model-agnostic characterization of the Pareto frontier, while
FAUC provides the first metric to impartially compare model families on both
fairness and accuracy. Both TAF Curves and FAUC can be employed with all group
fairness definitions and accuracy measures. Next, we ask: Is it possible to
expand the empirical Pareto frontier and thus improve the FAUC for a given
collection of fitted models? We answer affirmatively by developing a novel fair
model stacking framework, FairStacks, that solves a convex program to maximize
the accuracy of a model ensemble subject to a score-bias constraint. We show that
optimizing with FairStacks always expands the empirical Pareto frontier and
improves the FAUC; we additionally study other theoretical properties of our
proposed approach. Finally, we empirically validate TAF, FAUC, and FairStacks
through studies on several real benchmark data sets, showing that FairStacks
leads to major improvements in FAUC that outperform existing algorithmic
fairness approaches.
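The core construction in the abstract (an empirical Pareto frontier over per-model fairness/accuracy pairs, summarized by an area under the resulting curve) can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the use of an unweighted trapezoidal area (the paper's FAUC is a weighted area), and the convention that a smaller "fairness gap" is better are all assumptions made here for illustration.

```python
def empirical_pareto_frontier(fairness_gap, accuracy):
    """Return the (gap, accuracy) pairs not dominated by any other model.

    Convention assumed here: lower fairness gap is better, higher
    accuracy is better. A point is kept only if no other point is at
    least as fair AND more accurate.
    """
    pts = sorted(zip(fairness_gap, accuracy))  # ascending fairness gap
    frontier, best_acc = [], float("-inf")
    # Sweep from the fairest model outward; a point survives only if it
    # beats the best accuracy achieved by any fairer model.
    for gap, acc in pts:
        if acc > best_acc:
            frontier.append((gap, acc))
            best_acc = acc
    return frontier


def frontier_area(frontier):
    """Plain trapezoidal area under the frontier, as a simple
    (unweighted) stand-in for the paper's weighted FAUC."""
    return float(sum((g2 - g1) * (a1 + a2) / 2
                     for (g1, a1), (g2, a2) in zip(frontier, frontier[1:])))
```

Given four fitted models, `empirical_pareto_frontier([0.1, 0.2, 0.3, 0.05], [0.7, 0.8, 0.75, 0.6])` keeps the three non-dominated models and drops the one whose gap of 0.3 buys no extra accuracy; `frontier_area` then reduces the surviving curve to a single model-agnostic number that a frontier-expanding method (such as the paper's FairStacks) should increase.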
Related papers
- Fairness-Aware Meta-Learning via Nash Bargaining [63.44846095241147]
We introduce a two-stage meta-learning framework to address issues of group-level fairness in machine learning.
The first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model.
We show empirical effects across various fairness objectives in six key fairness datasets and two image classification tasks.
arXiv Detail & Related papers (2024-06-11T07:34:15Z) - A Theoretical Approach to Characterize the Accuracy-Fairness Trade-off
Pareto Frontier [42.18013955576355]
The accuracy-fairness trade-off has been frequently observed in the literature of fair machine learning.
This work seeks to develop a theoretical framework by characterizing the shape of the accuracy-fairness trade-off.
The proposed research enables an in-depth understanding of the accuracy-fairness trade-off, pushing current fair machine-learning research to a new frontier.
arXiv Detail & Related papers (2023-10-19T14:35:26Z) - Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
arXiv Detail & Related papers (2023-06-28T20:42:04Z) - Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation
Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR)
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness
Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z) - Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for
Hate Speech Detection [8.841221697099687]
We introduce a differentiable measure that enables direct optimization of group fairness in model training.
We evaluate our methods on the specific task of hate speech detection.
Empirical results across convolutional, sequential, and transformer-based neural architectures show superior empirical accuracy vs. fairness trade-offs over prior work.
arXiv Detail & Related papers (2022-04-15T22:11:25Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with
Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over the reweighted data set where the sample weights are computed.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z) - Pareto Efficient Fairness in Supervised Learning: From Extraction to
Tracing [26.704236797908177]
Algorithmic decision-making systems are becoming more pervasive.
Due to the inherent trade-off between fairness measures and accuracy, it is desirable to balance overall loss against other fairness criteria.
We propose a definition-agnostic framework, meaning that any well-defined notion of fairness can be reduced to the Pareto Efficient Fairness (PEF) notion.
arXiv Detail & Related papers (2021-04-04T15:49:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.