OmniFair: A Declarative System for Model-Agnostic Group Fairness in
Machine Learning
- URL: http://arxiv.org/abs/2103.09055v1
- Date: Sat, 13 Mar 2021 02:44:10 GMT
- Title: OmniFair: A Declarative System for Model-Agnostic Group Fairness in
Machine Learning
- Authors: Hantian Zhang, Xu Chu, Abolfazl Asudeh, Shamkant B. Navathe
- Abstract summary: We propose OmniFair, a declarative system for supporting group fairness in machine learning (ML).
OmniFair features a declarative interface for users to specify desired group fairness constraints.
We show that OmniFair is more versatile than existing algorithmic fairness approaches in terms of both supported fairness constraints and downstream ML models.
- Score: 11.762484210143773
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) is increasingly being used to make decisions in our
society. ML models, however, can be unfair to certain demographic groups (e.g.,
African Americans or females) according to various fairness metrics. Existing
techniques for producing fair ML models are either limited in the types of
fairness constraints they can handle (e.g., preprocessing) or require
nontrivial modifications to downstream ML training algorithms (e.g.,
in-processing).
We propose OmniFair, a declarative system for supporting group fairness in ML.
OmniFair features a declarative interface for users to specify desired group
fairness constraints and supports all commonly used group fairness notions,
including statistical parity, equalized odds, and predictive parity. OmniFair
is also model-agnostic in the sense that it does not require modifications to a
chosen ML algorithm. OmniFair also supports enforcing multiple user-declared
fairness constraints simultaneously, which most previous techniques cannot. The
algorithms in OmniFair maximize model accuracy while meeting the specified
fairness constraints, and their efficiency is optimized by exploiting a
theoretically provable monotonicity property of the accuracy-fairness
trade-off that is unique to our system.
We conduct experiments on commonly used datasets that exhibit bias against
minority groups in the fairness literature. We show that OmniFair is more
versatile than existing algorithmic fairness approaches in terms of both
supported fairness constraints and downstream ML models. OmniFair reduces the
accuracy loss by up to $94.8\%$ compared with the second-best method. OmniFair
also achieves running times comparable to preprocessing methods, and is up to
$270\times$ faster than in-processing methods.
Related papers
- You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at Inference Time [131.96508834627832]
Deep neural networks are prone to various bias issues, jeopardizing their applications for high-stake decision-making.
We propose You Only Debias Once (YODO) to achieve in-situ flexible accuracy-fairness trade-offs at inference time.
YODO achieves flexible trade-offs between model accuracy and fairness, at ultra-low overheads.
arXiv Detail & Related papers (2025-03-10T08:50:55Z)
- Fairness And Performance In Harmony: Data Debiasing Is All You Need [5.969005147375361]
This study investigates fairness using a real-world university admission dataset with 870 profiles.
For individual fairness, we assess decision consistency among experts with varied backgrounds and ML models.
Results show ML models outperform humans in fairness by 14.08% to 18.79%.
arXiv Detail & Related papers (2024-11-26T12:31:10Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness Reprogramming [42.65700878967251]
We propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique.
Specifically, FairReprogram considers the case where models can not be changed and appends to the input a set of perturbations, called the fairness trigger.
We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models.
arXiv Detail & Related papers (2022-09-21T09:37:00Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z)
- Promoting Fairness through Hyperparameter Optimization [4.479834103607383]
This work explores, in the context of a real-world fraud detection application, the unfairness that emerges from traditional ML model development.
We propose and evaluate fairness-aware variants of three popular hyperparameter optimization (HO) algorithms: Fair Random Search, Fair TPE, and Fairband.
We validate our approach on a real-world bank account opening fraud use case, as well as on three datasets from the fairness literature.
arXiv Detail & Related papers (2021-03-23T17:36:22Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is the most comprehensive of its kind.
It considers fairness, predictive performance, calibration quality, and speed across 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
arXiv Detail & Related papers (2020-06-25T04:31:57Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
- Fair Bayesian Optimization [25.80374249896801]
We introduce a general constrained Bayesian optimization framework to optimize the performance of any machine learning (ML) model.
We apply BO with fairness constraints to a range of popular models, including random forests, boosting, and neural networks.
We show that our approach is competitive with specialized techniques that enforce model-specific fairness constraints.
arXiv Detail & Related papers (2020-06-09T08:31:08Z)