FairBatch: Batch Selection for Model Fairness
- URL: http://arxiv.org/abs/2012.01696v1
- Date: Thu, 3 Dec 2020 04:36:04 GMT
- Title: FairBatch: Batch Selection for Model Fairness
- Authors: Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
- Abstract summary: Existing techniques for improving model fairness require broad changes in either data preprocessing or model training.
We address this problem via the lens of bilevel optimization.
Our batch selection algorithm, which we call FairBatch, implements this optimization and supports prominent fairness measures.
- Score: 28.94276265328868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training a fair machine learning model is essential to prevent demographic
disparity. Existing techniques for improving model fairness require broad
changes in either data preprocessing or model training, rendering them
difficult to adopt for potentially already complex machine learning systems. We
address this problem via the lens of bilevel optimization. While keeping the
standard training algorithm as an inner optimizer, we incorporate an outer
optimizer so as to equip the inner problem with an additional functionality:
Adaptively selecting minibatch sizes for the purpose of improving model
fairness. Our batch selection algorithm, which we call FairBatch, implements
this optimization and supports prominent fairness measures: equal opportunity,
equalized odds, and demographic parity. FairBatch comes with a significant
implementation benefit -- it does not require any modification to data
preprocessing or model training. For instance, a single-line change of PyTorch
code replacing the batch selection part of model training suffices to employ
FairBatch. Our experiments, conducted both on synthetic and benchmark real data,
demonstrate that FairBatch provides these functionalities while achieving
performance comparable to (or even better than) the state of the art.
Furthermore, FairBatch can readily improve fairness of any pre-trained model
simply via fine-tuning. It is also compatible with existing batch selection
techniques intended for different purposes, such as faster convergence, thus
gracefully achieving multiple purposes.
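To make the bilevel view and the single-line-change claim concrete, the following is a rough sketch. Here λ denotes the per-group minibatch sampling ratios adjusted by the outer optimizer and θ the model parameters trained by the standard inner optimizer; L_fair stands for a fairness loss (e.g., an equal-opportunity gap) and L_train for the usual training loss. The notation is an illustrative assumption, not the paper's formal objective:

```latex
\[
\min_{\lambda}\; L_{\mathrm{fair}}\bigl(\theta^{*}(\lambda)\bigr)
\quad \text{s.t.} \quad
\theta^{*}(\lambda) \in \arg\min_{\theta}\;
L_{\mathrm{train}}\bigl(\theta;\ \text{minibatches drawn with ratios } \lambda\bigr)
\]
```

The PyTorch snippet below is likewise a minimal, hypothetical sketch of how such a sampler could be swapped into an existing training loop. The `FairBatchSampler` class, its arguments, and the ratio-update rule are assumptions made for illustration and do not reproduce the authors' released implementation.

```python
import torch
from torch.utils.data import Sampler, DataLoader, TensorDataset

class FairBatchSampler(Sampler):
    """Hypothetical sketch: yields index batches whose per-group composition
    follows adaptive sampling ratios (the outer variable)."""

    def __init__(self, groups, batch_size, alpha=0.005, n_batches=100):
        self.groups = groups              # 1-D tensor of sensitive-group ids
        self.batch_size = batch_size
        self.alpha = alpha                # outer step size for the ratio update
        self.n_batches = n_batches
        self.pools = {int(g): (groups == g).nonzero(as_tuple=True)[0]
                      for g in groups.unique()}
        # start from the empirical group proportions
        self.ratios = {g: len(pool) / len(groups) for g, pool in self.pools.items()}

    def update_ratios(self, group_losses):
        """Outer-optimizer step (sketch): shift sampling mass toward the group
        with the largest loss, then renormalize."""
        worst = max(group_losses, key=group_losses.get)
        for g in self.ratios:
            step = self.alpha if g == worst else -self.alpha / (len(self.ratios) - 1)
            self.ratios[g] = max(self.ratios[g] + step, 0.0)
        total = sum(self.ratios.values())
        self.ratios = {g: r / total for g, r in self.ratios.items()}

    def __iter__(self):
        for _ in range(self.n_batches):
            batch = []
            for g, pool in self.pools.items():
                k = max(1, round(self.ratios[g] * self.batch_size))
                batch.extend(pool[torch.randint(len(pool), (k,))].tolist())
            yield batch

    def __len__(self):
        return self.n_batches

# Usage sketch: relative to a standard pipeline, only the DataLoader line
# changes (roughly the single-line change the abstract describes).
x, y = torch.randn(1000, 5), torch.randint(2, (1000,)).float()
z = torch.randint(2, (1000,))                 # sensitive attribute (assumed binary)
sampler = FairBatchSampler(groups=z, batch_size=64)
loader = DataLoader(TensorDataset(x, y), batch_sampler=sampler)
# ... train one epoch over `loader`, compute per-group losses, then call
# sampler.update_ratios({0: loss_group0, 1: loss_group1}) before the next epoch.
```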
Related papers
- Data Mixing Laws: Optimizing Data Mixtures by Predicting Language Modeling Performance [55.872926690722714]
We study the predictability of model performance with respect to the mixture proportions in functional forms.
We propose nested use of the scaling laws of training steps, model sizes, and our data mixing law.
Our method effectively optimizes the training mixture of a 1B model trained for 100B tokens in RedPajama.
arXiv Detail & Related papers (2024-03-25T17:14:00Z) - FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in
Medical Image Analysis [15.166588667072888]
Training models with robust group fairness properties is crucial in ethically sensitive application areas such as medical diagnosis.
High-capacity deep learning models can fit all training data nearly perfectly, and thus also exhibit perfect fairness during training.
We propose FairTune, a framework to optimise the choice of PEFT parameters with respect to fairness.
arXiv Detail & Related papers (2023-10-08T07:41:15Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with promising performance and better generalizability is therefore needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - FairAdaBN: Mitigating unfairness with adaptive batch normalization and
its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with
Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed using influence functions together with the sensitive attributes of a validation set.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z) - BiFair: Training Fair Models with Bilevel Optimization [8.2509884277533]
We develop a new training algorithm, named BiFair, which jointly minimizes a utility loss and a fairness loss of interest.
Our algorithm consistently performs better, i.e., it reaches better values of a given fairness metric at the same or higher accuracy.
arXiv Detail & Related papers (2021-06-03T22:36:17Z) - Augmented Fairness: An Interpretable Model Augmenting Decision-Makers'
Fairness [10.53972370889201]
We propose a model-agnostic approach for mitigating the prediction bias of a black-box decision-maker.
Our method detects where in the feature space the black-box decision-maker is biased and replaces it there with a few short decision rules, acting as a "fair surrogate".
arXiv Detail & Related papers (2020-11-17T03:25:44Z) - SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
arXiv Detail & Related papers (2020-06-25T04:31:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.