Re-weighting Based Group Fairness Regularization via Classwise Robust
Optimization
- URL: http://arxiv.org/abs/2303.00442v1
- Date: Wed, 1 Mar 2023 12:00:37 GMT
- Title: Re-weighting Based Group Fairness Regularization via Classwise Robust
Optimization
- Authors: Sangwon Jung, Taeeon Park, Sanghyuk Chun, Taesup Moon
- Abstract summary: We propose a principled method, dubbed FairDRO, which unifies the two learning schemes by incorporating a well-justified group fairness metric into the training objective.
We develop an iterative optimization algorithm that minimizes the resulting objective by automatically producing the correct re-weights for each group.
Our experiments show that FairDRO is scalable and easily adaptable to diverse applications.
- Score: 30.089819400033985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many existing group fairness-aware training methods aim to achieve group
fairness by either re-weighting underrepresented groups based on certain rules
or using weakly approximated surrogates for the fairness metrics in the
objective as regularization terms. Although each learning scheme has its own
strength, in applicability or performance respectively, it is difficult for
any method in either category to be considered a gold standard, since their
successful performance is typically limited to specific cases. To that end,
we propose a principled method, dubbed FairDRO, which unifies the two
learning schemes by incorporating a well-justified group fairness metric
into the training objective using a classwise distributionally
robust optimization (DRO) framework. We then develop an iterative optimization
algorithm that minimizes the resulting objective by automatically producing the
correct re-weights for each group. Our experiments show that FairDRO is
scalable and easily adaptable to diverse applications, and consistently
achieves state-of-the-art performance on several benchmark datasets in
terms of the accuracy-fairness trade-off, compared to recent strong baselines.
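As a concrete illustration of the optimization described above, here is a minimal Python sketch of classwise group re-weighting, assuming an exponentiated-gradient (mirror-ascent) update for the per-class group weights; the function names, the step size eta, and the update rule are illustrative assumptions rather than the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def update_group_weights(q, group_losses, eta=0.1):
    """Mirror-ascent step on the per-class group weights.

    q:            (num_classes, num_groups), each row a distribution
    group_losses: (num_classes, num_groups) average loss per (class, group)
    """
    q = q * torch.exp(eta * group_losses)  # upweight high-loss groups
    return q / q.sum(dim=1, keepdim=True)  # renormalize within each class

def reweighted_loss(logits, labels, groups, q):
    """Cross-entropy with each sample weighted by q[its class, its group]."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (q[labels, groups] * per_sample).mean()
```

In a full training loop, one would alternate this weight update with model updates on the re-weighted loss, which is how re-weights for each group can be produced automatically during training.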
Related papers
- Group Robust Preference Optimization in Reward-free RLHF [23.622835830345725]
We propose a novel Group Robust Preference Optimization (GRPO) method to align large language models to individual groups' preferences robustly.
To achieve this, GRPO adaptively and sequentially weights the importance of different groups, prioritizing groups with worse cumulative loss.
We significantly improved performance for the worst-performing groups, reduced loss imbalances across groups, and improved probability accuracies.
arXiv Detail & Related papers (2024-05-30T17:50:04Z)
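The adaptive, sequential weighting described for GRPO can be sketched as follows; maintaining weights from cumulative rather than instantaneous losses is the point being illustrated, while the class name, the step size, and the exponential update are assumptions.

```python
import numpy as np

class GroupWeighter:
    """Tracks cumulative per-group loss and emits weights for the next step."""

    def __init__(self, num_groups, eta=0.01):
        self.cum_loss = np.zeros(num_groups)
        self.eta = eta

    def step(self, group_losses):
        self.cum_loss += group_losses         # accumulate, don't reset
        w = np.exp(self.eta * self.cum_loss)  # prioritize persistently bad groups
        return w / w.sum()                    # normalized group weights
```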
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
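The ordered weighted average (OWA) at the heart of this approach is simple to state: sort the per-item values and take a weighted sum by rank, with decreasing weights emphasizing the worst-off items. A hedged sketch follows; note the paper backpropagates through constrained optimizations of OWA objectives, which is more involved than sorting alone.

```python
import torch

def owa(values, weights):
    """Ordered weighted average: weights[0] applies to the smallest value."""
    sorted_vals, _ = torch.sort(values)  # ascending; gradients flow to values
    return (weights * sorted_vals).sum()

utilities = torch.tensor([0.9, 0.2, 0.5], requires_grad=True)
rank_weights = torch.tensor([0.6, 0.3, 0.1])  # decreasing: favor the worst-off
owa(utilities, rank_weights).backward()       # end-to-end trainable
```

- Bias Amplification Enhances Minority Group Performance [10.380812738348899]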
We propose BAM, a novel two-stage training algorithm.
In the first stage, the model is trained using a bias amplification scheme that introduces a learnable auxiliary variable for each training sample.
In the second stage, we upweight the samples that the bias-amplified model misclassifies, and then continue training the same model on the reweighted dataset.
arXiv Detail & Related papers (2023-09-13T04:40:08Z)
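Both BAM stages can be sketched compactly; the per-sample auxiliary logit offset and the constant upweighting factor are assumptions about details the summary leaves open.

```python
import torch
import torch.nn.functional as F

def stage1_loss(model, aux, x, y, idx):
    """Bias amplification: add a learnable per-sample logit offset aux[idx],
    so easy, bias-aligned samples can be fit without informative features."""
    logits = model(x) + aux[idx]  # aux: (num_train, num_classes), learnable
    return F.cross_entropy(logits, y)

def stage2_weights(model, x, y, lam=5.0):
    """Upweight samples the bias-amplified model misclassifies."""
    with torch.no_grad():
        wrong = model(x).argmax(dim=1) != y
    return 1.0 + (lam - 1.0) * wrong.float()  # lam for errors, 1 otherwise
```

- Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization [61.39201891894024]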
Group distributionally robust optimization (group DRO) can minimize the worst-case loss over pre-defined groups.
We reformulate the group DRO framework by proposing Q-Diversity.
Characterized by an interactive training mode, Q-Diversity relaxes the group identification from annotation into direct parameterization.
arXiv Detail & Related papers (2023-05-20T07:02:27Z)
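For reference, the vanilla group DRO objective that Q-Diversity reformulates can be sketched as a (smoothed) worst-group loss; the temperature-based smoothing is an implementation convenience, not part of Q-Diversity, and the parameterized group assigner it introduces is not shown.

```python
import torch
import torch.nn.functional as F

def group_dro_loss(per_sample_loss, groups, num_groups, temp=0.01):
    """Smoothed worst-group loss; temp -> 0 recovers the hard max."""
    group_losses = []
    for g in range(num_groups):
        mask = groups == g
        group_losses.append(per_sample_loss[mask].mean() if mask.any()
                            else per_sample_loss.new_zeros(()))
    group_losses = torch.stack(group_losses)
    # softmax over detached losses approximates selecting the worst group
    w = F.softmax(group_losses.detach() / temp, dim=0)
    return (w * group_losses).sum()
```

- Unified Group Fairness on Federated Learning [22.143427873780404]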
Federated learning (FL) has emerged as an important machine learning paradigm where a global model is trained based on private data from distributed clients.
Recent research focuses on achieving fairness among clients, but ignores fairness towards different groups formed by sensitive attributes (e.g., gender and/or race).
We propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates the distribution shift across groups with theoretical analysis of convergence rate.
arXiv Detail & Related papers (2021-11-09T08:21:38Z)
- Focus on the Common Good: Group Distributional Robustness Follows [47.62596240492509]
This paper proposes a new and simple algorithm that explicitly encourages learning of features that are shared across various groups.
While Group-DRO focuses on the groups with the worst regularized loss, focusing instead on groups that enable better performance even on other groups could lead to learning of shared/common features.
arXiv Detail & Related papers (2021-10-06T09:47:41Z)
- Just Train Twice: Improving Group Robustness without Training Group Information [101.84574184298006]
Standard training via empirical risk minimization can produce models that achieve high accuracy on average but low accuracy on certain groups.
Prior approaches that achieve high worst-group accuracy, like group distributionally robust optimization (group DRO), require expensive group annotations for each training point.
We propose a simple two-stage approach, JTT, that first trains a standard ERM model for several epochs, and then trains a second model that upweights the training examples that the first model misclassified.
arXiv Detail & Related papers (2021-07-19T17:52:32Z)
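JTT's two stages map almost directly to code. A minimal sketch, assuming a data loader that also yields sample indices; the epoch counts and the upweighting factor are the method's main hyperparameters, and the values here are placeholders.

```python
import torch

def jtt_error_set(model, loader):
    """Stage 1 output: indices of training points the ERM model gets wrong."""
    model.eval()
    wrong = []
    with torch.no_grad():
        for x, y, idx in loader:  # loader assumed to yield sample indices
            wrong.append(idx[model(x).argmax(dim=1) != y])
    return torch.cat(wrong)

def jtt_sample_weights(num_train, error_idx, lam=20.0):
    """Stage 2 input: upweight the error set by a constant factor."""
    w = torch.ones(num_train)
    w[error_idx] = lam
    return w
```

- Individually Fair Ranking [23.95661284311917]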
We develop an algorithm to train individually fair learning-to-rank models.
The proposed approach ensures items from minority groups appear alongside similar items from majority groups.
arXiv Detail & Related papers (2021-03-19T21:17:11Z)
- Fairness with Overlapping Groups [15.154984899546333]
A standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously.
We reconsider this standard fair classification problem using a probabilistic population analysis.
Our approach unifies a variety of existing group-fair classification methods and enables extensions to a wide range of non-decomposable multiclass performance metrics and fairness measures.
arXiv Detail & Related papers (2020-06-24T05:01:10Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
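One simple instance of such post-processing, sketched under strong assumptions: shift each group's scores by a per-group offset so group-wise mean scores (and hence ranking exposure) roughly equalize. The paper's framework is more general; this only illustrates the model-agnostic, post-hoc flavor.

```python
import numpy as np

def group_score_offsets(scores, groups):
    """Offsets that move each group's mean score to the global mean."""
    target = scores.mean()
    return {g: target - scores[groups == g].mean() for g in np.unique(groups)}

def adjust(scores, groups, offsets):
    """Apply the per-group offsets to the raw model scores."""
    return scores + np.array([offsets[g] for g in groups])
```

- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]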
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)