M$^2$FGB: A Min-Max Gradient Boosting Framework for Subgroup Fairness
- URL: http://arxiv.org/abs/2504.12458v1
- Date: Wed, 16 Apr 2025 19:47:53 GMT
- Title: M$^2$FGB: A Min-Max Gradient Boosting Framework for Subgroup Fairness
- Authors: Jansen S. B. Pereira, Giovani Valdrighi, Marcos Medeiros Raimundo,
- Abstract summary: We consider applying subgroup justice concepts to gradient-boosting machines designed for supervised learning problems. We study relevant theoretical properties of the solution of the min-max optimization problem. The proposed min-max primal-dual gradient boosting algorithm was theoretically shown to converge under mild conditions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, fairness in machine learning has emerged as a critical concern to ensure that developed and deployed predictive models do not produce disadvantageous predictions for marginalized groups. It is essential to mitigate discrimination against individuals based on protected attributes such as gender and race. In this work, we consider applying subgroup justice concepts to gradient-boosting machines designed for supervised learning problems. Our approach extends gradient-boosting methodology to a broader range of objective functions that combine conventional classification and regression losses with a min-max fairness term. We study relevant theoretical properties of the solution of the min-max optimization problem. The optimization process solves a primal-dual problem at each boosting round. This generic framework can be adapted to diverse fairness concepts. The proposed min-max primal-dual gradient boosting algorithm is theoretically shown to converge under mild conditions and empirically shown to be a powerful and flexible approach to address binary and subgroup fairness.
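The primal-dual scheme described in the abstract can be illustrated with a minimal, hypothetical sketch (the function names, step sizes, and the idealized weak learner are assumptions, not the paper's implementation): each round takes an exponentiated-gradient dual step that shifts weight toward the worst-off group, then a functional-gradient primal step on the weighted loss.

```python
import numpy as np

def group_losses(F, y, groups, k):
    """Mean logistic loss within each protected group."""
    p = 1.0 / (1.0 + np.exp(-F))
    eps = 1e-12
    ce = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return np.array([ce[groups == g].mean() for g in range(k)])

def primal_dual_round(F, y, groups, lam, eta_dual=0.5, lr=0.5):
    """One illustrative min-max boosting round (hypothetical sketch).

    Dual step: exponentiated gradient shifts the weights lam toward the
    worst-off groups. Primal step: functional-gradient update on the
    lam-weighted loss; an idealized weak learner is assumed here, where
    a real implementation would fit a regression tree to the residuals.
    """
    k = lam.size
    lam = lam * np.exp(eta_dual * group_losses(F, y, groups, k))
    lam = lam / lam.sum()                     # project back to the simplex
    p = 1.0 / (1.0 + np.exp(-F))
    counts = np.bincount(groups, minlength=k)
    w = lam[groups] / counts[groups]          # per-sample weight of sum_g lam_g * L_g
    F = F - lr * len(y) * w * (p - y)         # idealized boosting update
    return F, lam
```

Under this sketch, repeating the round drives down the worst group's loss while keeping the dual weights on the probability simplex.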
Related papers
- Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation [63.66719748453878]
Group max-min fairness (MMF) is commonly used in fairness-aware recommender systems (RS) as an optimization objective. We present an efficient and effective algorithm named FairDual, which utilizes a dual optimization technique to minimize the Jensen gap. Our theoretical analysis demonstrates that FairDual can achieve a sub-linear convergence rate to the globally optimal solution.
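The Jensen gap that FairDual controls arises because the nonsmooth worst-group (max-min) objective is replaced by a smooth surrogate. A generic way to see the gap (this is not FairDual's actual algorithm) is the log-sum-exp softmin, whose distance from the true minimum is bounded by the temperature times log of the number of groups:

```python
import numpy as np

def softmin(u, tau):
    """Smooth lower bound on min(u): -tau * log(sum_j exp(-u_j / tau)).
    Satisfies min(u) - tau*log(k) <= softmin(u, tau) <= min(u), so the
    smoothing gap vanishes as the temperature tau goes to 0."""
    u = np.asarray(u, dtype=float)
    m = u.min()                                # subtract min for numerical stability
    return m - tau * np.log(np.sum(np.exp(-(u - m) / tau)))
```

Shrinking `tau` trades smoothness for a tighter approximation of the true worst-group value.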
arXiv Detail & Related papers (2025-02-13T13:33:45Z)
- Preference-Based Multi-Agent Reinforcement Learning: Data Coverage and Algorithmic Techniques [65.55451717632317]
We study Preference-Based Multi-Agent Reinforcement Learning (PbMARL)
We identify the Nash equilibrium from a preference-only offline dataset in general-sum games.
Our findings underscore the multifaceted approach required for PbMARL.
arXiv Detail & Related papers (2024-09-01T13:14:41Z)
- Loss Balancing for Fair Supervised Learning [20.13250413610897]
Supervised learning models have been used in various domains such as lending, college admission, face recognition, natural language processing, etc.
Various notions have been proposed to address unfairness of predictors in the learning process; this work considers the Equalized Loss (EL) notion.
arXiv Detail & Related papers (2023-11-07T04:36:13Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with promising performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
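For reference, a static Kamiran-Calders style reweighing is a simpler baseline than the adaptive method described above (the adaptive variant would update weights during training): it assigns w(g, y) = P(g)P(y)/P(g, y), which makes group membership and label statistically independent under the weighted data.

```python
import numpy as np

def reweigh(groups, y):
    """Static reweighing baseline: w(g, y) = P(g) * P(y) / P(g, y).
    Under these weights, group and label are independent, so every
    group has the same weighted positive rate."""
    w = np.empty(len(y), dtype=float)
    for g in np.unique(groups):
        for c in np.unique(y):
            cell = (groups == g) & (y == c)
            w[cell] = (groups == g).mean() * (y == c).mean() / cell.mean()
    return w
```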
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Proportionally Representative Clustering [17.5359577544947]
We propose a new axiom, "proportionally representative fairness" (PRF), that is designed for clustering problems.
Our fairness concept is not satisfied by existing fair clustering algorithms.
Our algorithm for the unconstrained setting is also the first known polynomial-time approximation algorithm for the well-studied Proportional Fairness (PF) axiom.
arXiv Detail & Related papers (2023-04-27T02:01:24Z)
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
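A common starting point for AUC optimization is a pairwise surrogate for 1 - AUC over positive-negative pairs; the squared-hinge relaxation below is an illustrative example, not the paper's exact stochastic method or its fairness constraints.

```python
import numpy as np

def pairwise_auc_surrogate(scores, y):
    """Squared-hinge surrogate for 1 - AUC: penalizes every
    positive-negative pair whose score margin falls below 1."""
    margins = scores[y == 1][:, None] - scores[y == 0][None, :]
    return np.mean(np.maximum(0.0, 1.0 - margins) ** 2)
```

The surrogate is zero exactly when every positive example outscores every negative example by at least the unit margin.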
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
- Learning Towards the Largest Margins [83.7763875464011]
The loss function should promote the largest possible margins for both classes and samples.
Not only does this principled framework offer new perspectives to understand and interpret existing margin-based losses, but it can guide the design of new tools.
arXiv Detail & Related papers (2022-06-23T10:03:03Z)
- Bayes-Optimal Classifiers under Group Fairness [32.52143951145071]
This paper provides a unified framework for deriving Bayes-optimal classifiers under group fairness.
We propose a group-based thresholding method we call FairBayes, that can directly control disparity and achieve an essentially optimal fairness-accuracy tradeoff.
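Group-based thresholding of the kind FairBayes builds on can be sketched with per-group quantile thresholds that equalize positive-prediction rates (a demographic-parity style heuristic for illustration; the paper derives the Bayes-optimal rule rather than this shortcut):

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Per-group score thresholds so that each group's
    positive-prediction rate matches target_rate."""
    return {g: np.quantile(scores[groups == g], 1.0 - target_rate)
            for g in np.unique(groups)}
```

Groups whose score distributions sit higher simply receive higher thresholds, so all groups end up with the same fraction of positive predictions.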
arXiv Detail & Related papers (2022-02-20T03:35:44Z)
- All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph with a trade-off between the group and individual fairness.
arXiv Detail & Related papers (2020-10-30T15:33:13Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconvex-nonconcave min-max problems.
Our experiments empirically compare adaptive and non-adaptive gradient algorithms for GAN training.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.