FairRF: Multi-Objective Search for Single and Intersectional Software Fairness
- URL: http://arxiv.org/abs/2601.07537v1
- Date: Mon, 12 Jan 2026 13:42:45 GMT
- Title: FairRF: Multi-Objective Search for Single and Intersectional Software Fairness
- Authors: Giordano d'Alosio, Max Hort, Rebecca Moussa, Federica Sarro,
- Abstract summary: We introduce FairRF, a novel approach based on multi-objective evolutionary search to optimise fairness and effectiveness in classification tasks. We conduct an extensive empirical evaluation of FairRF against 26 different baselines in 11 different scenarios using five effectiveness and three fairness metrics.
- Score: 6.155605380087007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: The wide adoption of AI- and ML-based systems in sensitive domains raises severe concerns about their fairness. Many methods have been proposed in the literature to enhance software fairness. However, the majority behave as a black-box, not allowing stakeholders to prioritise fairness or effectiveness (i.e., prediction correctness) based on their needs. Aims: In this paper, we introduce FairRF, a novel approach based on multi-objective evolutionary search to optimise fairness and effectiveness in classification tasks. FairRF uses a Random Forest (RF) model as a base classifier and searches for the best hyperparameter configurations and data mutation to maximise fairness and effectiveness. Eventually, it returns a set of Pareto optimal solutions, allowing the final stakeholders to choose the best one based on their needs. Method: We conduct an extensive empirical evaluation of FairRF against 26 different baselines in 11 different scenarios using five effectiveness and three fairness metrics. Additionally, we also include two variations of the fairness metrics for intersectional bias for a total of six definitions analysed. Result: Our results show that FairRF can significantly improve the fairness of base classifiers, while maintaining consistent prediction effectiveness. Additionally, FairRF provides a more consistent optimisation under all fairness definitions compared to state-of-the-art bias mitigation methods and overcomes the existing state-of-the-art approach for intersectional bias mitigation. Conclusions: FairRF is an effective approach for bias mitigation also allowing stakeholders to adapt the development of fair software systems based on their specific needs.
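The abstract describes FairRF as a search over Random Forest hyperparameter configurations and data mutations that returns a Pareto-optimal set trading off fairness against effectiveness. The Pareto-selection step at the heart of such an approach can be sketched as follows; this is a minimal illustrative sketch, not FairRF's actual implementation — the hyperparameter names, the `mutation_rate` parameter, and the random scoring in `evaluate` are all hypothetical placeholders for real training and measurement.

```python
import random

def pareto_front(candidates):
    """Return the candidates not dominated by any other.

    A candidate dominates another if it is no worse on both
    objectives and strictly better on at least one.
    Objectives: maximise effectiveness, minimise unfairness.
    """
    front = []
    for c in candidates:
        dominated = any(
            o["effectiveness"] >= c["effectiveness"]
            and o["unfairness"] <= c["unfairness"]
            and (o["effectiveness"] > c["effectiveness"]
                 or o["unfairness"] < c["unfairness"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

def sample_config(rng):
    # Hypothetical search space: common Random Forest
    # hyperparameters plus a data-mutation knob.
    return {
        "n_estimators": rng.choice([50, 100, 200]),
        "max_depth": rng.choice([4, 8, 16, None]),
        "mutation_rate": rng.uniform(0.0, 0.3),
    }

def evaluate(config, rng):
    # Placeholder scoring: in the real approach these values would
    # come from training an RF on (mutated) data and measuring an
    # effectiveness metric and a fairness metric.
    return {
        "config": config,
        "effectiveness": rng.uniform(0.6, 0.95),
        "unfairness": rng.uniform(0.0, 0.4),
    }

rng = random.Random(0)
population = [evaluate(sample_config(rng), rng) for _ in range(30)]
front = pareto_front(population)
# Stakeholders then pick a solution from `front` according to
# whether they prioritise fairness or prediction correctness.
```

In practice an evolutionary algorithm (e.g. NSGA-II) would iterate sampling, evaluation, and selection rather than score a single random population, but the dominance check and the returned Pareto set are the part that lets stakeholders choose their own fairness/effectiveness trade-off.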
Related papers
- APFEx: Adaptive Pareto Front Explorer for Intersectional Fairness [16.993547305381327]
We introduce APFEx, the first framework to explicitly model intersectional fairness as a joint optimization problem. APFEx combines adaptive multi-objectives, gradient weighting, and exploration strategies to navigate fairness-accuracy trade-offs. Experiments on four real-world datasets demonstrate APFEx's superiority, reducing fairness violations while maintaining competitive accuracy.
arXiv Detail & Related papers (2025-09-17T11:13:22Z) - FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning [23.38141950440522]
We propose a controllable federated group-fairness calibration framework, named FedFACT. FedFACT identifies the Bayes-optimal classifiers under both global and local fairness constraints. We show that FedFACT consistently outperforms baselines in balancing accuracy and global-local fairness.
arXiv Detail & Related papers (2025-06-04T09:39:57Z) - Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation [63.66719748453878]
Group max-min fairness (MMF) is commonly used in fairness-aware recommender systems (RS) as an optimization objective. We present an efficient and effective algorithm named FairDual, which utilizes a dual optimization technique to minimize the Jensen gap. Our theoretical analysis demonstrates that FairDual can achieve a sub-linear convergence rate to the globally optimal solution.
arXiv Detail & Related papers (2025-02-13T13:33:45Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A performance-promising fair algorithm with better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - Scaff-PD: Communication Efficient Fair and Robust Federated Learning [92.61502701658732]
Scaff-PD is a fast and communication-efficient algorithm for distributionally robust federated learning.
Our results suggest that Scaff-PD is a promising approach for federated learning in resource-constrained and heterogeneous settings.
arXiv Detail & Related papers (2023-07-25T10:04:33Z) - Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
arXiv Detail & Related papers (2023-06-28T20:42:04Z) - Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR)
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Pareto Efficient Fairness in Supervised Learning: From Extraction to Tracing [26.704236797908177]
Algorithmic decision-making systems are becoming more pervasive. Due to the inherent trade-off between fairness measures and accuracy, it is desirable to manage the trade-off between overall loss and other criteria. We propose a definition-agnostic approach, meaning that any well-defined notion of fairness can be reduced to the PEF notion.
arXiv Detail & Related papers (2021-04-04T15:49:35Z) - Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose the Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by over 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z) - HyperFair: A Soft Approach to Integrating Fairness Criteria [17.770533330914102]
We introduce HyperFair, a framework for enforcing soft fairness constraints in a hybrid recommender system.
We propose two ways to employ the methods we introduce, including as an extension of a probabilistic soft logic recommender system template.
We empirically validate our approach by implementing multiple HyperFair hybrid recommenders and compare them to a state-of-the-art fair recommender.
arXiv Detail & Related papers (2020-09-05T05:00:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.