APFEx: Adaptive Pareto Front Explorer for Intersectional Fairness
- URL: http://arxiv.org/abs/2509.13908v2
- Date: Tue, 23 Sep 2025 15:27:29 GMT
- Title: APFEx: Adaptive Pareto Front Explorer for Intersectional Fairness
- Authors: Priyobrata Mondal, Faizanuddin Ansari, Swagatam Das
- Abstract summary: We introduce APFEx, the first framework to explicitly model intersectional fairness as a joint optimization problem. APFEx combines an adaptive multi-objective optimizer that switches between Pareto cone projection, gradient weighting, and exploration strategies to navigate fairness-accuracy trade-offs. Experiments on four real-world datasets demonstrate APFEx's superiority, reducing fairness violations while maintaining competitive accuracy.
- Score: 16.993547305381327
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring fairness in machine learning models is critical, especially when biases compound across intersecting protected attributes like race, gender, and age. While existing methods address fairness for single attributes, they fail to capture the nuanced, multiplicative biases faced by intersectional subgroups. We introduce Adaptive Pareto Front Explorer (APFEx), the first framework to explicitly model intersectional fairness as a joint optimization problem over the Cartesian product of sensitive attributes. APFEx combines three key innovations: (1) an adaptive multi-objective optimizer that dynamically switches between Pareto cone projection, gradient weighting, and exploration strategies to navigate fairness-accuracy trade-offs, (2) differentiable intersectional fairness metrics enabling gradient-based optimization of non-smooth subgroup disparities, and (3) theoretical guarantees of convergence to Pareto-optimal solutions. Experiments on four real-world datasets demonstrate APFEx's superiority, reducing fairness violations while maintaining competitive accuracy. Our work bridges a critical gap in fair ML, providing a scalable, model-agnostic solution for intersectional fairness.
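The paper's code is not reproduced here, but the central ingredients named in the abstract, a differentiable fairness gap over the Cartesian product of sensitive attributes combined with the task loss, can be sketched roughly as follows. This is a minimal illustration under assumptions: the sigmoid-based demographic-parity proxy, the fixed trade-off weight `lam`, and the names `soft_intersectional_dp_gap` and `joint_loss` are mine, not the authors'; APFEx's adaptive switching between Pareto cone projection, gradient weighting, and exploration is not reproduced.

```python
# Minimal sketch (not the authors' implementation) of a differentiable
# intersectional demographic-parity gap over the Cartesian product of
# sensitive attributes, plus a simple scalarized accuracy/fairness loss.
import itertools

import torch
import torch.nn.functional as F


def soft_intersectional_dp_gap(logits, sens):
    """Worst-case gap in mean soft positive rate across intersectional subgroups.

    logits: (n,) raw scores for the positive class.
    sens:   (n, k) integer-coded sensitive attributes (e.g. race, gender, age bin).
    """
    probs = torch.sigmoid(logits)  # soft predictions keep the gap differentiable
    values = [torch.unique(sens[:, j]).tolist() for j in range(sens.shape[1])]
    rates = []
    # Enumerate the Cartesian product of observed attribute values.
    for combo in itertools.product(*values):
        mask = torch.ones(len(probs), dtype=torch.bool)
        for j, v in enumerate(combo):
            mask &= (sens[:, j] == v)
        if mask.any():  # skip empty intersections
            rates.append(probs[mask].mean())
    rates = torch.stack(rates)
    return rates.max() - rates.min()


def joint_loss(logits, targets, sens, lam=1.0):
    """Fixed-weight stand-in for APFEx's adaptive optimizer: the paper's
    switching between Pareto cone projection, gradient weighting, and
    exploration is NOT reproduced; lam is just a constant trade-off."""
    task = F.binary_cross_entropy_with_logits(logits, targets.float())
    fair = soft_intersectional_dp_gap(logits, sens)
    return task + lam * fair
```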
Related papers
- FairRF: Multi-Objective Search for Single and Intersectional Software Fairness [6.155605380087007]
We introduce FairRF, a novel approach based on multi-objective evolutionary search to optimise fairness and effectiveness in classification tasks. We conduct an extensive empirical evaluation of FairRF against 26 different baselines in 11 different scenarios using five effectiveness and three fairness metrics.
arXiv Detail & Related papers (2026-01-12T13:42:45Z) - Fairness-Aware Insurance Pricing: A Multi-Objective Optimization Approach [1.529342790344802]
Machine learning improves predictive accuracy in insurance pricing but exacerbates trade-offs between competing fairness criteria across different discrimination measures. We propose a novel multi-objective optimization framework that jointly optimizes all four criteria via the Non-dominated Sorting Genetic Algorithm II (NSGA-II); see the non-dominated filtering sketch after this list. Our results show that XGBoost outperforms GLM in accuracy but amplifies fairness disparities; the Orthogonal model excels in group fairness, while Synthetic Control leads in individual and counterfactual fairness.
arXiv Detail & Related papers (2025-12-31T09:42:03Z) - OrthAlign: Orthogonal Subspace Decomposition for Non-Interfering Multi-Objective Alignment [61.02595549125661]
Large language model (LLM) alignment faces a critical dilemma when addressing multiple human preferences. We present OrthAlign, an innovative approach to resolve gradient-level conflicts in preference alignment. We show that OrthAlign achieves maximum single-preference improvements ranging from 34.61% to 50.89% after multiple-objective alignment.
arXiv Detail & Related papers (2025-09-29T11:16:30Z) - Intersectional Divergence: Measuring Fairness in Regression [21.34290540936501]
We propose a novel approach to measure intersectional fairness in regression tasks. We argue that it is insufficient to measure the average error of groups without regard for imbalanced domain preferences. We show how ID can be adapted into a loss function, IDLoss, that satisfies convergence guarantees and has piecewise smooth properties.
arXiv Detail & Related papers (2025-05-01T19:43:12Z) - Fairness-Aware Meta-Learning via Nash Bargaining [63.44846095241147]
We introduce a two-stage meta-learning framework to address issues of group-level fairness in machine learning.
The first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model.
We show empirical effects across various fairness objectives in six key fairness datasets and two image classification tasks.
arXiv Detail & Related papers (2024-06-11T07:34:15Z) - Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment [103.12563033438715]
Alignment in artificial intelligence pursues consistency between model responses and human preferences as well as values.
Existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility over various objectives.
We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives.
arXiv Detail & Related papers (2024-02-29T12:12:30Z) - DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z) - Finding Pareto Trade-offs in Fair and Accurate Detection of Toxic Speech [10.117274664802343]
We develop a differentiable version of a popular fairness measure, Accuracy Parity, to provide balanced accuracy across demographic groups. Next, we show how model-agnostic HyperNetwork optimization can efficiently train arbitrary NLP model architectures. We show the generality and efficacy of our methods across two datasets, three neural architectures, and three fairness losses.
arXiv Detail & Related papers (2022-04-15T22:11:25Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
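Several entries above (the NSGA-II pricing framework, the Pareto trade-off study on toxic speech, and APFEx itself) share one building block: extracting the non-dominated accuracy-fairness candidates. A rough, self-contained sketch of that filtering step follows; the candidate models and their scores are made up for illustration and are not taken from any of the papers.

```python
# Hypothetical illustration (not from any of the papers above): keep only the
# non-dominated (error, unfairness) candidates, i.e. the empirical Pareto
# front that methods such as NSGA-II or APFEx explore. Lower is better.
def pareto_front(candidates):
    """candidates: list of (name, error, unfairness) tuples."""
    front = []
    for name, err, unf in candidates:
        dominated = any(
            e <= err and u <= unf and (e < err or u < unf)
            for _, e, u in candidates
        )
        if not dominated:
            front.append((name, err, unf))
    return front


# Made-up candidate models, for illustration only.
models = [
    ("baseline",         0.12, 0.30),
    ("reweighted",       0.14, 0.10),
    ("adversarial",      0.15, 0.12),  # dominated by "reweighted", gets dropped
    ("over-regularized", 0.25, 0.09),
]
print(pareto_front(models))
# [('baseline', 0.12, 0.3), ('reweighted', 0.14, 0.1), ('over-regularized', 0.25, 0.09)]
```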