Intersectional Divergence: Measuring Fairness in Regression
- URL: http://arxiv.org/abs/2505.00830v2
- Date: Thu, 31 Jul 2025 17:00:35 GMT
- Title: Intersectional Divergence: Measuring Fairness in Regression
- Authors: Joe Germino, Nuno Moniz, Nitesh V. Chawla
- Abstract summary: We propose a novel approach to measure intersectional fairness in regression tasks. We argue that it is insufficient to measure the average error of groups without regard for imbalanced domain preferences. We show how ID can be adapted into a loss function, IDLoss, that satisfies convergence guarantees and has piecewise smooth properties.
- Score: 21.34290540936501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness in machine learning research is commonly framed in the context of classification tasks, leaving critical gaps in regression. In this paper, we propose a novel approach to measure intersectional fairness in regression tasks, going beyond the focus on single protected attributes in existing work to consider combinations of all protected attributes. Furthermore, we contend that it is insufficient to measure the average error of groups without regard for imbalanced domain preferences. Accordingly, we propose Intersectional Divergence (ID) as the first fairness measure for regression tasks that 1) describes fair model behavior across multiple protected attributes and 2) differentiates the impact of predictions in target ranges most relevant to users. We extend our proposal by demonstrating how ID can be adapted into a loss function, IDLoss, that satisfies convergence guarantees and has piecewise smooth properties that enable practical optimization. Through an extensive experimental evaluation, we demonstrate how ID allows unique insights into model behavior and fairness, and how incorporating IDLoss into optimization can considerably improve single-attribute and intersectional model fairness while maintaining a competitive balance in predictive performance.
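The paper's exact definition of ID is not reproduced in this listing. As a rough illustration of the two ingredients the abstract names, groups formed from combinations of all protected attributes, and relevance weights that emphasize the target ranges most important to users, a hypothetical sketch might look like the following; the function name, the choice of weighted absolute error, and the worst-case aggregation over groups are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def intersectional_divergence(y_true, y_pred, protected, relevance):
    """Illustrative intersectional, relevance-weighted error gap.

    protected: (n, k) array of protected-attribute values; each unique
               row defines one intersectional group.
    relevance: (n,) weights standing in for the paper's notion of
               imbalanced domain preferences over the target range.
    """
    # relevance-weighted per-sample error
    err = relevance * np.abs(y_true - y_pred)
    overall = err.mean()
    # every distinct combination of protected attributes is a group
    groups = np.unique(protected, axis=0)
    gaps = []
    for g in groups:
        mask = (protected == g).all(axis=1)
        gaps.append(abs(err[mask].mean() - overall))
    # worst-case deviation of any intersectional group from the overall error
    return max(gaps)
```

A perfectly fair predictor under this toy measure yields 0, since every group's weighted error equals the overall weighted error; taking the maximum over groups is one plausible aggregation, chosen here only to keep the sketch short.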
Related papers
- The Penalized Inverse Probability Measure for Conformal Classification [0.5172964916120902]
The work introduces the Penalized Inverse Probability (PIP) nonconformity score, and its regularized version RePIP, that allow the joint optimization of both efficiency and informativeness.
The work shows how PIP-based conformal classifiers exhibit precisely the desired behavior in comparison with other nonconformity measures and strike a good balance between informativeness and efficiency.
arXiv Detail & Related papers (2024-06-13T07:37:16Z) - Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
The current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Mean Parity Fair Regression in RKHS [43.98593032593897]
We study the fair regression problem under the notion of Mean Parity (MP) fairness.
We address this problem by leveraging reproducing kernel Hilbert spaces (RKHS).
We derive a corresponding regression function that can be implemented efficiently and provides interpretable tradeoffs.
arXiv Detail & Related papers (2023-02-21T02:44:50Z) - Group Fairness with Uncertainty in Sensitive Attributes [34.608332397776245]
A fair predictive model is crucial to mitigate biased decisions against minority groups in high-stakes applications.
We propose a bootstrap-based algorithm that achieves the target level of fairness despite the uncertainty in sensitive attributes.
Our algorithm is applicable to both discrete and continuous sensitive attributes and is effective in real-world classification and regression tasks.
arXiv Detail & Related papers (2023-02-16T04:33:00Z) - Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z) - Fair learning with Wasserstein barycenters for non-decomposable performance measures [8.508198765617198]
We show that maximizing accuracy under the demographic parity constraint is equivalent to solving a corresponding regression problem.
We extend this result to linear-fractional classification measures (e.g., $\mathrm{F}$-score, AM measure, balanced accuracy, etc.).
arXiv Detail & Related papers (2022-09-01T13:06:43Z) - Domain Adaptation with Adversarial Training on Penultimate Activations [82.9977759320565]
Enhancing model prediction confidence on unlabeled target data is an important objective in Unsupervised Domain Adaptation (UDA).
We show that this strategy is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features.
arXiv Detail & Related papers (2022-08-26T19:50:46Z) - A Two-Stage Feature Selection Approach for Robust Evaluation of Treatment Effects in High-Dimensional Observational Data [1.4710887888397084]
We propose a novel two-stage feature selection technique called Outcome Adaptive Elastic Net (OAENet).
OAENet is explicitly designed for making robust causal inference decisions using matching techniques.
Numerical experiments on simulated data demonstrate that OAENet significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T02:54:30Z) - Selective Regression Under Fairness Criteria [30.672082160544996]
In some cases, the performance on a minority group can decrease as we reduce the coverage.
We show that such an unwanted behavior can be avoided if we can construct features satisfying the sufficiency criterion.
arXiv Detail & Related papers (2021-10-28T19:05:12Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z) - Achieving Equalized Odds by Resampling Sensitive Attributes [13.114114427206678]
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.
This differentiable functional is used as a penalty driving the model parameters towards equalized odds.
We develop a formal hypothesis test to detect whether a prediction rule violates this property, the first such test in the literature.
arXiv Detail & Related papers (2020-06-08T00:18:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.