Enforcing Fairness Where It Matters: An Approach Based on Difference-of-Convex Constraints
- URL: http://arxiv.org/abs/2505.12530v1
- Date: Sun, 18 May 2025 19:50:01 GMT
- Title: Enforcing Fairness Where It Matters: An Approach Based on Difference-of-Convex Constraints
- Authors: Yutian He, Yankun Huang, Yao Yao, Qihang Lin
- Abstract summary: Existing approaches aim for full fairness across all score ranges produced by predictive models, ensuring fairness in both high- and low-scoring populations. We propose a novel framework that enforces fairness within a specific score range of interest, such as the middle range where decisions are most contested, while maintaining flexibility in other regions. We introduce two statistical metrics to rigorously evaluate fairness within a given score range.
- Score: 12.054667230143803
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Fairness in machine learning has become a critical concern, particularly in high-stakes applications. Existing approaches often focus on achieving full fairness across all score ranges generated by predictive models, ensuring fairness in both high and low-scoring populations. However, this stringent requirement can compromise predictive performance and may not align with the practical fairness concerns of stakeholders. In this work, we propose a novel framework for building partially fair machine learning models, which enforce fairness within a specific score range of interest, such as the middle range where decisions are most contested, while maintaining flexibility in other regions. We introduce two statistical metrics to rigorously evaluate partial fairness within a given score range, such as the top 20%-40% of scores. To achieve partial fairness, we propose an in-processing method by formulating the model training problem as constrained optimization with difference-of-convex constraints, which can be solved by an inexact difference-of-convex algorithm (IDCA). We provide the complexity analysis of IDCA for finding a nearly KKT point. Through numerical experiments on real-world datasets, we demonstrate that our framework achieves high predictive performance while enforcing partial fairness where it matters most.
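To make the partial-fairness idea concrete, below is a minimal sketch of a demographic-parity-style gap restricted to a score band. The function name, the band endpoints, and the gap statistic are illustrative assumptions for exposition, not the paper's two metrics.

```python
import numpy as np

def partial_fairness_gap(scores, groups, lo_q=0.6, hi_q=0.8):
    """Hypothetical gap metric restricted to the top 20%-40% score band.

    For each group, compute the fraction of its members whose score falls
    between the 0.6 and 0.8 quantiles of all scores, then return the
    largest between-group difference. A stand-in for the paper's
    partial-fairness metrics, which are defined more rigorously.
    """
    lo, hi = np.quantile(scores, [lo_q, hi_q])
    in_band = (scores >= lo) & (scores <= hi)
    rates = [in_band[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy usage on synthetic scores for two groups.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
groups = rng.integers(0, 2, size=1000)
print(partial_fairness_gap(scores, groups))
```

In the paper's in-processing method, constraints of this kind are expressed as differences of convex functions so that training can be carried out by the inexact difference-of-convex algorithm (IDCA); the sketch above only illustrates the evaluation side.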
Related papers
- FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning [13.575259448363557]
We propose a controllable group-fairness calibration framework, named FedFACT. FedFACT identifies the Bayes-optimal classifiers under both global and local fairness constraints. Experiments on multiple datasets demonstrate that FedFACT consistently outperforms baselines in balancing accuracy and global-local fairness.
arXiv Detail & Related papers (2025-06-04T09:39:57Z)
- Fairness-aware Bayes optimal functional classification [7.252020628689346]
Algorithmic fairness has become a central topic in machine learning. We study the classification of functional data under fairness constraints. We propose a unified framework for fairness-aware functional classification.
arXiv Detail & Related papers (2025-05-14T15:22:09Z)
- Federated Learning with Relative Fairness [6.460475042590685]
This paper proposes a federated learning framework designed to achieve relative fairness for clients.
The proposed framework uses a minimax formulation to minimize relative unfairness, extending previous methods in distributionally robust optimization (DRO).
A novel fairness index, based on the ratio between large and small losses among clients, is introduced, allowing the framework to assess and improve the relative fairness of trained models (a minimal form of this index is sketched below).
arXiv Detail & Related papers (2024-11-02T07:12:49Z)
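The fairness index above is described only as a ratio between large and small client losses; here is a minimal sketch under the assumption that it compares the largest and smallest per-client losses (the paper's exact definition may differ):

```python
import numpy as np

def relative_fairness_index(client_losses):
    """Assumed form: ratio of the largest to the smallest client loss.

    Values near 1 mean clients fare similarly; large values mean some
    clients are served much worse than others.
    """
    losses = np.asarray(client_losses, dtype=float)
    return float(losses.max() / losses.min())

print(relative_fairness_index([0.31, 0.28, 0.45, 0.30]))  # ~1.61
```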
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness (a simplified instance of such a regularizer is sketched below).
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
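As a rough illustration of an f-divergence fairness regularizer, the sketch below estimates the KL divergence (one member of the f-divergence family) between two groups' score histograms. The paper's min-max variational estimator is more sophisticated; all names here are illustrative.

```python
import numpy as np

def kl_fairness_penalty(scores, groups, bins=20, eps=1e-8):
    """Histogram KL divergence between the score distributions of two groups."""
    edges = np.histogram_bin_edges(scores, bins=bins)
    p, _ = np.histogram(scores[groups == 0], bins=edges)
    q, _ = np.histogram(scores[groups == 1], bins=edges)
    p = (p + eps) / (p + eps).sum()  # smooth and normalize counts
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

# A fairness-regularized objective would then look like:
#   total_loss = task_loss + lam * kl_fairness_penalty(scores, groups)
```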
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach to obtaining a fair predictive model is to train the model by optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints (one such constraint value is sketched below).
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
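One common AUC-based fairness quantity compares how positives of one group rank against negatives of the other group. The sketch below computes the gap between the two cross-group AUCs; it is an assumed instance of the constraint class studied in the paper, with illustrative names.

```python
import numpy as np

def pairwise_auc(pos, neg):
    """P(positive outscores negative), counting ties as 1/2."""
    diff = pos[:, None] - neg[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

def auc_fairness_gap(scores, labels, groups):
    """Gap between the two cross-group AUCs (assumed fairness measure)."""
    s, y, g = map(np.asarray, (scores, labels, groups))
    auc_01 = pairwise_auc(s[(y == 1) & (g == 0)], s[(y == 0) & (g == 1)])
    auc_10 = pairwise_auc(s[(y == 1) & (g == 1)], s[(y == 0) & (g == 0)])
    return abs(auc_01 - auc_10)
```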
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem (a one-dimensional sketch of the barycenter remapping follows below).
arXiv Detail & Related papers (2022-11-03T00:04:04Z)
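For one-dimensional scores, the Wasserstein-2 barycenter of the group score distributions has a closed form: its quantile function is the average of the groups' quantile functions. The sketch below uses that fact to remap every group onto a common distribution, which is the core of the post-processing idea; the group weighting and randomization details of the paper are omitted, and all names are illustrative.

```python
import numpy as np

def barycenter_postprocess(scores, groups):
    """Map each group's scores onto a common 1-D Wasserstein barycenter.

    Equal-weight sketch: the barycenter's quantile function is the mean
    of the groups' quantile functions, and each score is remapped via
    its within-group rank. Equalizing the score distributions across
    groups is one route to demographic parity.
    """
    qs = np.linspace(0, 1, 101)
    bary_q = np.mean(
        [np.quantile(scores[groups == g], qs) for g in np.unique(groups)],
        axis=0,
    )
    out = np.empty_like(scores, dtype=float)
    for g in np.unique(groups):
        mask = groups == g
        # rank of each score within its group, then read off the barycenter
        ranks = np.searchsorted(np.sort(scores[mask]), scores[mask]) / mask.sum()
        out[mask] = np.interp(ranks, qs, bary_q)
    return out
```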
- Relational Proxies: Emergent Relationships as Fine-Grained Discriminators [52.17542855760418]
We propose a novel approach that leverages relational information between the global and local parts of an object for encoding its label.
We design Relational Proxies based on our theoretical findings and evaluate them on seven challenging fine-grained benchmark datasets.
We also experimentally validate our theory and obtain consistent results across multiple benchmarks.
arXiv Detail & Related papers (2022-10-05T11:08:04Z)
- Fair and Consistent Federated Learning [48.19977689926562]
Federated learning (FL) has gained growing interest for its capability of learning collectively from distributed data sources.
We propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients.
arXiv Detail & Related papers (2021-08-19T01:56:08Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities (a brute-force sketch of this range computation appears below).
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
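A brute-force version of the disparity-range idea, assuming a finite pool of candidate models: keep every model whose error is within eps of the best, then report the smallest and largest demographic-parity gap among them. The paper's contribution is doing this tractably over richer model classes; everything here is illustrative.

```python
import numpy as np

def disparity_range(models, X, y, groups, eps=0.01):
    """Min and max group disparity over near-optimal ('good') models."""
    errs = [np.mean(m.predict(X) != y) for m in models]
    good = [m for m, e in zip(models, errs) if e <= min(errs) + eps]
    gaps = []
    for m in good:
        yhat = m.predict(X)
        # positive-prediction rate per group, then the spread across groups
        rates = [yhat[groups == g].mean() for g in np.unique(groups)]
        gaps.append(max(rates) - min(rates))
    return min(gaps), max(gaps)
```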
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.