Optimal Fairness under Local Differential Privacy
- URL: http://arxiv.org/abs/2511.16377v1
- Date: Thu, 20 Nov 2025 14:00:15 GMT
- Title: Optimal Fairness under Local Differential Privacy
- Authors: Hrad Ghoukasian, Shahab Asoodeh
- Abstract summary: We investigate how to optimally design local differential privacy mechanisms that reduce data unfairness and thereby improve fairness in downstream classification. As a theoretical contribution, we establish that for discrimination-accuracy optimal classifiers, reducing data unfairness necessarily leads to lower classification unfairness. Empirically, we demonstrate that our approach consistently outperforms existing LDP mechanisms in reducing data unfairness across diverse datasets and fairness metrics.
- Score: 2.889268075288957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate how to optimally design local differential privacy (LDP) mechanisms that reduce data unfairness and thereby improve fairness in downstream classification. We first derive a closed-form optimal mechanism for binary sensitive attributes and then develop a tractable optimization framework that yields the corresponding optimal mechanism for multi-valued attributes. As a theoretical contribution, we establish that for discrimination-accuracy optimal classifiers, reducing data unfairness necessarily leads to lower classification unfairness, thus providing a direct link between privacy-aware pre-processing and classification fairness. Empirically, we demonstrate that our approach consistently outperforms existing LDP mechanisms in reducing data unfairness across diverse datasets and fairness metrics, while maintaining accuracy close to that of non-private models. Moreover, compared with leading pre-processing and post-processing fairness methods, our mechanism achieves a more favorable accuracy-fairness trade-off while simultaneously preserving the privacy of sensitive attributes. Taken together, these results highlight LDP as a principled and effective pre-processing fairness intervention technique.
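For context on the mechanism being optimized: the canonical epsilon-LDP mechanism for a binary sensitive attribute is randomized response, which flips the reported bit with a calibrated probability. The sketch below shows only this plain baseline; the paper's closed-form optimal mechanism chooses its flip probabilities to also reduce data unfairness, and its exact form is not reproduced here.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Classic binary randomized response: report the true bit with
    probability e^eps / (1 + e^eps), otherwise report its flip.
    This satisfies epsilon-LDP and is an illustrative baseline,
    not the paper's fairness-optimal mechanism."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit
```

For example, epsilon = 1.0 reports the true bit with probability e/(1+e), roughly 0.731.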
Related papers
- Fairness Is Not Just Ethical: Performance Trade-Off via Data Correlation Tuning to Mitigate Bias in ML Software [11.766190391560684]
Correlation Tuning (CoT) is a novel pre-processing approach designed to mitigate bias by adjusting data correlations. CoT increases the true positive rate of unprivileged groups by an average of 17.5%. We will publicly release our experimental results and source code to facilitate future research.
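The summary does not spell out CoT's actual procedure, so the following is only a generic illustration of what "adjusting data correlations" can mean in pre-processing: linearly residualizing a feature against a binary sensitive attribute to shrink their correlation. This is ordinary least-squares decorrelation, not necessarily CoT's method.

```python
import numpy as np

def tune_correlation(x: np.ndarray, s: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Shrink the linear correlation between feature x and sensitive
    attribute s by removing a fraction alpha of the least-squares
    projection of x onto s. alpha=1.0 removes the linear correlation
    entirely; alpha=0.0 leaves x unchanged. Generic illustration only."""
    s_centered = s - s.mean()
    beta = (s_centered @ (x - x.mean())) / (s_centered @ s_centered)
    return x - alpha * beta * s_centered
```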
arXiv Detail & Related papers (2025-12-19T23:50:27Z)
- Mitigating Bias in Facial Recognition Systems: Centroid Fairness Loss Optimization [9.537960917804993]
Societal demand for fair AI systems has put pressure on the research community to develop predictive models that meet new fairness criteria. In particular, the variability of the errors made by certain Facial Recognition (FR) systems across specific segments of the population compromises the deployment of these systems. We propose a novel post-processing approach to improve the fairness of pre-trained FR models by optimizing a regression loss that acts on centroid-based scores.
arXiv Detail & Related papers (2025-04-27T22:17:44Z)
- Leveraging Robust Optimization for LLM Alignment under Distribution Shifts [51.74394601039711]
Preference alignment methods are increasingly critical for steering large language models to generate outputs consistent with human values. We propose a novel distribution-aware optimization framework that improves preference alignment under distribution shifts.
arXiv Detail & Related papers (2025-04-08T09:14:38Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model. Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
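Taken at face value, that combined objective has the following schematic shape (the trade-off weight lambda is illustrative notation, not necessarily the paper's):

```latex
% Schematic combined objective: preference-optimization loss
% regularized by a supervised fine-tuning (SFT) loss
\min_{\theta} \; \mathcal{L}_{\mathrm{pref}}(\theta) \;+\; \lambda\,\mathcal{L}_{\mathrm{SFT}}(\theta),
\qquad \lambda > 0
```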
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness
Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
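A generic instance of the constrained formulation described above, with assumed notation (a and b index protected groups and kappa is a fairness tolerance; the paper's exact constraint class may differ):

```latex
% Schematic AUC maximization under an AUC-based fairness constraint
\max_{\theta} \; \mathrm{AUC}(\theta)
\quad \text{s.t.} \quad
\bigl| \mathrm{AUC}_{a}(\theta) - \mathrm{AUC}_{b}(\theta) \bigr| \le \kappa
```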
arXiv Detail & Related papers (2022-12-23T22:29:08Z) - Learning Antidote Data to Individual Unfairness [23.119278763970037]
Individual fairness is a vital notion for describing fair treatment of individual cases.
Previous studies characterize individual fairness as a prediction-invariant problem.
We show our method resists individual unfairness at a minimal or zero cost to predictive utility.
arXiv Detail & Related papers (2022-11-29T03:32:39Z) - Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
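For reference, the demographic-parity constraint whose tradeoff is characterized here requires equal positive-prediction rates across sensitive groups:

```latex
% Demographic parity for a (possibly randomized) classifier \hat{Y}
% with sensitive attribute A
\Pr\bigl(\hat{Y} = 1 \mid A = a\bigr) = \Pr\bigl(\hat{Y} = 1 \mid A = a'\bigr)
\quad \text{for all groups } a, a'
```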
arXiv Detail & Related papers (2022-11-03T00:04:04Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Group-Aware Threshold Adaptation for Fair Classification [9.496524884855557]
We introduce a novel post-processing method to optimize over multiple fairness constraints.
Theoretically, our method attains a better near-optimality upper bound than existing methods under the same conditions.
Experimental results demonstrate that our method outperforms state-of-the-art methods.
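The summary does not give the adaptation rule itself, but the mechanics of group-aware thresholding are simple: apply a separate decision threshold per sensitive group to the scores of a fixed model. The sketch below shows only this mechanical step; choosing the thresholds to satisfy multiple fairness constraints near-optimally is the method's actual contribution.

```python
from typing import Dict, Hashable
import numpy as np

def group_aware_predict(scores: np.ndarray,
                        groups: np.ndarray,
                        thresholds: Dict[Hashable, float]) -> np.ndarray:
    """Post-processing step: classify each example by comparing its
    score against the threshold assigned to its sensitive group.
    The thresholds dict is assumed given; fitting it is the hard part."""
    cuts = np.array([thresholds[g] for g in groups])
    return (scores >= cuts).astype(int)
```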
arXiv Detail & Related papers (2021-11-08T04:36:37Z)
- Fairness without the sensitive attribute via Causal Variational Autoencoder [17.675997789073907]
Due to privacy concerns and various regulations such as the RGPD in the EU, many personal sensitive attributes are frequently not collected.
By leveraging recent developments for approximate inference, we propose an approach to fill this gap.
Based on a causal graph, we rely on a new variational auto-encoding based framework named SRCVAE to infer a sensitive information proxy.
arXiv Detail & Related papers (2021-09-10T17:12:52Z)
- Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic Multi-Objective Approach [0.0]
In the application of machine learning to real-life decision-making systems, the prediction outcomes might discriminate against people with sensitive attributes, leading to unfairness.
The commonly used strategy in fair machine learning is to include fairness as a constraint or a penalization term in the minimization of the prediction loss.
In this paper, we introduce a new approach to handle fairness by formulating a multi-objective optimization problem.
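Schematically, the contrast is between the common penalized formulation and the multi-objective formulation proposed here (U denotes an unfairness measure; the notation is assumed for illustration):

```latex
% Penalized single-objective vs. stochastic multi-objective formulation
\min_{\theta} \; \mathcal{L}(\theta) + \lambda\,U(\theta)
\qquad \text{versus} \qquad
\min_{\theta} \; \bigl( \mathcal{L}(\theta),\, U(\theta) \bigr)
```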
arXiv Detail & Related papers (2020-08-03T18:51:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.