FairDP: Certified Fairness with Differential Privacy
- URL: http://arxiv.org/abs/2305.16474v3
- Date: Mon, 10 Feb 2025 20:39:55 GMT
- Title: FairDP: Certified Fairness with Differential Privacy
- Authors: Khang Tran, Ferdinando Fioretto, Issa Khalil, My T. Thai, Linh Thi Xuan Phan, NhatHai Phan
- Abstract summary: This paper introduces FairDP, a novel training mechanism designed to provide group fairness certification for the trained model's decisions.
The key idea of FairDP is to train models for distinct individual groups independently, add noise to each group's gradient for data privacy protection, and integrate knowledge from group models to formulate a model that balances privacy, utility, and fairness in downstream tasks.
- Score: 55.51579601325759
- Abstract: This paper introduces FairDP, a novel training mechanism designed to provide group fairness certification for the trained model's decisions, along with a differential privacy (DP) guarantee to protect training data. The key idea of FairDP is to train models for distinct individual groups independently, add noise to each group's gradient for data privacy protection, and progressively integrate knowledge from group models to formulate a comprehensive model that balances privacy, utility, and fairness in downstream tasks. By doing so, FairDP ensures equal contribution from each group while gaining control over the amount of DP-preserving noise added to each group's contribution. To provide fairness certification, FairDP leverages the DP-preserving noise to statistically quantify and bound fairness metrics. An extensive theoretical and empirical analysis using benchmark datasets validates the efficacy of FairDP and its improved trade-offs between model utility, privacy, and fairness compared with existing methods. Our empirical results indicate that FairDP improves fairness metrics by more than 65% on average while incurring only a marginal utility drop (less than 4% on average) under rigorous DP preservation, compared with existing baselines across benchmark datasets.
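To make the training recipe concrete, the following is a minimal, hypothetical sketch of the group-wise noisy training and progressive knowledge integration described in the abstract, written as a toy NumPy logistic-regression loop. The clipping norm, noise multiplier, learning rate, and plain parameter averaging as the integration step are illustrative assumptions for this sketch, not the paper's exact algorithm or noise calibration.

```python
# Hypothetical sketch of a FairDP-style training loop (not the paper's exact method):
# each protected group takes a DP-SGD-style noisy gradient step on its own model,
# and the group models are then progressively merged into one integrated model.
import numpy as np

def clip(grad, max_norm):
    """Clip a per-group gradient to bound its L2 sensitivity."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, max_norm / (norm + 1e-12))

def fairdp_train(groups, dim, epochs=50, lr=0.1, max_norm=1.0, noise_mult=1.0, seed=0):
    """groups: list of (X, y) tuples, one per protected group."""
    rng = np.random.default_rng(seed)
    group_models = [np.zeros(dim) for _ in groups]
    global_model = np.zeros(dim)

    for _ in range(epochs):
        for k, (X, y) in enumerate(groups):
            # Per-group noisy gradient step (DP-SGD style) on a logistic model.
            logits = X @ group_models[k]
            probs = 1.0 / (1.0 + np.exp(-logits))
            grad = X.T @ (probs - y) / len(y)
            grad = clip(grad, max_norm)
            grad = grad + rng.normal(0.0, noise_mult * max_norm / len(y), size=dim)
            group_models[k] = group_models[k] - lr * grad
        # Progressively integrate group models with equal contribution per group
        # (plain averaging here; the paper's integration rule may differ).
        global_model = np.mean(group_models, axis=0)
        # Broadcast the integrated knowledge back to each group model.
        group_models = [global_model.copy() for _ in groups]

    return global_model

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dim = 5
    # Two synthetic groups with random features and binary labels.
    groups = [(rng.normal(size=(200, dim)), rng.integers(0, 2, 200)) for _ in range(2)]
    w = fairdp_train(groups, dim)
    print("integrated model:", w)
```

In the paper, the same DP-preserving noise injected per group is what enables the statistical fairness certificates; the sketch above only reproduces the training-loop structure, not the certification analysis.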
Related papers
- Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence [22.946928984205588]
Differentially private federated learning (DP-FL) is a promising technique for collaborative model training.
We propose the first DP-FL framework (namely UDP-FL) which universally harmonizes any randomization mechanism.
We show that UDP-FL exhibits substantial resilience against different inference attacks.
arXiv Detail & Related papers (2024-07-20T00:11:59Z) - Positive and Unlabeled Data: Model, Estimation, Inference, and Classification [10.44075062541605]
This study introduces a new approach to positive and unlabeled (PU) data through the double exponential tilting model (DETM).
Traditional methods often fall short because they apply only to selected-completely-at-random (SCAR) PU data.
Our DETM's dual structure effectively accommodates the more complex and underexplored selected-at-random PU data.
arXiv Detail & Related papers (2024-07-13T00:57:04Z) - Incentives in Private Collaborative Machine Learning [56.84263918489519]
Collaborative machine learning involves training models on data from multiple parties.
We introduce differential privacy (DP) as an incentive.
We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-04-02T06:28:22Z) - Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z) - Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation [32.37031528767224]
Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications.
We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators.
We test our approach on UCI Adult and Heritage Health datasets and demonstrate that our approach provides more informative representations across a range of desired parity thresholds.
arXiv Detail & Related papers (2021-01-11T18:57:33Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides statistical protection against such attacks, at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)