FairDP: Certified Fairness with Differential Privacy
- URL: http://arxiv.org/abs/2305.16474v2
- Date: Mon, 21 Aug 2023 20:09:24 GMT
- Title: FairDP: Certified Fairness with Differential Privacy
- Authors: Khang Tran, Ferdinando Fioretto, Issa Khalil, My T. Thai, NhatHai Phan
- Abstract summary: This paper introduces FairDP, a novel mechanism designed to achieve certified fairness with differential privacy (DP).
FairDP independently trains models for distinct individual groups, using group-specific clipping terms to assess and bound the disparate impacts of DP.
Extensive theoretical and empirical analyses validate the efficacy of FairDP and its improved trade-offs between model utility, privacy, and fairness compared with existing methods.
- Score: 59.56441077684935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces FairDP, a novel mechanism designed to achieve certified
fairness with differential privacy (DP). FairDP independently trains models for
distinct individual groups, using group-specific clipping terms to assess and
bound the disparate impacts of DP. Throughout the training process, the
mechanism progressively integrates knowledge from group models to formulate a
comprehensive model that balances privacy, utility, and fairness in downstream
tasks. Extensive theoretical and empirical analyses validate the efficacy of
FairDP and its improved trade-offs between model utility, privacy, and
fairness compared with existing methods.
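The abstract pins down a concrete recipe: per-group training, group-specific clipping to bound DP's disparate impact, and progressive integration of the group models. A minimal sketch of that recipe follows, assuming DP-SGD-style logistic regression with Gaussian noise and substituting plain averaging for the paper's progressive integration; the names C and sigma and the merge rule are illustrative, not FairDP's actual algorithm.

```python
# Sketch only: logistic regression, Gaussian noise, and uniform averaging
# are assumptions standing in for FairDP's actual training and
# progressive-integration steps.
import numpy as np

def dp_sgd_group(X, y, C, sigma, lr=0.1, epochs=50, seed=0):
    """Train one group's model with a group-specific clipping bound C."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))               # sigmoid predictions
        per_ex = (p - y)[:, None] * X                  # per-example gradients
        norms = np.linalg.norm(per_ex, axis=1, keepdims=True)
        clipped = per_ex / np.maximum(1.0, norms / C)  # clip each to norm <= C
        noise = rng.normal(0.0, sigma * C, size=d)     # Gaussian DP noise
        w -= lr * (clipped.sum(axis=0) + noise) / n
    return w

def fairdp_sketch(groups, clip_bounds, sigma=1.0):
    """One model per group, merged by averaging (a stand-in for the
    paper's progressive knowledge integration)."""
    models = [dp_sgd_group(X, y, C, sigma)
              for (X, y), C in zip(groups, clip_bounds)]
    return np.mean(models, axis=0)
```

The per-group clipping bound is the knob the abstract highlights: giving each group its own C is what lets the mechanism assess and bound how differently DP noise affects each group.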
Related papers
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose MITA, a Meet-In-The-Middle approach that introduces energy-based optimization to encourage mutual adaptation of the model and the data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
- CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
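The summary names the primitive: binary quantization under local DP. A minimal sketch of the uncorrelated baseline (one-bit randomized response on gradient signs) is below; the correlated randomness that gives CorBin-FL its name, and its PLDP accounting, are not reproduced, and rr_quantize/aggregate are illustrative names.

```python
# Uncorrelated baseline only: eps-local-DP randomized response on signs.
import numpy as np

def rr_quantize(grad, eps, rng):
    """Report each coordinate's sign, flipped with probability 1/(1+e^eps)."""
    p = np.exp(eps) / (1.0 + np.exp(eps))    # probability of keeping the sign
    signs = np.where(grad >= 0, 1.0, -1.0)
    keep = rng.random(grad.shape) < p
    return np.where(keep, signs, -signs)

def aggregate(reports, eps):
    """Average client bits and debias: E[report] = (2p - 1) * sign."""
    p = np.exp(eps) / (1.0 + np.exp(eps))
    return np.mean(reports, axis=0) / (2.0 * p - 1.0)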
arXiv Detail & Related papers (2024-09-20T00:23:44Z)
- Conformal Diffusion Models for Individual Treatment Effect Estimation and Inference [6.406853903837333]
The individual treatment effect is the most granular measure of a treatment's effect, defined at the level of a single individual.
We propose a novel conformal diffusion model-based approach that addresses the intricate challenges of individual-level estimation and inference.
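As a reference point for the conformal side of the method, here is plain split conformal prediction, the generic calibration tool such approaches build on; the paper's diffusion modelling of counterfactual outcomes is not reproduced, and the function below is a textbook sketch, not the authors' procedure.

```python
# Textbook split conformal; not the paper's diffusion-based procedure.
import numpy as np

def split_conformal(y_cal, yhat_cal, yhat_test, alpha=0.1):
    """(lo, hi) intervals with marginal coverage >= 1 - alpha."""
    scores = np.sort(np.abs(y_cal - yhat_cal))   # calibration residuals
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))      # conformal quantile rank
    q = scores[k - 1] if k <= n else np.inf
    return yhat_test - q, yhat_test + q
```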
arXiv Detail & Related papers (2024-08-02T21:35:08Z)
- Incentives in Private Collaborative Machine Learning [56.84263918489519]
Collaborative machine learning involves training models on data from multiple parties.
We introduce differential privacy (DP) as an incentive.
We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
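One hedged reading of "DP as an incentive" is that each party's model reward is released through the Gaussian mechanism, with the privacy budget (and hence the noise) tied to that party's contribution. The score-to-epsilon mapping below is invented purely for illustration and is not the paper's scheme; only the sigma calibration is the standard Gaussian-mechanism bound.

```python
# Illustration only: the score-to-eps mapping is invented, not the paper's.
import numpy as np

def gaussian_sigma(eps, delta, sensitivity=1.0):
    """Classic Gaussian-mechanism calibration, valid for eps in (0, 1)."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

def reward_models(w_global, scores, delta=1e-5, seed=0):
    """Higher contribution score -> larger eps -> less noisy model reward."""
    rng = np.random.default_rng(seed)
    rewards = {}
    for party, s in scores.items():
        eps = min(0.99, 0.1 + s)                 # toy mapping (assumption)
        sigma = gaussian_sigma(eps, delta)
        rewards[party] = w_global + rng.normal(0.0, sigma, w_global.shape)
    return rewards
```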
arXiv Detail & Related papers (2024-04-02T06:28:22Z)
- Spectral Co-Distillation for Personalized Federated Learning [69.97016362754319]
We propose a novel distillation method based on model spectrum information to better capture generic versus personalized representations.
We also introduce a co-distillation framework that establishes a two-way bridge between generic and personalized model training.
We demonstrate the superior performance and efficacy of our proposed spectral co-distillation method, as well as our wait-free training protocol.
arXiv Detail & Related papers (2024-01-29T16:01:38Z)
- Distributional Counterfactual Explanations With Optimal Transport [7.597676579494146]
Counterfactual explanations (CE) are the de facto method for providing insights into black-box decision-making models.
This paper proposes distributional counterfactual explanation (DCE), shifting focus to the distributional properties of observed and counterfactual data.
arXiv Detail & Related papers (2024-01-23T21:48:52Z)
- Automated discovery of trade-off between utility, privacy and fairness in machine learning models [8.328861861105889]
We show how PFairDP can be used to replicate known results that were previously achieved through a manual constraint-setting process.
We further demonstrate the effectiveness of PFairDP with experiments on multiple models and datasets.
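Stripped of specifics, the problem PFairDP automates is a multi-objective search over (utility, privacy, fairness); a minimal Pareto-filter sketch is below, with the three objectives oriented so larger is better (e.g. accuracy, -epsilon, -fairness gap). The paper's actual search strategy is not described in this summary.

```python
# Generic Pareto filter; PFairDP's actual search strategy is not shown here.
def pareto_front(results):
    """results: list of (config, objectives) with every objective
    oriented so that larger is better, e.g. (accuracy, -eps, -gap)."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and a != b
    return [(cfg, obj) for cfg, obj in results
            if not any(dominates(other, obj) for _, other in results)]
```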
arXiv Detail & Related papers (2023-11-27T10:28:44Z)
- Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts in Underspecified Visual Tasks [92.32670915472099]
We propose an ensemble diversification framework exploiting the generation of synthetic counterfactuals using Diffusion Probabilistic Models (DPMs).
We show that diffusion-guided diversification can lead models to avert attention from shortcut cues, achieving ensemble diversity performance comparable to previous methods requiring additional data collection.
arXiv Detail & Related papers (2023-10-03T17:37:52Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness by Explicability and Adversarial SHAP Learning [0.0]
We propose a new definition of fairness that emphasises the role of an external auditor and model explicability.
We develop a framework for mitigating model bias using regularizations constructed from the SHAP values of an adversarial surrogate model.
We demonstrate our approaches using gradient and adaptive boosting on: a synthetic dataset, the UCI Adult (Census) dataset and a real-world credit scoring dataset.
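In the spirit of the auditor-centric definition, a small sketch of an explicability audit: fit an adversarial surrogate to predict the protected attribute from the features plus the model's score, and measure how much SHAP attribution lands on that score. This is an audit metric invented here for illustration, not the paper's mitigation regularizer; explicability_bias is a hypothetical name.

```python
# Hypothetical audit metric; not the paper's regularizer.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

def explicability_bias(model, X, protected):
    """Mean |SHAP| an adversarial surrogate assigns to the model's score
    when predicting the protected attribute."""
    score = model.predict_proba(X)[:, 1].reshape(-1, 1)
    Z = np.hstack([X, score])                    # features + model score
    surrogate = GradientBoostingClassifier().fit(Z, protected)
    sv = np.asarray(shap.TreeExplainer(surrogate).shap_values(Z))
    return float(np.abs(sv[..., -1]).mean())
```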
arXiv Detail & Related papers (2020-03-11T14:36:34Z)