FairDP: Certified Fairness with Differential Privacy
- URL: http://arxiv.org/abs/2305.16474v2
- Date: Mon, 21 Aug 2023 20:09:24 GMT
- Title: FairDP: Certified Fairness with Differential Privacy
- Authors: Khang Tran, Ferdinando Fioretto, Issa Khalil, My T. Thai, NhatHai Phan
- Abstract summary: This paper introduces FairDP, a novel mechanism designed to achieve certified fairness with differential privacy (DP).
FairDP independently trains models for distinct individual groups, using group-specific clipping terms to assess and bound the disparate impacts of DP.
Extensive theoretical and empirical analyses validate the efficacy of FairDP and its improved trade-offs between model utility, privacy, and fairness compared with existing methods.
- Score: 59.56441077684935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces FairDP, a novel mechanism designed to achieve certified
fairness with differential privacy (DP). FairDP independently trains models for
distinct individual groups, using group-specific clipping terms to assess and
bound the disparate impacts of DP. Throughout the training process, the
mechanism progressively integrates knowledge from group models to formulate a
comprehensive model that balances privacy, utility, and fairness in downstream
tasks. Extensive theoretical and empirical analyses validate the efficacy of
FairDP and its improved trade-offs between model utility, privacy, and
fairness compared with existing methods.
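To make the mechanism described in the abstract concrete, here is a minimal, hypothetical sketch of the training pattern it outlines: per-group DP-SGD with group-specific clipping bounds, with group models progressively blended into a global model. The logistic-regression objective, the blending rule, and all function and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of FairDP-style training; not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, clip, sigma, lr):
    """One DP-SGD step for logistic regression with per-example clipping."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grads = (p - y)[:, None] * X                        # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)       # clip each norm to <= clip
    noisy_sum = grads.sum(axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy_sum / len(X)

def train_fairdp_sketch(groups, dim, rounds=100, sigma=1.0, lr=0.5, mix=0.5):
    """groups: list of (X, y, clip) tuples, one per protected group,
    each with its own group-specific clipping bound."""
    models = [np.zeros(dim) for _ in groups]
    global_w = np.zeros(dim)
    for _ in range(rounds):
        models = [dp_sgd_step(w, X, y, clip, sigma, lr)
                  for w, (X, y, clip) in zip(models, groups)]
        global_w = np.mean(models, axis=0)              # integrate group knowledge
        models = [(1 - mix) * w + mix * global_w        # assumed blending schedule
                  for w in models]
    return global_w

# Toy usage: two synthetic groups with different clipping bounds.
d = 5
groups = []
for clip in (0.5, 1.0):
    X = rng.normal(size=(200, d))
    y = (X @ np.ones(d) + rng.normal(size=200) > 0).astype(float)
    groups.append((X, y, clip))
print(train_fairdp_sketch(groups, d))
```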
Related papers
- Learning Heterogeneous Performance-Fairness Trade-offs in Federated Learning [6.6763659758988885]
HetPFL comprises Preference Sampling Adaptation (PSA) and Preference-aware Hypernet Fusion (PHF).
We prove that HetPFL converges linearly with respect to the number of rounds, under weaker assumptions than existing methods.
arXiv Detail & Related papers (2025-04-30T16:25:02Z) - PA-CFL: Privacy-Adaptive Clustered Federated Learning for Transformer-Based Sales Forecasting on Heterogeneous Retail Data [47.745068077169954]
Federated learning (FL) enables retailers to share model parameters for demand forecasting while maintaining privacy.
We propose Privacy-Adaptive Clustered Federated Learning (PA-CFL) tailored for demand forecasting on heterogeneous retail data.
arXiv Detail & Related papers (2025-03-15T18:07:54Z) - MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose Meet-In-The-Middle based MITA, which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z) - CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
arXiv Detail & Related papers (2024-09-20T00:23:44Z) - Conformal Diffusion Models for Individual Treatment Effect Estimation and Inference [6.406853903837333]
Individual treatment effect offers the most granular measure of treatment effect on an individual level.
We propose a novel conformal diffusion model-based approach that addresses those intricate challenges.
arXiv Detail & Related papers (2024-08-02T21:35:08Z) - Incentives in Private Collaborative Machine Learning [56.84263918489519]
Collaborative machine learning involves training models on data from multiple parties.
We introduce differential privacy (DP) as an incentive.
We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-04-02T06:28:22Z) - Spectral Co-Distillation for Personalized Federated Learning [69.97016362754319]
We propose a novel distillation method based on model spectrum information to better capture generic versus personalized representations.
We also introduce a co-distillation framework that establishes a two-way bridge between generic and personalized model training.
We demonstrate the superior performance and efficacy of our proposed spectral co-distillation method, as well as our wait-free training protocol.
arXiv Detail & Related papers (2024-01-29T16:01:38Z) - Distributional Counterfactual Explanations With Optimal Transport [7.597676579494146]
Counterfactual explanations (CE) are the de facto method for providing insights into black-box decision-making models.
This paper proposes distributional counterfactual explanation (DCE), shifting focus to the distributional properties of observed and counterfactual data.
arXiv Detail & Related papers (2024-01-23T21:48:52Z) - Automated discovery of trade-off between utility, privacy and fairness
in machine learning models [8.328861861105889]
We show how PFairDP can be used to replicate known results that were achieved through a manual constraint-setting process.
We further demonstrate the effectiveness of PFairDP with experiments on multiple models and datasets.
arXiv Detail & Related papers (2023-11-27T10:28:44Z) - Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts
in Underspecified Visual Tasks [92.32670915472099]
We propose an ensemble diversification framework exploiting the generation of synthetic counterfactuals using Diffusion Probabilistic Models (DPMs).
We show that diffusion-guided diversification can lead models to avert attention from shortcut cues, achieving ensemble diversity performance comparable to previous methods requiring additional data collection.
arXiv Detail & Related papers (2023-10-03T17:37:52Z) - Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation
Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR)
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Learning Informative Representation for Fairness-aware Multivariate
Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables widely exists in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z) - Controllable Guarantees for Fair Outcomes via Contrastive Information
Estimation [32.37031528767224]
Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications (a minimal parity-gap sketch follows this list).
We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators.
We test our approach on UCI Adult and Heritage Health datasets and demonstrate that our approach provides more informative representations across a range of desired parity thresholds.
arXiv Detail & Related papers (2021-01-11T18:57:33Z) - Fairness by Explicability and Adversarial SHAP Learning [0.0]
We propose a new definition of fairness that emphasises the role of an external auditor and model explicability.
We develop a framework for mitigating model bias using regularizations constructed from the SHAP values of an adversarial surrogate model.
We demonstrate our approaches using gradient and adaptive boosting on: a synthetic dataset, the UCI Adult (Census) dataset and a real-world credit scoring dataset.
arXiv Detail & Related papers (2020-03-11T14:36:34Z)
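Several of the papers above aim to measure or bound group disparities of this kind. As a small, self-contained illustration (not taken from any listed paper), the sketch below computes the demographic parity gap: the absolute difference in positive-prediction rates between two groups. The function and variable names are assumptions.

```python
# Illustrative demographic parity gap; not code from any paper listed above.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(y_pred=1 | group=0) - P(y_pred=1 | group=1)| for binary predictions."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate0 = y_pred[group == 0].mean()   # positive-prediction rate in group 0
    rate1 = y_pred[group == 1].mean()   # positive-prediction rate in group 1
    return abs(rate0 - rate1)

# Toy usage: group 0 gets positives 3/4 of the time, group 1 only 1/4.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.5
```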
This list is automatically generated from the titles and abstracts of the papers on this site.