Learning with Impartiality to Walk on the Pareto Frontier of Fairness,
Privacy, and Utility
- URL: http://arxiv.org/abs/2302.09183v1
- Date: Fri, 17 Feb 2023 23:23:45 GMT
- Title: Learning with Impartiality to Walk on the Pareto Frontier of Fairness,
Privacy, and Utility
- Authors: Mohammad Yaghini, Patty Liu, Franziska Boenisch, Nicolas Papernot
- Abstract summary: We argue that machine learning pipelines should not favor one objective over another.
We propose impartially-specified models that show the inherent trade-offs between the objectives.
We provide an answer to the question of where fairness mitigation should be integrated within a privacy-aware ML pipeline.
- Score: 28.946180502706504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deploying machine learning (ML) models often requires both fairness and
privacy guarantees. Both of these objectives present unique trade-offs with the
utility (e.g., accuracy) of the model. However, the mutual interactions between
fairness, privacy, and utility are less well-understood. As a result, often
only one objective is optimized, while the others are tuned as
hyper-parameters. Because they implicitly prioritize certain objectives, such
designs bias the model in pernicious, undetectable ways. To address this, we
adopt impartiality as a principle: design of ML pipelines should not favor one
objective over another. We propose impartially-specified models, which provide
us with accurate Pareto frontiers that show the inherent trade-offs between the
objectives. Extending two canonical ML frameworks for privacy-preserving
learning, we provide two methods (FairDP-SGD and FairPATE) to train
impartially-specified models and recover the Pareto frontier. Through
theoretical privacy analysis and a comprehensive empirical study, we provide an
answer to the question of where fairness mitigation should be integrated within
a privacy-aware ML pipeline.
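To make the frontier-recovery idea concrete, below is a minimal, hypothetical sketch (this is not the authors' FairDP-SGD or FairPATE code): it sweeps a DP noise multiplier and a fairness-penalty weight, evaluates each configuration, and keeps only the non-dominated points. The `train_and_evaluate` stub and its numbers are placeholders standing in for real differentially private training with a fairness term.

```python
# Hypothetical sketch: recover a Pareto frontier over (utility, privacy, fairness)
# by sweeping two knobs and filtering out dominated configurations.
import itertools
import random
from typing import List, Tuple

Point = Tuple[float, float, float]  # (accuracy, epsilon, fairness_gap)

def train_and_evaluate(noise_multiplier: float, fairness_weight: float) -> Point:
    """Placeholder stand-in for DP training with a fairness penalty."""
    rng = random.Random(hash((noise_multiplier, fairness_weight)))
    accuracy = 0.95 - 0.05 * noise_multiplier - 0.03 * fairness_weight + rng.uniform(-0.01, 0.01)
    epsilon = 8.0 / max(noise_multiplier, 1e-3)            # less noise -> higher privacy cost
    fairness_gap = max(0.0, 0.20 - 0.08 * fairness_weight) + rng.uniform(0.0, 0.01)
    return accuracy, epsilon, fairness_gap

def pareto_frontier(points: List[Point]) -> List[Point]:
    """Keep points not dominated by any other (maximize accuracy, minimize epsilon and gap)."""
    def dominates(a: Point, b: Point) -> bool:
        no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
        strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
        return no_worse and strictly_better
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

if __name__ == "__main__":
    grid = itertools.product([0.5, 1.0, 2.0], [0.0, 0.5, 1.0, 2.0])
    results = [train_and_evaluate(nm, fw) for nm, fw in grid]
    for acc, eps, gap in sorted(pareto_frontier(results), reverse=True):
        print(f"accuracy={acc:.3f}  epsilon={eps:.2f}  fairness_gap={gap:.3f}")
```

In the paper itself, fairness mitigation is integrated directly into the DP-SGD and PATE training pipelines; the sweep above only illustrates how non-dominated configurations form the frontier.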
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that provides user-level and sample-level central differential privacy guarantees in addition to PLDP (a generic randomized-response sketch of the underlying one-bit local-DP primitive appears after this list).
arXiv Detail & Related papers (2024-09-20T00:23:44Z) - Fairness-Aware Meta-Learning via Nash Bargaining [63.44846095241147]
We introduce a two-stage meta-learning framework to address issues of group-level fairness in machine learning.
The first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model.
We show empirical effectiveness across various fairness objectives on six key fairness datasets and two image classification tasks.
arXiv Detail & Related papers (2024-06-11T07:34:15Z) - De-amplifying Bias from Differential Privacy in Language Model
Fine-tuning [10.847913815093179]
Fairness and privacy are two important values machine learning (ML) practitioners often seek to operationalize in models.
We show that DP amplifies gender, racial, and religious bias when fine-tuning large language models.
We demonstrate that Counterfactual Data Augmentation, a known method for addressing bias, also mitigates bias amplification by DP.
arXiv Detail & Related papers (2024-02-07T00:30:58Z) - Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN can achieve a remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z) - Automated discovery of trade-off between utility, privacy and fairness
in machine learning models [8.328861861105889]
We show how PFairDP can be used to replicate known results that were achieved through a manual constraint-setting process.
We further demonstrate the effectiveness of PFairDP with experiments on multiple models and datasets.
arXiv Detail & Related papers (2023-11-27T10:28:44Z) - Holistic Survey of Privacy and Fairness in Machine Learning [10.399352534861292]
Privacy and fairness are crucial pillars of responsible Artificial Intelligence (AI) and trustworthy Machine Learning (ML).
Despite significant interest, there remains an immediate demand for more in-depth research to unravel how these two objectives can be simultaneously integrated into ML models.
We provide a thorough review of privacy and fairness in ML, including supervised, unsupervised, semi-supervised, and reinforcement learning.
arXiv Detail & Related papers (2023-07-28T23:39:29Z) - Universal Semi-supervised Model Adaptation via Collaborative Consistency
Training [92.52892510093037]
We introduce a realistic and challenging domain adaptation problem called Universal Semi-supervised Model Adaptation (USMA).
We propose a collaborative consistency training framework that regularizes the prediction consistency between two models.
Experimental results demonstrate the effectiveness of our method on several benchmark datasets.
arXiv Detail & Related papers (2023-07-07T08:19:40Z) - Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Fair Bayesian Optimization [25.80374249896801]
We introduce a general constrained Bayesian optimization framework to optimize the performance of any machine learning (ML) model.
We apply BO with fairness constraints to a range of popular models, including random forests, boosting, and neural networks.
We show that our approach is competitive with specialized techniques that enforce model-specific fairness constraints.
arXiv Detail & Related papers (2020-06-09T08:31:08Z)
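For the local differential privacy primitive that mechanisms like CorBin-FL's correlated binary quantization build on, the following is a generic, self-contained illustration of one-bit randomized response under epsilon-local differential privacy. It is not CorBin-FL itself (the cross-client correlation step that is that paper's contribution is not shown), and the function names are illustrative only.

```python
# Generic illustration: binary quantization followed by randomized response,
# the classic mechanism for epsilon-local differential privacy on single bits.
import math
import random
from typing import List

def quantize_to_bit(value: float, threshold: float = 0.0) -> int:
    """One-bit quantization of a model-update coordinate."""
    return 1 if value >= threshold else 0

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (1 + e^eps), else flip it.
    This satisfies epsilon-local differential privacy for a single bit."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def debias_mean(reports: List[int], epsilon: float) -> float:
    """Unbiased estimate of the mean of the true bits from noisy reports."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

if __name__ == "__main__":
    random.seed(0)
    true_bits = [quantize_to_bit(random.gauss(0.1, 1.0)) for _ in range(10_000)]
    reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
    print("true mean:     ", sum(true_bits) / len(true_bits))
    print("debiased est.: ", debias_mean(reports, epsilon=1.0))
```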