A Fairness Analysis on Private Aggregation of Teacher Ensembles
- URL: http://arxiv.org/abs/2109.08630v1
- Date: Fri, 17 Sep 2021 16:19:24 GMT
- Title: A Fairness Analysis on Private Aggregation of Teacher Ensembles
- Authors: Cuong Tran, My H. Dinh, Kyle Beiter, Ferdinando Fioretto
- Abstract summary: The Private Aggregation of Teacher Ensembles (PATE) is an important private machine learning framework.
This paper asks whether this privacy-preserving framework introduces or exacerbates bias and unfairness.
It shows that PATE can introduce accuracy disparity among individuals and groups of individuals.
- Score: 31.388212637482365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Private Aggregation of Teacher Ensembles (PATE) is an important private
machine learning framework. It combines multiple learning models used as
teachers for a student model that learns to predict an output chosen by noisy
voting among the teachers. The resulting model satisfies differential privacy
and has been shown effective in learning high-quality private models in
semisupervised settings or when one wishes to protect the data labels.
This paper asks whether this privacy-preserving framework introduces or
exacerbates bias and unfairness and shows that PATE can introduce accuracy
disparity among individuals and groups of individuals. The paper analyzes which
algorithmic and data properties are responsible for these disproportionate
impacts, explains why they affect different groups unevenly, and proposes
guidelines to mitigate these effects. The proposed approach is
evaluated on several datasets and settings.
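To make the noisy-voting aggregation described above concrete, the following is a minimal sketch of a Laplace-noised plurality vote over teacher predictions. The function name, the noise parameter `gamma`, and the example labels are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def noisy_argmax(teacher_labels, num_classes, gamma, rng=None):
    """PATE-style aggregation: return the class with the highest noisy vote count.

    teacher_labels: one predicted label per teacher for a single query.
    gamma: inverse noise scale; larger gamma means less noise (weaker privacy).
    """
    rng = rng or np.random.default_rng()
    votes = np.bincount(teacher_labels, minlength=num_classes).astype(float)
    # Perturb each vote count with independent Laplace noise of scale 1/gamma.
    noisy_votes = votes + rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy_votes))

# Example: 10 teachers vote on a 3-class query; the student trains on this label.
label = noisy_argmax([0, 0, 1, 0, 2, 0, 1, 0, 0, 2], num_classes=3, gamma=0.5)
```

The student only ever sees such noisy labels, never the teachers' training data, which is what yields the differential privacy guarantee; the accuracy disparity studied in the paper concerns how this aggregation can affect different groups unevenly.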
Related papers
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- On the Fairness Impacts of Private Ensembles Models [44.15285550981899]
Private Aggregation of Teacher Ensembles (PATE) is a machine learning framework that enables the creation of private models.
This paper explores whether the use of PATE can result in unfairness, and demonstrates that it can lead to accuracy disparities among groups of individuals.
arXiv Detail & Related papers (2023-05-19T16:43:53Z)
- "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities of federated models.
arXiv Detail & Related papers (2022-06-24T09:46:43Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Differentially Private Deep Learning under the Fairness Lens [34.28936739262812]
Differential Privacy (DP) is an important privacy-enhancing technology for private machine learning systems.
It allows one to measure and bound the risk associated with an individual's participation in a computation.
It was recently observed that DP learning systems may exacerbate bias and unfairness for different groups of individuals.
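For reference, the (epsilon, delta)-differential privacy guarantee referred to here bounds how much any one individual's record can change the output distribution of a randomized mechanism M:

```latex
% M is (\varepsilon, \delta)-differentially private if, for all neighboring
% datasets D, D' (differing in one individual's record) and all output sets S,
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta .
```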
arXiv Detail & Related papers (2021-06-04T19:10:09Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
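Schematically (as an assumption about the general approach, not the paper's exact formulation), Lagrangian duality turns a fairness-constrained training problem into a min-max problem over model parameters theta and a multiplier lambda:

```latex
% L(\theta): prediction loss;  c(\theta) \le 0: fairness constraint (e.g., a bound
% on the loss or accuracy gap between protected groups).
\min_{\theta} \; \max_{\lambda \ge 0} \; L(\theta) + \lambda \, c(\theta)
```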
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy [5.416049433853457]
We study how different levels of imbalance in the data affect the accuracy and the fairness of the decisions made by the model.
We demonstrate that even small imbalances and loose privacy guarantees can cause disparate impacts.
arXiv Detail & Related papers (2020-09-10T18:35:49Z)
- Differentially Private Deep Learning with Smooth Sensitivity [144.31324628007403]
We study privacy concerns through the lens of differential privacy.
In this framework, privacy guarantees are generally obtained by perturbing models in such a way that specifics of data used to train the model are made ambiguous.
One of the most important techniques used in previous works involves an ensemble of teacher models, which return information to a student based on a noisy voting procedure.
In this work, we propose a novel voting mechanism with smooth sensitivity, which we call Immutable Noisy ArgMax, that, under certain conditions, can tolerate very large random noise from the teachers without affecting the useful information transferred to the student.
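For context, smooth sensitivity (Nissim, Raskhodnikova, and Smith) replaces the local sensitivity LS_f of a function f with a smooth upper bound, which lets the noise be calibrated to the dataset at hand:

```latex
% \beta-smooth sensitivity of f at dataset x, with d(x, y) the Hamming distance:
S^{*}_{f,\beta}(x) \;=\; \max_{y} \; e^{-\beta \, d(x, y)} \, \mathrm{LS}_f(y)
```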
arXiv Detail & Related papers (2020-03-01T15:38:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.