SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles
- URL: http://arxiv.org/abs/2204.05157v1
- Date: Mon, 11 Apr 2022 14:42:54 GMT
- Title: SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles
- Authors: Cuong Tran, Keyu Zhu, Ferdinando Fioretto, Pascal Van Hentenryck
- Abstract summary: This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
- Score: 50.90773979394264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A critical concern in data-driven processes is to build models whose outcomes
do not discriminate against some demographic groups, including gender,
ethnicity, or age. To ensure non-discrimination in learning tasks, knowledge of
the group attributes is essential. However, in practice, these attributes may
not be available due to legal and ethical requirements. To address this
challenge, this paper studies a model that protects the privacy of individuals'
sensitive information while still learning non-discriminatory predictors. A key
characteristic of the proposed model is that it enables the adoption of
off-the-shelf, non-private fair models to create a privacy-preserving and fair
model. The paper analyzes the relation between
accuracy, privacy, and fairness, and the experimental evaluation illustrates
the benefits of the proposed models on several prediction tasks. In particular,
this proposal is the first to allow both scalable and accurate training of
private and fair models for very large neural networks.
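SF-PATE builds on the PATE framework, in which an ensemble of teachers, each trained on a disjoint partition of the sensitive data, answers a student's label queries through a noisy vote. The sketch below shows only that generic noisy aggregation step, assuming Laplace noise with scale 1/gamma as in the original PATE; the fairness-aware teacher training that is SF-PATE's actual contribution is not reproduced here.

```python
import numpy as np

def noisy_aggregate(teacher_preds, num_classes, gamma, rng=None):
    """PATE-style noisy argmax: tally teacher votes for one query,
    perturb each count with Laplace(1/gamma) noise, return the winner."""
    rng = np.random.default_rng() if rng is None else rng
    votes = np.bincount(teacher_preds, minlength=num_classes)
    noisy_votes = votes + rng.laplace(scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy_votes))

# Hypothetical usage: 250 teachers vote on one unlabeled student query.
rng = np.random.default_rng(0)
teacher_preds = rng.integers(0, 10, size=250)  # placeholder predictions
student_label = noisy_aggregate(teacher_preds, num_classes=10,
                                gamma=0.1, rng=rng)
```

The student is then trained only on queries labeled this way, so the privacy cost is paid per answered query rather than per training example.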
Related papers
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359] (2023-08-16)
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616] (2023-07-17)
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups defined by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
- Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061] (2022-03-14)
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well-performing models.
Our findings suggest that such unfairness can be readily found in real life and may be difficult to mitigate by technical means alone.
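The claim above is easy to reproduce empirically: two models that differ only in their random seed can match each other's aggregate accuracy while disagreeing on many individuals. A minimal sketch, assuming scikit-learn and a synthetic dataset (none of which comes from the paper itself):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two equally configured models, differing only in their random seed.
m1 = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
m2 = RandomForestClassifier(random_state=2).fit(X_tr, y_tr)

p1, p2 = m1.predict(X_te), m2.predict(X_te)
print("accuracy gap:", abs(m1.score(X_te, y_te) - m2.score(X_te, y_te)))
print("fraction of individuals receiving different predictions:",
      np.mean(p1 != p2))
```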
- A Fairness Analysis on Private Aggregation of Teacher Ensembles [31.388212637482365] (2021-09-17)
The Private Aggregation of Teacher Ensembles (PATE) is an important private machine learning framework.
This paper asks whether this privacy-preserving framework introduces or exacerbates bias and unfairness.
It shows that PATE can introduce accuracy disparity among individuals and groups of individuals.
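The disparity studied in that analysis can be quantified as the gap between per-group accuracies. A minimal sketch of that metric, assuming integer group labels (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def accuracy_parity_gap(y_true, y_pred, group):
    """Largest difference in accuracy across demographic groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accs = [np.mean(y_pred[group == g] == y_true[group == g])
            for g in np.unique(group)]
    return max(accs) - min(accs)

# Toy example: group 1 is misclassified more often than group 0.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 1, 0, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(accuracy_parity_gap(y_true, y_pred, group))  # 1.0 - 0.5 = 0.5
```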
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494] (2021-09-17)
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
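Quantification methods estimate class prevalences rather than per-instance labels. One standard building block is Adjusted Classify & Count, which corrects a naive prevalence estimate using the attribute classifier's error rates; the sketch below shows that primitive only and makes no claim about the paper's exact estimator. Applied, say, to the subpopulation that received positive predictions, it estimates the group proportions that fairness metrics need when sensitive attributes are unobserved.

```python
import numpy as np

def adjusted_classify_and_count(attr_preds, tpr, fpr):
    """Adjusted Classify & Count: correct the naive prevalence estimate
    mean(attr_preds) for the attribute classifier's TPR/FPR, which are
    assumed to have been measured on auxiliary labeled data."""
    raw = np.mean(attr_preds)
    return float(np.clip((raw - fpr) / (tpr - fpr), 0.0, 1.0))

# Hypothetical 0/1 attribute-classifier outputs on positively classified
# individuals; TPR/FPR come from a held-out auxiliary set.
attr_preds = np.array([1, 1, 0, 1, 0, 1, 1, 0])
print(adjusted_classify_and_count(attr_preds, tpr=0.9, fpr=0.1))
```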
- Improving Fairness and Privacy in Selection Problems [21.293367386282902] (2020-12-07)
We study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both fairness and privacy of supervised learning models.
We show that the exponential mechanism can improve both privacy and fairness, with a slight decrease in accuracy compared to the model without post-processing.
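For reference, the exponential mechanism itself selects a candidate with probability proportional to exp(epsilon * utility / (2 * sensitivity)); designing the utilities so that the selection is also fair is the paper's contribution and is not shown here. A minimal sketch of the standard mechanism:

```python
import numpy as np

def exponential_mechanism(utilities, epsilon, sensitivity, rng=None):
    """Sample index i with probability proportional to
    exp(epsilon * utilities[i] / (2 * sensitivity))."""
    rng = np.random.default_rng() if rng is None else rng
    logits = epsilon * np.asarray(utilities, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(probs), p=probs))

# Hypothetical selection among four candidates scored on sensitive data.
winner = exponential_mechanism([0.80, 0.78, 0.75, 0.60],
                               epsilon=1.0, sensitivity=0.05)
```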
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765] (2020-09-26)
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
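The Lagrangian-dual recipe alternates a primal step (minimize task loss plus a multiplier-weighted fairness violation) with dual ascent on the multiplier. A minimal PyTorch sketch of that alternation, using a demographic-parity surrogate; the paper's exact constraints, and the gradient clipping and noise needed for differential privacy, are omitted, and all names here are illustrative:

```python
import torch

def lagrangian_step(model, opt, x, y, group, lam, dual_lr=0.01):
    """One primal-dual step. Assumes a binary task, binary groups, and a
    batch containing members of both groups."""
    logits = model(x).squeeze(-1)
    task_loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    scores = torch.sigmoid(logits)
    # Demographic-parity surrogate: gap in mean score across groups.
    violation = (scores[group == 0].mean() - scores[group == 1].mean()).abs()
    loss = task_loss + lam * violation          # primal objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dual ascent: raise the multiplier while the constraint is violated.
    return max(0.0, lam + dual_lr * violation.item())

# Hypothetical usage on a toy batch.
model = torch.nn.Linear(5, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 5), torch.randint(0, 2, (64,)).float()
group, lam = torch.randint(0, 2, (64,)), 0.0
for _ in range(10):
    lam = lagrangian_step(model, opt, x, y, group, lam)
```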
- Differentially Private Deep Learning with Smooth Sensitivity [144.31324628007403] (2020-03-01)
We study privacy concerns through the lens of differential privacy.
In this framework, privacy guarantees are generally obtained by perturbing models so that the specifics of the data used to train them are obscured.
One of the most important techniques used in previous works involves an ensemble of teacher models, which return information to a student based on a noisy voting procedure.
In this work, we propose a novel voting mechanism with smooth sensitivity, which we call Immutable Noisy ArgMax, that, under certain conditions, can tolerate very large random noise from the teachers without affecting the useful information transferred to the student.
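The intuition behind that robustness is that when the top vote count leads by a wide margin, even heavy noise almost never flips the argmax. A small simulation illustrating the effect (not the paper's Immutable Noisy ArgMax mechanism itself):

```python
import numpy as np

rng = np.random.default_rng(0)
votes = np.array([180.0, 40.0, 30.0])   # top class leads by a wide margin
trials, flips = 10_000, 0
for _ in range(trials):
    noisy = votes + rng.laplace(scale=20.0, size=votes.size)  # large noise
    flips += int(np.argmax(noisy) != np.argmax(votes))
print("fraction of flipped decisions:", flips / trials)  # close to 0
```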
- Fairness-Aware Learning with Prejudice Free Representations [2.398608007786179] (2020-02-26)
We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps derive discrimination-free features that improve model performance.