Investigating Trade-offs in Utility, Fairness and Differential Privacy in Neural Networks
- URL: http://arxiv.org/abs/2102.05975v1
- Date: Thu, 11 Feb 2021 12:33:19 GMT
- Title: Investigating Trade-offs in Utility, Fairness and Differential Privacy in Neural Networks
- Authors: Marlotte Pannekoek, Giacomo Spigler
- Abstract summary: Machine learning algorithms must be fair and protect the privacy of those whose data are being used.
However, implementing privacy and fairness constraints might come at the cost of utility.
This paper investigates the privacy-utility-fairness trade-off in neural networks.
- Score: 7.6146285961466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To enable an ethical and legal use of machine learning algorithms, they must
both be fair and protect the privacy of those whose data are being used.
However, implementing privacy and fairness constraints might come at the cost
of utility (Jayaraman & Evans, 2019; Gong et al., 2020). This paper
investigates the privacy-utility-fairness trade-off in neural networks by
comparing a Simple (S-NN), a Fair (F-NN), a Differentially Private (DP-NN), and
a Differentially Private and Fair Neural Network (DPF-NN) to evaluate
differences in performance on metrics for privacy (epsilon, delta), fairness
(risk difference), and utility (accuracy). In the scenario with the highest
considered privacy guarantees (epsilon = 0.1, delta = 0.00001), the DPF-NN was
found to achieve better risk difference than all the other neural networks with
only a marginally lower accuracy than the S-NN and DP-NN. This model is
considered fair as it achieved a risk difference below the strict (0.05) and
lenient (0.1) thresholds. However, while the accuracy of the proposed model
improved on previous work from Xu, Yuan and Wu (2019), the risk difference was
found to be worse.
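The fairness metric used here, risk difference, is the absolute gap in positive-prediction rates between the protected and unprotected groups, which the abstract compares against the strict (0.05) and lenient (0.1) thresholds. The following is a minimal sketch of that computation and of the accuracy (utility) metric in plain NumPy; the function names and toy data are illustrative, not the paper's code.

```python
import numpy as np

def risk_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between group 1 and group 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_protected = y_pred[group == 1].mean()
    rate_unprotected = y_pred[group == 0].mean()
    return abs(rate_protected - rate_unprotected)

def accuracy(y_pred, y_true):
    """Plain classification accuracy, the utility metric."""
    return (np.asarray(y_pred) == np.asarray(y_true)).mean()

# Toy example: compare the computed risk difference against the two thresholds.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # binary predictions
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # binary protected attribute
rd = risk_difference(y_pred, group)
print(rd, rd <= 0.05, rd <= 0.1)   # value, strict threshold, lenient threshold
```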
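The abstract does not state how the DP-NN and DPF-NN are trained to meet the (epsilon, delta) guarantees; a common recipe for differentially private neural network training is DP-SGD, i.e. per-example gradient clipping followed by Gaussian noise on the summed gradient. The sketch below shows that generic recipe for a single logistic-regression step, not the paper's implementation; the clip norm, noise multiplier, and learning rate are illustrative only.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD step: clip each per-example gradient, sum, add Gaussian noise, average."""
    rng = np.random.default_rng() if rng is None else rng
    preds = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X    # logistic-loss gradients, one row per example
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_grad

# Toy usage: a few noisy steps on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5)); y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(10):
    w = dp_sgd_step(w, X, y, rng=rng)
```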
Related papers
- Are Neuromorphic Architectures Inherently Privacy-preserving? An Exploratory Study [3.4673556247932225]
Spiking Neural Networks (SNNs) are emerging as promising alternatives to Artificial Neural Networks (ANNs).
This paper examines whether SNNs inherently offer better privacy.
We analyze the impact of learning algorithms (surrogate gradient and evolutionary), frameworks (snnTorch, TENNLab, LAVA), and parameters on SNN privacy.
arXiv Detail & Related papers (2024-11-10T22:18:53Z)
- MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage [1.8238848494579714]
We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification.
Our results show that MAPPING can achieve better trade-offs between utility and fairness while limiting privacy risks of sensitive information leakage.
arXiv Detail & Related papers (2024-01-23T14:59:46Z)
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
- Threshold KNN-Shapley: A Linear-Time and Privacy-Friendly Approach to Data Valuation [57.36638157108914]
Data valuation aims to quantify the usefulness of individual data sources in training machine learning (ML) models.
However, data valuation faces significant yet frequently overlooked privacy challenges despite its importance.
This paper studies these challenges with a focus on KNN-Shapley, one of the most practical data valuation methods nowadays.
arXiv Detail & Related papers (2023-08-30T02:12:00Z)
- Individual Fairness in Bayesian Neural Networks [9.386341375741225]
We study Individual Fairness (IF) for Bayesian neural networks (BNNs).
We use bounds on statistical sampling over the input space and the relationship between adversarial and individual fairness to derive a framework for the robustness estimation of $\epsilon$-$\delta$-IF.
We find that BNNs trained by means of approximate Bayesian inference consistently tend to be markedly more individually fair than their deterministic counterparts.
arXiv Detail & Related papers (2023-04-21T09:12:14Z)
- Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks [66.0143583366533]
Graph neural networks (GNNs) have gained significant attention due to their expansive real-world applications.
To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations.
Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance.
Yet, the interplay between these two aspects remains unexplored.
arXiv Detail & Related papers (2023-01-30T14:52:23Z)
- Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks [18.4005860362025]
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs).
We propose a novel mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees.
We derive significantly better randomization probabilities and tighter error bounds at both levels of nodes' features and edges (a generic sketch of the edge randomized-response building block appears after this list).
arXiv Detail & Related papers (2022-11-10T18:52:46Z)
- TAN Without a Burn: Scaling Laws of DP-SGD [70.7364032297978]
Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently.
We decouple privacy analysis and experimental behavior of noisy training to explore the trade-off with minimal computational requirements.
We apply the proposed method on CIFAR-10 and ImageNet and, in particular, strongly improve the state-of-the-art on ImageNet with a +9 points gain in top-1 accuracy.
arXiv Detail & Related papers (2022-10-07T08:44:35Z)
- NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification [25.211265460381075]
Deep neural networks (DNNs) have demonstrated outstanding performance in various domains.
It is crucial to conduct fairness testing before DNNs are reliably deployed to sensitive domains.
We propose NeuronFair, a new fairness testing framework that differs from previous work in several key aspects.
arXiv Detail & Related papers (2021-12-25T09:19:39Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy might, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
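Two of the related entries above (Blink, and Heterogeneous Randomized Response) build on randomized response over graph edges under local differential privacy. The sketch below shows only that generic edge-flipping building block, not either paper's actual mechanism (which additionally splits the privacy budget and denoises the perturbed topology); the function name and epsilon value are illustrative assumptions.

```python
import numpy as np

def randomized_response_edges(adj, epsilon, rng=None):
    """Keep each adjacency bit with probability e^eps / (e^eps + 1), otherwise flip it.

    Simplified: perturbs every entry independently; real mechanisms handle
    graph symmetry and per-user budget allocation.
    """
    rng = np.random.default_rng() if rng is None else rng
    adj = np.asarray(adj)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = rng.random(adj.shape) >= p_keep
    return np.where(flip, 1 - adj, adj)

# Toy usage: perturb a 4-node adjacency matrix.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [1, 0, 0, 0]])
noisy_adj = randomized_response_edges(adj, epsilon=1.0)
```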