DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning
- URL: http://arxiv.org/abs/2101.03218v1
- Date: Fri, 8 Jan 2021 20:49:56 GMT
- Title: DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning
- Authors: Olakunle Ibitoye, M. Omair Shafiq, Ashraf Matrawy
- Abstract summary: Federated learning has proven to help protect against privacy violations and information leakage.
However, it introduces new risk vectors that make machine learning models more difficult to defend against adversarial samples.
We introduce DiPSeN, a Differentially Private Self-normalizing Neural Network which combines elements of differential privacy noise with self-normalizing techniques.
- Score: 6.1448102196124195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The need for robust, secure and private machine learning is an important goal
for realizing the full potential of the Internet of Things (IoT). Federated
learning has proven to help protect against privacy violations and information
leakage. However, it introduces new risk vectors which make machine learning
models more difficult to defend against adversarial samples. In this study, we
examine the role of differential privacy and self-normalization in mitigating
the risk of adversarial samples specifically in a federated learning
environment. We introduce DiPSeN, a Differentially Private Self-normalizing
Neural Network which combines elements of differential privacy noise with
self-normalizing techniques. Our empirical results on three publicly available
datasets show that DiPSeN successfully improves the adversarial robustness of a
deep learning classifier in a federated learning environment based on several
evaluation metrics.
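To make the two named ingredients concrete, here is a minimal sketch pairing SELU self-normalizing activations with Gaussian noise on clipped client updates in a toy federated round. The clipping bound, noise multiplier, and update flow are illustrative assumptions, not the authors' exact DiPSeN algorithm.

```python
import numpy as np

# SELU constants from Klambauer et al., "Self-Normalizing Neural Networks"
ALPHA, LAMBDA = 1.6732632423543772, 1.0507009873554805

def selu(x):
    """Self-normalizing activation: pushes activations toward zero mean, unit variance."""
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's update in L2 norm, then add Gaussian DP noise.
    clip_norm (C) and noise_multiplier (sigma) are illustrative choices."""
    rng = rng or np.random.default_rng()
    clipped = update / max(1.0, np.linalg.norm(update) / clip_norm)
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)

# Toy federated round: each client trains locally (with SELU layers, e.g.
# hidden = selu(x @ W + b)), then sends a clipped, noised update to the server.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=10) for _ in range(3)]
global_delta = np.mean([privatize_update(u, rng=rng) for u in client_updates], axis=0)
```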
Related papers
- Differential Privacy Mechanisms in Neural Tangent Kernel Regression [29.187250620950927]
We study differential privacy (DP) in the Neural Tangent Kernel (NTK) regression setting.
We show provable guarantees for both differential privacy and test accuracy of our NTK regression.
To our knowledge, this is the first work to provide a DP guarantee for NTK regression.
arXiv Detail & Related papers (2024-07-18T15:57:55Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT faces limitations such as the scarcity of instruction data and vulnerability to training-data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- On the Privacy Effect of Data Enhancement via the Lens of Memorization [20.63044895680223]
We propose to investigate privacy from a new perspective called memorization.
Through the lens of memorization, we find that previously deployed membership inference attacks (MIAs) produce misleading results, as they are less likely to identify samples with higher privacy risks.
We demonstrate that the generalization gap and privacy leakage are less correlated than prior results suggest.
arXiv Detail & Related papers (2022-08-17T13:02:17Z)
- Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks [0.5156484100374059]
We mounted adversarial attacks on a federated learning (FL) environment using three different datasets.
The attacks leveraged generative adversarial networks (GANs) to affect the learning process.
For all three datasets, we reconstructed the victim's real data from the shared global model parameters.
arXiv Detail & Related papers (2021-08-02T08:12:43Z)
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising the activations of a chosen layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
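The generic idea of a privatised layer can be sketched in a few lines: clip each example's activation vector in L2 norm, then add Gaussian noise before passing it on. The clip bound and noise scale below are assumptions for illustration; NeuralDP's actual mechanism and privacy accounting are more involved.

```python
import numpy as np

def noisy_layer(activations, clip=1.0, sigma=0.5, rng=None):
    """Privatize a layer's output: clip each example's activation vector in
    L2 norm, then add Gaussian noise. clip and sigma are illustrative."""
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(activations, axis=1, keepdims=True)
    clipped = activations / np.maximum(1.0, norms / clip)
    return clipped + rng.normal(0.0, sigma * clip, size=activations.shape)

batch = np.random.default_rng(1).normal(size=(4, 8))  # 4 examples, 8 hidden units
private_batch = noisy_layer(batch)
```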
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
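For context, the classic baseline mechanism for label differential privacy is randomized response, sketched below; the paper's algorithm is more refined, so this is only an illustration of the setting (the epsilon and class count are arbitrary choices).

```python
import numpy as np

def randomized_response(label, num_classes, epsilon, rng=None):
    """Keep the true label with probability e^eps / (e^eps + K - 1);
    otherwise report a uniformly random other class. This satisfies
    epsilon-label-DP."""
    rng = rng or np.random.default_rng()
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    if rng.random() < p_keep:
        return label
    others = [c for c in range(num_classes) if c != label]
    return int(rng.choice(others))

noisy_labels = [randomized_response(y, num_classes=10, epsilon=2.0) for y in (3, 7, 0)]
```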
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
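Both ingredients appear in a single DP-SGD-style update: per-example gradient clipping bounds each example's influence, and Gaussian noise masks what remains. The clipping bound C and noise multiplier sigma below are illustrative values, not the paper's experimental setup.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One DP-SGD step: clip each per-example gradient in L2 norm, sum,
    add Gaussian noise scaled to sigma * clip, then average. clip (C) and
    sigma are the two ingredients whose robustness effects the paper studies."""
    rng = rng or np.random.default_rng()
    clipped = [g / max(1.0, np.linalg.norm(g) / clip) for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy_sum / len(per_example_grads)

rng = np.random.default_rng(2)
w = np.zeros(5)
grads = [rng.normal(size=5) for _ in range(8)]  # stand-ins for per-example gradients
w = dp_sgd_step(w, grads, rng=rng)
```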
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
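A toy stand-in for that combination uses additive masking: pairwise random masks cancel in the total, so an aggregator sees only the noised sum. Real cross-silo protocols rely on additively homomorphic encryption and careful noise calibration; the masking scheme and noise scale below are illustrative assumptions.

```python
import numpy as np

def secure_sum_with_dp(values, sigma=0.5, rng=None):
    """Toy additive-masking stand-in for homomorphic secure summation.
    Pairwise masks cancel in the total, so the aggregator sees only the
    noised sum; each silo contributes a share of the Gaussian DP noise."""
    rng = rng or np.random.default_rng()
    n = len(values)
    masks = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal()
            masks[i, j], masks[j, i] = m, -m  # silo i adds m, silo j subtracts it
    reports = [v + masks[i].sum() + rng.normal(0.0, sigma / np.sqrt(n))
               for i, v in enumerate(values)]
    return sum(reports)  # masks cancel; only sum(values) + DP noise remains

total = secure_sum_with_dp([1.0, 2.0, 3.0])
```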