Feature Space Hijacking Attacks against Differentially Private Split Learning
- URL: http://arxiv.org/abs/2201.04018v1
- Date: Tue, 11 Jan 2022 16:06:18 GMT
- Title: Feature Space Hijacking Attacks against Differentially Private Split Learning
- Authors: Grzegorz Gawron, Philip Stubbings
- Abstract summary: Split learning and differential privacy are technologies with growing potential to help with privacy-compliant advanced analytics on distributed datasets.
This work applies a recent feature space hijacking attack (FSHA) to the learning process of a split neural network enhanced with differential privacy (DP), using a client-side off-the-shelf DP optimizer.
The FSHA attack obtains the client's private data reconstruction with low error rates at arbitrarily set DP epsilon levels.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Split learning and differential privacy are technologies with growing
potential to help with privacy-compliant advanced analytics on distributed
datasets. Attacks against split learning are an important evaluation tool and
have been receiving increased research attention recently. This work's
contribution is applying a recent feature space hijacking attack (FSHA) to the
learning process of a split neural network enhanced with differential privacy
(DP), using a client-side off-the-shelf DP optimizer. The FSHA attack obtains
the client's private data reconstruction with low error rates at arbitrarily set DP
epsilon levels. We also experiment with dimensionality reduction as a potential
attack risk mitigation and show that it might help to some extent. We discuss
the reasons why differential privacy is not an effective protection in this
setting and mention potential other risk mitigation methods.
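The abstract describes a client-side, off-the-shelf DP optimizer applied to the client's portion of the split network, without prescribing a particular library. The following is only a minimal sketch of what such a client could look like, assuming PyTorch with Opacus as the DP optimizer; the split point, layer sizes, toy data, and the `server_step` stand-in (which in FSHA would return gradients crafted by the attacker) are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of a split-learning client whose local layers are trained with
# an off-the-shelf DP optimizer (here Opacus). The split point, layer sizes, and
# the server_step stand-in are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Client-side portion of the split network (everything before the cut layer).
client_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 64),  # output = "smashed data" sent to the server
)

# Toy private dataset standing in for the client's local data.
xs = torch.randn(512, 1, 28, 28)
ys = torch.randint(0, 10, (512,))
train_loader = DataLoader(TensorDataset(xs, ys), batch_size=64)
optimizer = torch.optim.SGD(client_model.parameters(), lr=0.05)

# Off-the-shelf DP: per-sample gradient clipping plus Gaussian noise (DP-SGD).
privacy_engine = PrivacyEngine()
client_model, optimizer, train_loader = privacy_engine.make_private(
    module=client_model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,  # drives the resulting epsilon
    max_grad_norm=1.0,     # per-sample clipping bound
)

def server_step(smashed):
    """Stand-in for the server: returns the gradient it sends back at the cut
    layer. In FSHA this gradient is crafted by the attacker to hijack the
    client's feature space; a random tensor is used here only as a placeholder."""
    return torch.randn_like(smashed)

for x, _ in train_loader:
    optimizer.zero_grad()
    smashed = client_model(x)                    # forward pass up to the cut layer
    grad_at_cut = server_step(smashed.detach())  # gradient supplied by the server
    smashed.backward(grad_at_cut)                # backprop through the client layers
    optimizer.step()                             # DP-noised update of client parameters
```

Note that in this kind of setup DP-SGD randomizes the client's parameter updates, not the smashed activations themselves, which is one way to read the paper's conclusion that client-side DP is not an effective protection against feature space hijacking.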
Related papers
- Mitigating Disparate Impact of Differential Privacy in Federated Learning through Robust Clustering [4.768272342753616]
Federated Learning (FL) is a decentralized machine learning (ML) approach that keeps data localized and often incorporates Differential Privacy (DP) to enhance privacy guarantees.
Recent work has attempted to address performance fairness in vanilla FL through clustering, but this method remains sensitive and prone to errors.
We propose a novel clustered DPFL algorithm designed to effectively identify clients' clusters in highly heterogeneous settings.
arXiv Detail & Related papers (2024-05-29T17:03:31Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
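As a rough schematic of the kind of bound summarized in the entry above: for noisy (Langevin-type) gradient descent with step size $\eta$ and per-step Gaussian noise of variance $\sigma^2$, KL privacy bounds between the parameter distributions induced by neighboring datasets $D$ and $D'$ typically take a form like

$$\mathrm{KL}\left(\theta_T^{D} \,\middle\|\, \theta_T^{D'}\right) \;\lesssim\; \frac{\eta}{2\sigma^{2}} \sum_{t=0}^{T-1} \mathbb{E}\left[\left\lVert \nabla_{\theta}\mathcal{L}_{D}(\theta_t) - \nabla_{\theta}\mathcal{L}_{D'}(\theta_t) \right\rVert^{2}\right],$$

so the bound is driven by the expected squared gradient norm along the training trajectory. The exact constants, assumptions, and the role of initialization are specific to the cited paper; this display is only an illustration of the general shape.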
- Differentially private sliced inverse regression in the federated paradigm [3.539008590223188]
We extend Sliced inverse regression (SIR) to address the challenges of decentralized data, prioritizing privacy and communication efficiency.
Our approach, named as federated sliced inverse regression (FSIR), facilitates collaborative estimation of the sufficient dimension reduction subspace among multiple clients.
arXiv Detail & Related papers (2023-06-10T00:32:39Z)
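The entry above does not spell out FSIR's aggregation protocol. Purely as an illustration of how sliced inverse regression statistics could be pooled across clients, the sketch below has each client report per-slice feature sums and counts, with the server eigendecomposing the between-slice covariance; the slicing scheme, the pre-agreed slice edges, the omitted whitening step, and the absence of any privacy noise are all simplifying assumptions rather than FSIR's actual algorithm.

```python
# Illustrative sketch of federated sliced inverse regression: clients report
# per-slice sums and counts of (assumed pre-standardized) features; the server
# forms the between-slice covariance and takes its top eigenvectors as the
# estimated sufficient dimension reduction directions.
import numpy as np

def client_slice_stats(X, y, edges):
    """Per-slice feature sums and counts on one client (DP noise could be added
    here before release)."""
    slice_ids = np.digitize(y, edges)
    d, H = X.shape[1], len(edges) + 1
    sums, counts = np.zeros((H, d)), np.zeros(H)
    for h in range(H):
        mask = slice_ids == h
        sums[h], counts[h] = X[mask].sum(axis=0), mask.sum()
    return sums, counts

def server_sir_directions(stats, k):
    """Aggregate client statistics and return the top-k SIR directions."""
    sums = sum(s for s, _ in stats)
    counts = sum(c for _, c in stats)
    grand_mean = sums.sum(axis=0) / counts.sum()
    M = np.zeros((sums.shape[1], sums.shape[1]))
    for h in range(len(counts)):
        if counts[h] > 0:
            diff = sums[h] / counts[h] - grand_mean
            M += (counts[h] / counts.sum()) * np.outer(diff, diff)
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]

# Toy usage: three clients, slice edges agreed on in advance.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(200, 6)), rng.normal(size=200)) for _ in range(3)]
edges = np.quantile(np.concatenate([y for _, y in clients]), [0.25, 0.5, 0.75])
stats = [client_slice_stats(X, y, edges) for X, y in clients]
directions = server_sir_directions(stats, k=2)  # 6 x 2 matrix of directions
```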
- Discriminative Adversarial Privacy: Balancing Accuracy and Membership Privacy in Neural Networks [7.0895962209555465]
Discriminative Adversarial Privacy (DAP) is a learning technique designed to achieve a balance between model performance, speed, and privacy.
DAP relies on adversarial training based on a novel loss function able to minimise the prediction error while maximising the MIA's error.
In addition, we introduce a novel metric named Accuracy Over Privacy (AOP) to capture the performance-privacy trade-off.
arXiv Detail & Related papers (2023-06-05T17:25:45Z)
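The loss described in the entry above (minimize the prediction error while maximizing the membership-inference adversary's error) can be pictured roughly as follows; the weighting, the attacker's input, and the alternating training schedule are illustrative assumptions, not DAP's published loss or the AOP metric.

```python
# Rough sketch of an adversarial objective in the spirit of DAP: the task model
# minimizes its prediction loss while driving a membership-inference attacker
# (MIA) towards chance level. The lambda weight, attacker architecture, and use
# of softmax scores as the attacker's input are illustrative assumptions.
import torch
import torch.nn as nn

task_loss_fn = nn.CrossEntropyLoss()
mia_loss_fn = nn.BCEWithLogitsLoss()

def dap_style_objective(model, attacker, x, y, is_member, lam=0.5):
    """Loss for the task model: predict well, but maximize the attacker's error."""
    logits = model(x)
    task_loss = task_loss_fn(logits, y)
    # Attacker tries to tell members from non-members using the model's outputs.
    mia_logits = attacker(torch.softmax(logits, dim=1)).squeeze(1)
    mia_loss = mia_loss_fn(mia_logits, is_member.float())
    return task_loss - lam * mia_loss  # the attacker itself is trained to *minimize* mia_loss

# Toy usage with random data; real training alternates model and attacker updates.
model = nn.Linear(16, 10)
attacker = nn.Linear(10, 1)
x, y = torch.randn(32, 16), torch.randint(0, 10, (32,))
is_member = torch.randint(0, 2, (32,))
loss = dap_style_objective(model, attacker, x, y, is_member)
loss.backward()
```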
- Secure Split Learning against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks [5.209316363034367]
Split learning of deep neural networks (SplitNN) has provided a promising solution to learning jointly for the mutual interest of a guest and a host.
SplitNN creates a new attack surface for the adversarial participant, holding back its practical use in the real world.
This paper investigates the adversarial effects of highly threatening attacks, including property inference, data reconstruction, and feature hijacking attacks.
We propose a new activation function named R3eLU, transferring private smashed data and partial loss into randomized responses.
arXiv Detail & Related papers (2023-04-19T09:08:23Z)
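The entry above does not define R3eLU, so the sketch below is not that construction; it is only a generic illustration of the idea of releasing activations as randomized responses, keeping each value with some probability and otherwise releasing noise so the smashed data carries plausible deniability. The keep probability, the Laplace noise, and the function name are hypothetical.

```python
# Generic illustration (NOT the paper's R3eLU): a ReLU whose outputs are
# released as randomized responses, kept with probability keep_prob and
# otherwise replaced with Laplace noise before being sent to the host.
import torch

def randomized_response_relu(x, keep_prob=0.9, noise_scale=1.0):
    out = torch.relu(x)
    keep = torch.bernoulli(torch.full_like(out, keep_prob))
    noise = torch.distributions.Laplace(0.0, noise_scale).sample(out.shape)
    return keep * out + (1.0 - keep) * noise

smashed = randomized_response_relu(torch.randn(4, 8))  # toy smashed-data release
```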
- Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze it under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
arXiv Detail & Related papers (2022-10-24T23:50:12Z)
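A back-of-the-envelope version of the scaling quoted above: with no access to the mechanism's output, an adversary guessing a private record drawn from a set of size $M$ succeeds with probability $1/M$; under $\epsilon$-differential privacy the adversary's posterior odds for any hypothesis grow by a factor of at most $e^{\epsilon}$, so the reconstruction success probability is roughly bounded by

$$P_{\mathrm{success}} \;\lesssim\; \frac{e^{\epsilon}}{M},$$

which stays small as long as $\epsilon \ll \log M$, consistent with the $O(\log M)$ threshold above. The paper's multiple-hypothesis-testing (Fano-style) analysis makes this heuristic precise; the display here is only the informal intuition.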
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
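One simple way to realize correlated additive perturbations of the kind described above is to give users noise terms that sum to zero, so each individual transmission is masked while the aggregate at the edge server is undistorted. The sketch below, with Gaussian noise and an implicit coordination step that enforces the zero-sum constraint, is an illustration of that idea rather than the paper's exact scheme.

```python
# Illustration of correlated (zero-sum) additive perturbations for gradient
# aggregation: per-user noise hides each user's update from an eavesdropper,
# but cancels in the sum received by the server. The Gaussian noise and the
# offline coordination of the noise terms are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
num_users, dim = 5, 10
gradients = rng.normal(size=(num_users, dim))        # each user's true update

noise = rng.normal(scale=5.0, size=(num_users, dim))
noise -= noise.mean(axis=0, keepdims=True)           # enforce zero sum across users

transmitted = gradients + noise                       # what an adversary observes
aggregate = transmitted.sum(axis=0)                   # what the server decodes

assert np.allclose(aggregate, gradients.sum(axis=0))  # aggregation stays exact
```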
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with Differential Privacy (DP) was implemented as a practical learning algorithm at a manageable cost in complexity.
arXiv Detail & Related papers (2021-09-27T12:48:40Z)
- Privacy-Preserving Federated Learning on Partitioned Attributes [6.661716208346423]
Federated learning empowers collaborative training without exposing local data or models.
We introduce an adversarial learning based procedure which tunes a local model to release privacy-preserving intermediate representations.
To alleviate the accuracy decline, we propose a defense method based on the forward-backward splitting algorithm.
arXiv Detail & Related papers (2021-04-29T14:49:14Z)
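Forward-backward splitting itself is the standard proximal-gradient iteration for minimizing a sum $f + g$ of a smooth term $f$ and a possibly non-smooth term $g$:

$$x_{k+1} \;=\; \operatorname{prox}_{\gamma g}\!\left(x_k - \gamma \nabla f(x_k)\right),$$

i.e. a gradient (forward) step on $f$ followed by a proximal (backward) step on $g$. How the defense in the entry above instantiates $f$, $g$, and the step size $\gamma$ is not specified in the snippet, so this display is only the generic form.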
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features in federated learning but often overlooked in the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
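Laplacian smoothing in this context refers to post-processing a DP-noised gradient by solving $(I - \sigma L)\, g_{\text{smooth}} = g$, with $L$ the 1-D discrete Laplacian, which damps the injected high-frequency noise without affecting the privacy guarantee (post-processing). The FFT-based sketch below is an illustration under those assumptions; the smoothing strength and the flattening of model parameters into a single vector are illustrative choices, not the cited paper's exact algorithm.

```python
# Sketch of Laplacian smoothing applied to a DP-noised gradient vector:
# solve (I - sigma * L) g_smooth = g_noisy with L the 1-D discrete Laplacian
# (periodic boundary), done in O(n log n) with the FFT. Pure post-processing
# of an already-private gradient, so it does not change epsilon.
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    n = g.shape[0]
    k = np.arange(n)
    # Eigenvalues of (I - sigma * L) under the discrete Fourier basis.
    denom = 1.0 + sigma * (2.0 - 2.0 * np.cos(2.0 * np.pi * k / n))
    return np.real(np.fft.ifft(np.fft.fft(g) / denom))

rng = np.random.default_rng(0)
true_grad = np.sin(np.linspace(0, 3 * np.pi, 256))        # stand-in for a clean gradient
noisy_grad = true_grad + rng.normal(scale=0.5, size=256)  # Gaussian DP noise
smoothed = laplacian_smooth(noisy_grad, sigma=2.0)

# Compare distances to the clean gradient before and after smoothing.
print(np.linalg.norm(noisy_grad - true_grad), np.linalg.norm(smoothed - true_grad))
```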