Topology-Based Reconstruction Prevention for Decentralised Learning
- URL: http://arxiv.org/abs/2312.05248v2
- Date: Thu, 29 Feb 2024 12:36:14 GMT
- Title: Topology-Based Reconstruction Prevention for Decentralised Learning
- Authors: Florine W. Dekker (1), Zekeriya Erkin (1), Mauro Conti (2 and 1) ((1)
Delft University of Technology, the Netherlands and (2) Università di
Padova, Italy)
- Abstract summary: We show that passive honest-but-curious adversaries can infer other users' private data after several privacy-preserving summations.
We propose the first topology-based decentralised defence against reconstruction attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decentralised learning has recently gained traction as an alternative to
federated learning, distributing both data and coordination over its
users. To preserve data confidentiality, decentralised learning relies on
differential privacy, multi-party computation, or a combination thereof.
However, running multiple privacy-preserving summations in sequence may allow
adversaries to perform reconstruction attacks. Unfortunately, current
reconstruction countermeasures either cannot trivially be adapted to the
distributed setting, or add excessive amounts of noise.
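The abstract does not commit to a particular summation protocol. For intuition, a common construction is pairwise additive masking, in which each pair of neighbours shares a random mask that cancels in the total; the sketch below (all names hypothetical) shows why each summation yields an exact neighbourhood sum while individual contributions stay hidden.

```python
# Minimal sketch of one privacy-preserving summation via pairwise additive
# masking (a common construction; the paper does not commit to a specific
# protocol). For each neighbour pair (i, j), user i adds a shared random
# mask and user j subtracts it, so all masks cancel in the final sum.
import random

def masked_contributions(values, edges, modulus=2**32):
    """values: {user: private int}; edges: set of neighbour pairs (i, j)."""
    masked = dict(values)
    for i, j in edges:
        r = random.randrange(modulus)
        masked[i] = (masked[i] + r) % modulus
        masked[j] = (masked[j] - r) % modulus
    return masked

users = {"a": 5, "b": 7, "c": 3}
edges = {("a", "b"), ("b", "c"), ("a", "c")}
shares = masked_contributions(users, edges)
# Individual shares look random, but the total is exact.
assert sum(shares.values()) % 2**32 == sum(users.values()) % 2**32
```

It is precisely this exactness of each total that a sequence of summations exposes.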
In this work, we first show that passive honest-but-curious adversaries can
infer other users' private data after several privacy-preserving summations.
For example, in subgraphs with 18 users, we show that only three passive
honest-but-curious adversaries succeed at reconstructing private data 11.0% of
the time, requiring an average of 8.8 summations per adversary. The success
rate depends only on the adversaries' direct neighbourhood, independent of the
size of the full network. We consider weak adversaries, who do not control the
graph topology and can exploit neither the inner workings of the summation
protocol nor the specifics of users' data.
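To see how such an attack can work in principle: every observed summation over a known participant set gives the adversaries one linear equation in the honest neighbours' private values (after subtracting their own inputs), and once enough equations accumulate, the system can be solved. A simplified, synthetic illustration, not the paper's exact attack:

```python
# Simplified illustration of reconstruction from repeated summations: each
# round r over a known participant set S_r yields one linear equation
#   sum_{u in S_r} x_u = t_r
# in the honest users' private values x. Once the participation matrix has
# full column rank, the adversaries can solve for x exactly.
import numpy as np

x_true = np.array([4.0, 1.0, 7.0])      # honest neighbours' private values
participation = np.array([              # row r: who joined summation r
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
])
totals = participation @ x_true         # what the adversaries observe

if np.linalg.matrix_rank(participation) == participation.shape[1]:
    x_hat, *_ = np.linalg.lstsq(participation, totals, rcond=None)
    print(np.allclose(x_hat, x_true))   # True: private data recovered
```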
We develop a mathematical understanding of how reconstruction relates to
topology and propose the first topology-based decentralised defence against
reconstruction attacks. Specifically, we show that reconstruction requires a
number of adversaries linear in the length of the network's shortest cycle.
Consequently, reconstructing private data from privacy-preserving summations is
impossible in acyclic networks.
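Under the stated bound, a quick topology check tells a deployment how exposed it is: compute the girth (shortest cycle length) of the communication graph. The self-contained sketch below (a hypothetical helper, plain BFS) returns infinity for acyclic topologies, where, per the paper's result, reconstruction from summations is impossible.

```python
# Girth of an undirected graph via BFS from every vertex: for each non-tree
# edge (u, w) found during the BFS, depth[u] + depth[w] + 1 bounds a cycle
# length; the minimum over all roots is the girth. math.inf means acyclic.
from collections import deque
import math

def girth(adj):
    """adj: {node: [neighbours]}. Returns shortest cycle length or math.inf."""
    best = math.inf
    for root in adj:
        depth, parent = {root: 0}, {root: None}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in depth:
                    depth[w], parent[w] = depth[u] + 1, u
                    queue.append(w)
                elif parent[u] != w:
                    best = min(best, depth[u] + depth[w] + 1)
    return best

ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
tree = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
print(girth(ring))  # 5
print(girth(tree))  # inf: no cycles, so no reconstruction is possible
```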
Our work is a stepping stone for a formal theory of topology-based
reconstruction defences. Such a theory would generalise our countermeasure
beyond summation, define confidentiality in terms of entropy, and describe the
effects of differential privacy.
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves an over 20% improvement in forgetting error compared to the state of the art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Data Reconstruction: When You See It and When You Don't [75.03157721978279]
We aim to "sandwich" the concept of reconstruction attacks by addressing two complementary questions.
We introduce a new definitional paradigm -- Narcissus Resiliency -- to formulate a security definition for protection against reconstruction attacks.
arXiv Detail & Related papers (2024-05-24T17:49:34Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties, or even outright reconstruct participant data.
We show that simple linear models can effectively capture client-specific properties from the aggregated model updates alone.
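As a hedged illustration of that claim, the synthetic sketch below fits a logistic regression on simulated aggregated updates that sometimes include a target client whose update carries a property-specific direction; even a plain linear model separates the two cases well above chance. All data and names here are invented for illustration.

```python
# Synthetic sketch: a linear model reads a client-specific property off
# aggregated updates. Rounds where the "target" client participates carry
# a fixed signature direction buried in the other clients' noise.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim, rounds = 32, 400
signature = rng.normal(size=dim)               # direction tied to the property
labels = rng.integers(0, 2, size=rounds)       # 1 iff the target participated
aggregates = rng.normal(size=(rounds, dim)) * 3.0  # other clients' updates
aggregates += labels[:, None] * signature      # target's contribution, if any

clf = LogisticRegression(max_iter=1000).fit(aggregates[:300], labels[:300])
print(clf.score(aggregates[300:], labels[300:]))  # well above 0.5
```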
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation [110.61853418925219]
We build a stronger version of the dataset reconstruction attack and show how it can provably recover the entire training set in the infinite-width regime.
We show, both theoretically and empirically, that reconstructed images tend to be outliers in the dataset.
These reconstruction attacks can be used for dataset distillation; that is, we can retrain on reconstructed images and obtain high predictive accuracy.
arXiv Detail & Related papers (2023-02-02T21:41:59Z)
- Defending against Reconstruction Attacks with Rényi Differential Privacy [72.1188520352079]
Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.
Differential privacy is a known solution to such attacks, but is often used with a relatively large privacy budget.
We show that, for the same mechanism, we can derive privacy guarantees for reconstruction attacks that are better than the traditional ones from the literature.
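For reference, the standard definition of Rényi differential privacy (Mironov, 2017), on top of which such reconstruction-specific guarantees are derived; this is background, not the paper's new bound:

```latex
% Standard definition of R\'enyi differential privacy (Mironov, 2017).
A randomised mechanism $M$ satisfies $(\alpha, \varepsilon)$-RDP if, for all
adjacent datasets $D, D'$,
\[
  D_\alpha\bigl(M(D) \,\|\, M(D')\bigr)
  = \frac{1}{\alpha - 1}
    \log \mathbb{E}_{x \sim M(D')}
    \left[ \left( \frac{\Pr[M(D) = x]}{\Pr[M(D') = x]} \right)^{\alpha} \right]
  \le \varepsilon .
\]
```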
arXiv Detail & Related papers (2022-02-15T18:09:30Z)
- Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with differential privacy (DP) has been implemented as a practical learning algorithm at a manageable cost in complexity.
arXiv Detail & Related papers (2021-09-27T12:48:40Z)
- A Shuffling Framework for Local Differential Privacy [40.92785300658643]
LDP deployments are vulnerable to inference attacks, as an adversary can link users' noisy responses to their identities.
An alternative model, shuffle DP, prevents this by shuffling the noisy responses uniformly at random.
We show that systematic shuffling of the noisy responses can thwart specific inference attacks while retaining some meaningful data learnability.
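A minimal sketch of the baseline shuffle model may help: each user randomises locally, a shuffler permutes the reports uniformly, and an aggregator can still debias the population statistic. The paper's systematic (non-uniform) shuffling is more involved; everything below is a generic illustration with hypothetical names.

```python
# Baseline shuffle model: randomised response locally, then a uniform
# shuffle severs the link between a noisy report and its sender, while
# the population mean remains estimable after debiasing.
import random

def randomized_response(bit, p_truth=0.75):
    return bit if random.random() < p_truth else 1 - bit

def shuffle_and_report(private_bits):
    reports = [randomized_response(b) for b in private_bits]
    random.shuffle(reports)            # identities are now unlinkable
    return reports

reports = shuffle_and_report([1, 0, 1, 1, 0, 1])
p = 0.75
# E[report] = mu*(2p-1) + (1-p), so invert to estimate the true mean:
est = (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)
print(est)                             # noisy estimate of the true mean 4/6
```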
arXiv Detail & Related papers (2021-06-11T20:36:23Z)
- Privacy-Preserving Federated Learning on Partitioned Attributes [6.661716208346423]
Federated learning empowers collaborative training without exposing local data or models.
We introduce an adversarial-learning-based procedure that tunes a local model to release privacy-preserving intermediate representations.
To alleviate the accuracy decline, we propose a defense method based on the forward-backward splitting algorithm.
arXiv Detail & Related papers (2021-04-29T14:49:14Z)
- Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning [0.0]
We evaluate a novel attack method within regular federated learning, which we name the First Dense Layer Attack (Fidel).
We show how to recover, on average, twenty out of thirty private data samples from a client's model update when a fully connected neural network is used.
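The summary does not spell out the mechanism, but a well-known related observation is that a fully connected first layer leaks its input through its gradients: for y = Wx + b, the weight gradient is the outer product of the output error and the input, so the input can be read off directly. A hedged sketch of that standard fact (the Fidel attack itself is more elaborate):

```python
# Why a first dense layer leaks its input: for y = W x + b with loss L,
# dL/dW = (dL/dy) x^T and dL/db = dL/dy, so x is recoverable as any row of
# dL/dW divided by the matching entry of dL/db.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)                 # the private input sample
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
dL_dy = rng.normal(size=3)             # error signal at the layer output

dL_dW = np.outer(dL_dy, x)             # what a weight update reveals
dL_db = dL_dy

i = np.argmax(np.abs(dL_db))           # any row with nonzero bias gradient
x_rec = dL_dW[i] / dL_db[i]
print(np.allclose(x_rec, x))           # True: input reconstructed exactly
```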
arXiv Detail & Related papers (2021-01-01T04:00:23Z)
- Compression Boosts Differentially Private Federated Learning [0.7742297876120562]
Federated learning allows distributed entities to train a common model collaboratively without sharing their own data.
It remains vulnerable to various inference and reconstruction attacks where a malicious entity can learn private information about the participants' training data from the captured gradients.
We show experimentally, using two datasets, that our privacy-preserving proposal can reduce communication costs by up to 95% with only a negligible performance penalty compared to traditional non-private federated learning schemes.
arXiv Detail & Related papers (2020-11-10T13:11:03Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
The local exchange of estimates allows adversaries to infer users' private data.
Existing schemes rely on perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
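One way to read the nullspace condition, as a hedged sketch: draw perturbations and project out the component visible to the aggregate (here a plain sum, so the noise vectors are forced to sum to zero), leaving the aggregate, and hence the learning dynamics, untouched. The paper's construction is tied to its specific combination matrix; this is only an illustration.

```python
# Hedged sketch of a nullspace-style perturbation: draw noise for all
# agents, then subtract the mean so the noise sums to zero across agents.
# Individual updates are masked, yet the aggregate is exactly unchanged.
import numpy as np

rng = np.random.default_rng(2)
agents, dim = 5, 3
updates = rng.normal(size=(agents, dim))   # agents' true local updates

noise = rng.normal(size=(agents, dim))
noise -= noise.mean(axis=0)                # now sum(noise, axis=0) == 0

perturbed = updates + noise
print(np.allclose(perturbed.sum(axis=0), updates.sum(axis=0)))  # True
```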
This list is automatically generated from the titles and abstracts of the papers on this site.