Fidel: Reconstructing Private Training Samples from Weight Updates in
Federated Learning
- URL: http://arxiv.org/abs/2101.00159v1
- Date: Fri, 1 Jan 2021 04:00:23 GMT
- Title: Fidel: Reconstructing Private Training Samples from Weight Updates in
Federated Learning
- Authors: David Enthoven and Zaid Al-Ars
- Abstract summary: We evaluate a novel attack method within regular federated learning, which we name the First Dense Layer Attack (Fidel).
We show how to recover on average twenty out of thirty private data samples from a client's model update employing a fully connected neural network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing number of data collectors such as smartphones, immense
amounts of data are available. Federated learning was developed to allow for
distributed learning on a massive scale whilst still protecting each user's
privacy. This privacy claim rests on the notion that the centralized server has
no access to a client's data, only to the client's model update. In this paper,
we evaluate a novel attack method within regular federated learning, which we
name the First Dense Layer Attack (Fidel). We discuss the methodology of the
attack and, as a proof of viability, show how it can be used to great effect
against densely connected networks and convolutional neural networks. We
evaluate some key design decisions and show that the use of ReLU and Dropout
is detrimental to the privacy of a client's local dataset. We show how to
recover, on average, twenty out of thirty private data samples from a client's
model update employing a fully connected neural network, with very little
computational resources required. Similarly, we show that over thirteen out of
twenty samples can be recovered from a convolutional neural network update.
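The attack builds on a well-known property of dense layers: for a first layer computing y = Wx + b, the chain rule gives dL/dW[i, :] = dL/db[i] * x, so any neuron whose bias gradient is nonzero reveals the training input as the ratio of its weight-gradient row to its bias gradient, and a uniform SGD learning-rate factor cancels in that ratio, which is why a weight update can leak as much as a raw gradient. The NumPy sketch below illustrates only this single-sample principle under our own toy assumptions (the function name and dimensions are illustrative); it does not reproduce the paper's full methodology for disentangling batched updates.

import numpy as np

def recover_input_from_dense_update(grad_W, grad_b, tol=1e-12):
    # For y = W x + b trained on a single sample, dL/dW[i, :] = dL/db[i] * x,
    # so each neuron i with a nonzero bias gradient yields the input as
    # grad_W[i] / grad_b[i]. A plain SGD update is -lr * gradient; the -lr
    # scale cancels in the ratio, so updates leak like gradients here.
    return [grad_W[i] / grad_b[i]
            for i in range(grad_b.shape[0])
            if abs(grad_b[i]) > tol]

# Toy check with a hypothetical flattened 28x28 input and MSE loss.
rng = np.random.default_rng(0)
x = rng.random(784)                   # the "private" sample
W = rng.normal(size=(128, 784))
b = np.zeros(128)
target = rng.normal(size=128)
grad_y = 2 * ((W @ x + b) - target)   # dL/dy for MSE loss
grad_W = np.outer(grad_y, x)          # dL/dW = dL/dy . x^T
grad_b = grad_y                       # dL/db = dL/dy
recovered = recover_input_from_dense_update(grad_W, grad_b)
assert np.allclose(recovered[0], x)   # exact recovery of the sample

With a batch, each gradient row instead mixes contributions from several samples. The abstract's finding that ReLU and Dropout are detrimental to privacy is consistent with sparser per-neuron contributions making individual samples easier to separate, though the precise batch-disentangling procedure is the paper's, not this sketch's.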
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing data for training.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks [1.689369173057502]
Federated learning has created a decentralized method to train a machine learning model without needing direct access to client data.
However, malicious clients are able to corrupt the global model and degrade performance across all clients within a federation.
Our novel aggregation method, FedBayes, mitigates the effect of a malicious client by calculating the probabilities of a client's model weights.
arXiv Detail & Related papers (2023-12-04T21:37:50Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties solely from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- When the Curious Abandon Honesty: Federated Learning Is Not Private [36.95590214441999]
In federated learning (FL), data does not leave personal devices while those devices jointly train a machine learning model.
We show a novel data reconstruction attack which allows an active and dishonest central party to efficiently extract user data from the received gradients.
arXiv Detail & Related papers (2021-12-06T10:37:03Z)
- UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning [0.0]
The split learning framework aims to split up the model between the client and the server.
We show that the split learning paradigm can pose serious security risks and provides no more than a false sense of security.
arXiv Detail & Related papers (2021-08-20T07:39:16Z)
- Adversarial Robustness through Bias Variance Decomposition: A New Perspective for Federated Learning [41.525434598682764]
Federated learning learns a neural network model by aggregating the knowledge from a group of distributed clients under the privacy-preserving constraint.
We show that this paradigm might inherit the adversarial vulnerability of the centralized neural network.
We propose an adversarially robust federated learning framework, named Fed_BVA, with improved server and client update mechanisms.
arXiv Detail & Related papers (2020-09-18T18:58:25Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.