Collusion Resistant Federated Learning with Oblivious Distributed
Differential Privacy
- URL: http://arxiv.org/abs/2202.09897v1
- Date: Sun, 20 Feb 2022 19:52:53 GMT
- Title: Collusion Resistant Federated Learning with Oblivious Distributed
Differential Privacy
- Authors: David Byrd, Vaikkunth Mugunthan, Antigoni Polychroniadou, Tucker
Hybinette Balch
- Abstract summary: Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
- Score: 4.951247283741297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy-preserving federated learning enables a population of distributed
clients to jointly learn a shared model while keeping client training data
private, even from an untrusted server. Prior works do not provide efficient
solutions that protect against collusion attacks in which parties collaborate
to expose an honest client's model parameters. We present an efficient
mechanism based on oblivious distributed differential privacy that is the first
to protect against such client collusion, including the "Sybil" attack in which
a server preferentially selects compromised devices or simulates fake devices.
We leverage the novel privacy mechanism to construct a secure federated
learning protocol and prove the security of that protocol. We conclude with
empirical analysis of the protocol's execution speed, learning accuracy, and
privacy performance on two data sets within a realistic simulation of 5,000
distributed network clients.
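To make the abstract's mechanism concrete, here is a minimal Python sketch of distributed differential privacy combined with pairwise-masked aggregation. It is an illustration under stated assumptions, not the paper's protocol: the function names (client_update, pairwise_masks), the Gaussian noise split, and all parameters are hypothetical, and the oblivious shuffling that gives the paper its collusion resistance is omitted.

```python
# Illustrative sketch (NOT the paper's protocol): each client adds only a
# 1/sqrt(n) share of the calibrated Gaussian noise, plus pairwise masks
# that cancel in the sum, so the server sees only the noised aggregate.
import numpy as np

rng = np.random.default_rng(0)

def client_update(grad, n_clients, sigma, mask):
    """One client's masked, partially-noised update (hypothetical API)."""
    # Each client adds N(0, sigma^2 / n) noise, so the SUM of all n
    # shares carries exactly N(0, sigma^2) -- the central-DP noise level.
    noise = rng.normal(0.0, sigma / np.sqrt(n_clients), size=grad.shape)
    return grad + noise + mask

def pairwise_masks(n_clients, dim):
    """Masks m_ij = -m_ji that cancel when the server sums all updates."""
    masks = [np.zeros(dim) for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            r = rng.normal(0.0, 10.0, size=dim)  # shared secret of (i, j)
            masks[i] += r
            masks[j] -= r
    return masks

n, dim, sigma = 5, 4, 1.0
grads = [rng.normal(size=dim) for _ in range(n)]
masks = pairwise_masks(n, dim)
updates = [client_update(g, n, sigma, m) for g, m in zip(grads, masks)]

# Masks cancel in the sum; the residual is exactly the DP noise.
agg = np.sum(updates, axis=0)
print(agg - np.sum(grads, axis=0))
```

The property the sketch shows is that no individual update reaches the server in the clear, yet the aggregate carries exactly the centrally-calibrated noise; the paper's contribution is making this resilient when clients and server collude, which this toy version does not address.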
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z) - Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - Robust and Actively Secure Serverless Collaborative Learning [48.01929996757643]
Collaborative machine learning (ML) is widely used to enable institutions to learn better models from distributed data.
While collaborative approaches to learning intuitively protect user data, they remain vulnerable to the server, the clients, or both.
We propose a peer-to-peer (P2P) learning scheme that is secure against malicious servers and robust to malicious clients.
arXiv Detail & Related papers (2023-10-25T14:43:03Z) - Blockchain-based Optimized Client Selection and Privacy Preserved
Framework for Federated Learning [2.4201849657206496]
Federated learning is a distributed mechanism that trains large-scale neural network models with the participation of multiple clients.
Because of this, federated learning is considered a secure solution to data privacy issues.
We propose a blockchain-based optimized client selection and privacy-preserving framework.
arXiv Detail & Related papers (2023-07-25T01:35:51Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - Client-specific Property Inference against Secure Aggregation in
Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information about participant data, such as membership or properties, or even to reconstruct it outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z) - Secure Aggregation Is Not All You Need: Mitigating Privacy Attacks with
Noise Tolerance in Federated Learning [0.0]
Federated learning aims to preserve data privacy while creating AI models.
Current approaches rely heavily on secure aggregation protocols to preserve data privacy.
We investigate vulnerabilities to secure aggregation that could arise if the server is fully malicious.
arXiv Detail & Related papers (2022-11-10T05:13:08Z) - PRECAD: Privacy-Preserving and Robust Federated Learning via
Crypto-Aided Differential Privacy [14.678119872268198]
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.
Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
arXiv Detail & Related papers (2021-10-22T04:08:42Z) - PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party
Setting [3.822543555265593]
This paper presents PRICURE, a system that combines complementary strengths of secure multi-party computation and differential privacy.
PRICURE enables privacy-preserving collaborative prediction among multiple model owners.
We evaluate PRICURE on neural networks across four datasets including benchmark medical image classification datasets.
arXiv Detail & Related papers (2021-02-19T05:55:53Z) - Efficient Sparse Secure Aggregation for Federated Learning [0.20052993723676896]
We adapt compression-based federated techniques to additive secret sharing (a toy sketch of this primitive follows the list), leading to an efficient secure aggregation protocol.
We prove its privacy against malicious adversaries and its correctness in the semi-honest setting.
Compared to prior works on secure aggregation, our protocol has lower and adaptable communication costs for similar accuracy.
arXiv Detail & Related papers (2020-07-29T14:28:30Z)
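Several entries above (the SecAgg analyses, PRICURE, and the sparse secure aggregation paper) build on additive secret sharing. A toy Python sketch of that primitive follows; the field modulus, share counts, and function names are illustrative assumptions, not any paper's API.

```python
# Toy additive secret sharing over a prime field (illustrative only):
# each client splits its quantized update into random shares, and the
# shares reveal only the TOTAL across clients, never any single value.
import secrets

P = 2**61 - 1  # Mersenne prime chosen for illustration

def share(value, n_shares):
    """Split `value` into n_shares additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % P)  # correcting share
    return shares

def aggregate(all_shares):
    """Sum each aggregator's column of shares, then combine the partial
    sums; this reconstructs only the total of all clients' values."""
    partials = [sum(col) % P for col in zip(*all_shares)]
    return sum(partials) % P

# Three clients, two non-colluding aggregators (toy parameters).
client_values = [7, 11, 23]  # e.g., quantized gradient entries
all_shares = [share(v, 2) for v in client_values]
assert aggregate(all_shares) == sum(client_values) % P
print(aggregate(all_shares))  # 41: the sum, with no single value exposed
```

A real deployment would quantize model updates into field elements and send each share to a different non-colluding party; the point here is only that shares sum to the total while each individual value stays hidden.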