Fingerprint Attack: Client De-Anonymization in Federated Learning
- URL: http://arxiv.org/abs/2310.05960v1
- Date: Tue, 12 Sep 2023 11:10:30 GMT
- Title: Fingerprint Attack: Client De-Anonymization in Federated Learning
- Authors: Qiongkai Xu and Trevor Cohn and Olga Ohrimenko
- Abstract summary: Federated Learning allows collaborative training without data sharing in settings where participants do not trust the central server and one another.
This paper seeks to examine whether such a defense is adequate to guarantee anonymity, by proposing a novel fingerprinting attack over gradients sent by the participants to the server.
- Score: 44.77305865061609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning allows collaborative training without data sharing in
settings where participants do not trust the central server and one another.
Privacy can be further improved by ensuring that communication between the
participants and the server is anonymized through a shuffle, decoupling the
participant identity from their data. This paper seeks to examine whether such
a defense is adequate to guarantee anonymity, by proposing a novel
fingerprinting attack over gradients sent by the participants to the server. We
show that clustering of gradients can easily break the anonymization in an
empirical study of learning federated language models on two language corpora.
We then show that training with differential privacy can provide a practical
defense against our fingerprint attack.
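The following is a minimal illustrative sketch, not the paper's exact pipeline: it assumes the server observes one flattened gradient vector per client per round (hidden behind a shuffle) and clusters the normalized updates with k-means so that updates from the same client group together across rounds. The synthetic "fingerprint" data, function name, and choice of scikit-learn KMeans are assumptions made for demonstration only.

```python
# Illustrative sketch (assumed setup): cluster anonymized per-round gradient
# updates so that updates from the same client fall in the same cluster,
# linking a client's contributions across rounds despite the shuffle.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def deanonymize_by_clustering(gradients, n_clients):
    """gradients: array of shape (n_rounds * n_clients, dim), shuffled within
    each round so the sender of each row is unknown. Returns one cluster label
    per row; rows sharing a label are attributed to the same pseudonymous client."""
    # L2-normalize so clustering reflects gradient direction (cosine geometry),
    # which tends to be more client-specific than raw magnitude.
    X = normalize(gradients)
    return KMeans(n_clusters=n_clients, n_init=10, random_state=0).fit_predict(X)

# Toy demonstration with synthetic client "fingerprints": each client's updates
# are noisy copies of a client-specific direction.
rng = np.random.default_rng(0)
n_clients, n_rounds, dim = 5, 20, 256
fingerprints = rng.normal(size=(n_clients, dim))
rows, senders = [], []
for _ in range(n_rounds):
    order = rng.permutation(n_clients)          # the shuffle hides who sent what
    for c in order:
        rows.append(fingerprints[c] + 0.3 * rng.normal(size=dim))
        senders.append(c)
labels = deanonymize_by_clustering(np.array(rows), n_clients)
# If clustering succeeds, each true sender maps to (almost) a single cluster label.
print({c: set(np.array(labels)[np.array(senders) == c].tolist()) for c in range(n_clients)})
```

In this toy setting, adding calibrated noise to each update (as in differentially private training) blurs the per-client directions and degrades the clustering, which is the intuition behind the defense evaluated in the paper.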
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real footage, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing data for training.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new client-side defense mechanism, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- AnoFel: Supporting Anonymity for Privacy-Preserving Federated Learning [4.086517346598676]
Federated learning enables users to collaboratively train a machine learning model over their private datasets.
Secure aggregation protocols are employed to mitigate information leakage about the local datasets.
This setup, however, still leaks the participation of a user in a training iteration, which can also be sensitive.
We introduce AnoFel, the first framework to support private and anonymous dynamic participation in federated learning.
arXiv Detail & Related papers (2023-06-12T02:25:44Z)
- Secure Aggregation Is Not All You Need: Mitigating Privacy Attacks with Noise Tolerance in Federated Learning [0.0]
Federated learning aims to preserve data privacy while creating AI models.
Current approaches rely heavily on secure aggregation protocols to preserve data privacy.
We investigate vulnerabilities of secure aggregation that could arise if the server is fully malicious.
arXiv Detail & Related papers (2022-11-10T05:13:08Z)
- Protecting Split Learning by Potential Energy Loss [70.81375125791979]
We focus on the privacy leakage from the forward embeddings of split learning.
We propose the potential energy loss to make the forward embeddings more 'complicated'.
arXiv Detail & Related papers (2022-10-18T06:21:11Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- Efficient and Privacy Preserving Group Signature for Federated Learning [2.121963121603413]
Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce the threats to user data privacy.
This paper proposes an efficient and privacy-preserving protocol for FL based on group signature.
arXiv Detail & Related papers (2022-07-12T04:12:10Z)
- Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z)
- UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning [0.0]
The split learning framework splits the model between the client and the server.
We show that the split learning paradigm can pose serious security risks and provides no more than a false sense of security.
arXiv Detail & Related papers (2021-08-20T07:39:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.