Covert Communication Based on the Poisoning Attack in Federated Learning
- URL: http://arxiv.org/abs/2306.01342v1
- Date: Fri, 2 Jun 2023 08:11:32 GMT
- Title: Covert Communication Based on the Poisoning Attack in Federated Learning
- Authors: Junchuan Liang and Rong Wang
- Abstract summary: In deep learning, many methods have been developed for hiding information in models to achieve covert communication.
We propose a novel method for covert communication in federated learning based on the poisoning attack.
Our approach achieves 100% accuracy in covert message transmission between two clients and is shown to be both stealthy and robust.
- Score: 21.596265153097352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Covert communication has become an important area of research in computer
security. It involves hiding specific information on a carrier for message
transmission and is often used to transmit private data, military secrets, and
even malware. In deep learning, many methods have been developed for hiding
information in models to achieve covert communication. However, these methods
are not applicable to federated learning, where model aggregation invalidates
the exact information embedded in the model by the client. To address this
problem, we propose a novel method for covert communication in federated
learning based on the poisoning attack. Our approach achieves 100% accuracy in
covert message transmission between two clients and is shown to be both
stealthy and robust through extensive experiments. However, existing defense
methods are limited in their effectiveness against our attack scheme,
highlighting the urgent need for new protection methods to be developed. Our
study emphasizes the necessity of research in covert communication and serves
as a foundation for future research in federated learning attacks and defenses.
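The aggregation problem the abstract points to can be illustrated with a minimal sketch (not the paper's method): under plain FedAvg, a bit string naively embedded in one client's weight update is averaged together with the honest clients' updates and can no longer be decoded reliably, which is why exact model-embedding schemes from centralized deep learning break down in federated learning. The client count, carrier coordinates, and sign-based encoding below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's scheme): why naive weight-embedding covert
# channels break under FedAvg. One client hides a bit string in the signs of a
# few parameters; server-side averaging with honest clients washes it out.

rng = np.random.default_rng(0)
num_clients, dim = 10, 32
message_bits = rng.integers(0, 2, size=8)        # hypothetical covert payload

# Honest client updates: small random weight deltas.
updates = rng.normal(0.0, 0.01, size=(num_clients, dim))

# Sender (client 0) encodes each bit in the sign of a dedicated coordinate.
carrier_idx = np.arange(len(message_bits))
updates[0, carrier_idx] = np.where(message_bits == 1, 0.02, -0.02)

# Server performs FedAvg: a plain mean over all client updates.
aggregated = updates.mean(axis=0)

# Receiver tries to read the bits back from the aggregated model.
decoded = (aggregated[carrier_idx] > 0).astype(int)
print("sent:        ", message_bits)
print("decoded:     ", decoded)
print("bit accuracy:", (decoded == message_bits).mean())
```

With ten clients the decoded bits typically disagree with the sent bits for a noticeable fraction of positions, since the embedded signal is diluted by averaging; this unreliability of direct embedding is the failure mode that motivates a poisoning-based channel.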
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the federated algorithm by combining metadata gathered from the local training instances with differential privacy techniques.
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
arXiv Detail & Related papers (2024-04-19T10:36:00Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Security and Privacy Issues and Solutions in Federated Learning for
Digital Healthcare [0.0]
We present vulnerabilities, attacks, and defenses based on the widened attack surfaces of Federated Learning.
We suggest promising new research directions toward a more robust FL.
arXiv Detail & Related papers (2024-01-16T16:07:53Z) - The Model Inversion Eavesdropping Attack in Semantic Communication
Systems [19.385375706864334]
We introduce the model inversion eavesdropping attack (MIEA) to reveal the risk of privacy leaks in the semantic communication system.
MIEA reconstructs the raw message, where both the white-box and black-box settings are considered.
We propose a defense method based on random permutation and substitution to defend against MIEA.
arXiv Detail & Related papers (2023-08-08T14:50:05Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first defense robust against strong adaptive adversaries, remains effective in real-world data scenarios, and incurs an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z) - Certifiably Robust Policy Learning against Adversarial Communication in
Multi-agent Systems [51.6210785955659]
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.
However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe issue that is underexplored.
In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C < \frac{N-1}{2}$ agents to a victim agent.
arXiv Detail & Related papers (2022-06-21T07:32:18Z) - Homomorphic Encryption and Federated Learning based Privacy-Preserving
CNN Training: COVID-19 Detection Use-Case [0.41998444721319217]
This paper proposes a privacy-preserving federated learning algorithm for medical data using homomorphic encryption.
The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from adversaries.
arXiv Detail & Related papers (2022-04-16T08:38:35Z) - Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.