Data Poisoning Attacks on Federated Machine Learning
- URL: http://arxiv.org/abs/2004.10020v1
- Date: Sun, 19 Apr 2020 03:45:05 GMT
- Title: Data Poisoning Attacks on Federated Machine Learning
- Authors: Gan Sun, Yang Cong (Senior Member, IEEE), Jiahua Dong, Qiang Wang, and
Ji Liu
- Abstract summary: Federated machine learning enables resource constrained node devices to learn a shared model while keeping the training data local.
The communication protocol amongst different nodes could be exploited by attackers to launch data poisoning attacks.
We propose a novel systems-aware optimization method, ATTack on Federated Learning (AT2FL).
- Score: 34.48190607495785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated machine learning, which enables resource-constrained node devices (e.g., mobile phones and IoT devices) to learn a shared model while keeping the training data local, can provide privacy, security, and economic benefits by designing an effective communication protocol. However, the communication protocol amongst different nodes could be exploited by attackers to launch data poisoning attacks, which have been demonstrated to be a serious threat to most machine learning models. In this paper, we explore the vulnerability of federated machine learning. More specifically, we focus on attacking a federated multi-task learning framework, i.e., a federated learning framework that adopts a general multi-task learning formulation to handle statistical challenges. We formulate the problem of computing optimal poisoning attacks on federated multi-task learning as a bilevel program that is adaptive to an arbitrary choice of target nodes and source attacking nodes. We then propose a novel systems-aware optimization method, ATTack on Federated Learning (AT2FL), which efficiently derives the implicit gradients for the poisoned data and further computes optimal attack strategies for federated machine learning. Our work is an early study of data poisoning attacks on federated learning. Finally, experimental results on real-world datasets show that the federated multi-task learning model is very sensitive to poisoning attacks when the attackers either directly poison the target nodes or indirectly poison related nodes by exploiting the communication protocol.
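To make the formulation concrete, the sketch below gives a generic bilevel program for optimal data poisoning against a federated multi-task learner. This is an illustrative reconstruction using standard notation (a poisoned set $D_p$, per-node models $w_t$, and a task-relationship regularizer), not necessarily the exact objective used by AT2FL.

$$
\begin{aligned}
\max_{D_p}\;\; & \sum_{t \in \mathcal{T}} L\!\left(D_t^{\text{val}};\, w_t^{*}\right) \\
\text{s.t.}\;\; & \{w_t^{*}\} \in \arg\min_{\{w_t\}} \sum_{t=1}^{m} L\!\left(D_t \cup D_p^{(t)};\, w_t\right) + \lambda\, \mathcal{R}(W, \Omega),
\end{aligned}
$$

Here $D_t$ is the clean local data of node $t$, $D_p^{(t)}$ the poisoned samples injected at (or propagated to) node $t$, $\mathcal{T}$ the set of target nodes whose loss the attacker tries to maximize, and $\mathcal{R}(W, \Omega)$ the multi-task regularizer that couples the per-node models $W$ through a task-relationship matrix $\Omega$. Because $w_t^{*}$ depends on $D_p$ only implicitly through the inner problem, computing an optimal attack requires the implicit gradient of the outer objective with respect to the poisoned data, which is the quantity AT2FL is designed to derive efficiently.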
Related papers
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
FLEKD (FL via ensemble knowledge distillation) enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Mitigating Data Injection Attacks on Federated Learning [20.24380409762923]
Federated learning is a technique that allows multiple entities to collaboratively train models using their data.
Despite its advantages, federated learning can be susceptible to false data injection attacks.
We propose a novel technique to detect and mitigate data injection attacks on federated learning systems.
arXiv Detail & Related papers (2023-12-04T18:26:31Z) - Federated Learning Based Distributed Localization of False Data
Injection Attacks on Smart Grids [5.705281336771011]
A false data injection attack (FDIA) is a class of attack that targets smart measurement devices by injecting malicious data.
We propose a federated learning-based scheme combined with a hybrid deep neural network architecture.
We validate the proposed architecture by extensive simulations on the IEEE 57, 118, and 300 bus systems and real electricity load data.
arXiv Detail & Related papers (2023-06-17T20:29:55Z) - Network Anomaly Detection Using Federated Learning [0.483420384410068]
We introduce a robust and scalable framework that enables efficient network anomaly detection.
We leverage federated learning, in which multiple participants train a global model jointly.
The proposed method performs better than baseline machine learning techniques on the UNSW-NB15 data set.
arXiv Detail & Related papers (2023-03-13T20:16:30Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT)
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z) - Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of malicious peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z) - Security of Distributed Machine Learning: A Game-Theoretic Approach to
Design Secure DSVM [31.480769801354413]
This work aims to develop secure distributed algorithms to protect the learning from data poisoning and network attacks.
We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines (SVMs) and an attacker who is capable of modifying training data and labels.
The numerical results show that distributed SVMs are prone to failure under different types of attacks, and that the attacks' impact depends strongly on the network structure and attack capabilities.
arXiv Detail & Related papers (2020-03-08T18:54:17Z)