Using Anomaly Detection to Detect Poisoning Attacks in Federated
Learning Applications
- URL: http://arxiv.org/abs/2207.08486v2
- Date: Tue, 9 May 2023 13:30:46 GMT
- Title: Using Anomaly Detection to Detect Poisoning Attacks in Federated
Learning Applications
- Authors: Ali Raza, Shujun Li, Kim-Phuc Tran, and Ludovic Koehl
- Abstract summary: Adversarial attacks such as poisoning attacks have attracted the attention of many machine learning researchers.
Traditionally, poisoning attacks attempt to inject adversarial training data in order to manipulate the trained model.
In federated learning (FL), data poisoning attacks can be generalized to model poisoning attacks, which cannot be detected by simpler methods because the detector has no access to local training data.
We propose a novel framework for detecting poisoning attacks in FL, which employs a reference model based on a public dataset and an auditor model to detect malicious updates.
- Score: 2.978389704820221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks such as poisoning attacks have attracted the attention of
many machine learning researchers. Traditionally, poisoning attacks attempt to
inject adversarial training data in order to manipulate the trained model. In
federated learning (FL), data poisoning attacks can be generalized to model
poisoning attacks, which cannot be detected by simpler methods because the
detector has no access to local training data. State-of-the-art poisoning
attack detection methods for FL have various weaknesses, e.g., the number of
attackers has to be known or assumed not to be too high, they work with
i.i.d. data only, or they incur high computational complexity. To overcome
these weaknesses, we propose a
novel framework for detecting poisoning attacks in FL, which employs a
reference model based on a public dataset and an auditor model to detect
malicious updates. We implemented a detector based on the proposed framework
and using a one-class support vector machine (OC-SVM), which reaches the lowest
possible computational complexity O(K) where K is the number of clients. We
evaluated our detector's performance against state-of-the-art (SOTA) poisoning
attacks for two typical applications of FL: electrocardiograph (ECG)
classification and human activity recognition (HAR). Our experimental results
showed that our detector outperforms other SOTA detection methods.
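
The abstract describes fitting a one-class SVM over features of client updates, with a reference model trained on a public dataset. The snippet below is a minimal sketch of that idea, not the authors' implementation: the feature choice (distance and cosine similarity to a flattened reference update) and all variable names are illustrative assumptions.

# Minimal sketch (assumptions, not the paper's code) of an OC-SVM auditor that
# flags anomalous client updates in one FL round. Each client update and the
# reference update (from a model trained on public data) are flattened vectors.
import numpy as np
from sklearn.svm import OneClassSVM

def extract_features(client_updates, reference_update):
    """Per-client features: L2 distance and cosine similarity to the reference update."""
    feats = []
    for u in client_updates:
        dist = np.linalg.norm(u - reference_update)
        cos = float(u @ reference_update /
                    (np.linalg.norm(u) * np.linalg.norm(reference_update) + 1e-12))
        feats.append([dist, cos])
    return np.asarray(feats)

def flag_malicious(client_updates, reference_update, nu=0.1):
    """Fit a one-class SVM on this round's features; return True for flagged clients.
    Feature extraction is linear in the number of clients K, and the OC-SVM is
    fit on only K two-dimensional points."""
    X = extract_features(client_updates, reference_update)
    detector = OneClassSVM(kernel="rbf", gamma="scale", nu=nu)
    labels = detector.fit_predict(X)   # +1 = inlier, -1 = outlier
    return labels == -1

# Toy usage: nine benign clients near the reference and one crude outlier.
rng = np.random.default_rng(0)
ref = rng.normal(size=1000)
updates = [ref + 0.01 * rng.normal(size=1000) for _ in range(9)]
updates.append(ref + 5.0 * rng.normal(size=1000))
print(flag_malicious(np.stack(updates), ref))

Fitting the OC-SVM per round implicitly treats the majority of clients as benign in that round, which mirrors the usual assumption behind anomaly-based update filtering.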
Related papers
- Unlearnable Examples Detection via Iterative Filtering [84.59070204221366]
Deep neural networks have been shown to be vulnerable to data poisoning attacks.
Detecting poisoned samples in a mixed dataset is both valuable and challenging.
We propose an Iterative Filtering approach for identifying unlearnable examples (UEs).
arXiv Detail & Related papers (2024-08-15T13:26:13Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
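
The FreqFed entry above only states that model updates are transformed into the frequency domain before aggregation. The sketch below is a loose illustration of that idea under stated assumptions: the use of a DCT, the number of retained coefficients, and the median-distance filtering rule are all my simplifications, not the paper's mechanism.

# Illustrative frequency-domain filtering of FL updates (assumptions, not FreqFed itself).
import numpy as np
from scipy.fft import dct

def low_freq_signature(update, n_coeffs=32):
    """DCT of a flattened update, keeping only the lowest-frequency coefficients."""
    return dct(update, norm="ortho")[:n_coeffs]

def filter_updates(client_updates, keep_ratio=0.7, n_coeffs=32):
    """Keep updates whose low-frequency signatures lie closest to the
    coordinate-wise median signature; return indices of kept clients."""
    sigs = np.stack([low_freq_signature(u, n_coeffs) for u in client_updates])
    median_sig = np.median(sigs, axis=0)
    dists = np.linalg.norm(sigs - median_sig, axis=1)
    n_keep = max(1, int(keep_ratio * len(client_updates)))
    return np.argsort(dists)[:n_keep]

# Usage: aggregate only the kept updates (plain FedAvg over the kept set).
rng = np.random.default_rng(1)
benign = [rng.normal(0, 0.01, 1000) for _ in range(8)]
poisoned = [rng.normal(0.5, 0.5, 1000) for _ in range(2)]
updates = np.stack(benign + poisoned)
kept = filter_updates(updates)
aggregated = updates[kept].mean(axis=0)
print("kept client indices:", kept)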
- Poison is Not Traceless: Fully-Agnostic Detection of Poisoning Attacks [4.064462548421468]
This paper presents a novel fully-agnostic framework, DIVA, that detects attacks by analyzing only the potentially poisoned dataset.
For evaluation purposes, DIVA is tested on label-flipping attacks.
arXiv Detail & Related papers (2023-10-24T22:27:44Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses to UAPs from normal and adversarial samples.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
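
The UAP-based entry above hinges on normal and adversarial samples responding differently to a universal adversarial perturbation. The snippet below is only a rough, assumption-based illustration for a generic probabilistic classifier: the KL-divergence score, the calibration rule, and every name are mine, not the paper's method.

# Rough illustration (assumptions, not the paper's method): score an input by how
# much a fixed universal adversarial perturbation (UAP) shifts the model's
# predictive distribution, then flag scores that fall outside the range seen on clean data.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def uap_response_score(model, x, uap):
    """model: callable mapping an input array to a probability vector."""
    return kl_divergence(model(x), model(x + uap))

def calibrate_bounds(model, clean_inputs, uap, lower=1.0, upper=99.0):
    """Percentile bounds of the UAP-response score on known-clean inputs."""
    scores = [uap_response_score(model, x, uap) for x in clean_inputs]
    return np.percentile(scores, [lower, upper])

def is_suspicious(model, x, uap, bounds):
    """Flag inputs whose UAP response falls outside the clean calibration range."""
    s = uap_response_score(model, x, uap)
    return not (bounds[0] <= s <= bounds[1])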
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, Memorization Discrepancy, to explore defenses via model-level information.
By implicitly transferring changes in the data manipulation to changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
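
The Memorization Discrepancy entry above compares how data manipulation shows up in model outputs across training. The sketch below is a guess at that spirit, not the paper's definition: it measures how differently an earlier model snapshot and the current model predict on incoming samples, and flags samples with an unusually large discrepancy. The symmetric-KL measure and the z-score threshold are assumptions.

# Guess at a "memorization discrepancy" style check (details are assumptions).
import numpy as np

def discrepancy(p_old, p_new, eps=1e-12):
    """Symmetric KL between the two snapshots' predictive distributions."""
    p_old, p_new = np.clip(p_old, eps, 1.0), np.clip(p_new, eps, 1.0)
    kl = lambda p, q: np.sum(p * np.log(p / q))
    return float(kl(p_old, p_new) + kl(p_new, p_old))

def flag_batch(model_old, model_new, batch, z_thresh=3.0):
    """model_old / model_new: callables returning probability vectors.
    Flags samples whose discrepancy is more than z_thresh standard deviations
    above the batch mean."""
    scores = np.array([discrepancy(model_old(x), model_new(x)) for x in batch])
    mu, sigma = scores.mean(), scores.std() + 1e-12
    return (scores - mu) / sigma > z_thresh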
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks [18.461079157949698]
This paper addresses backdoor poisoning attacks against image classification networks.
We propose lightweight automated detection and correction techniques against poisoning attacks.
Our technique outperforms existing defenses such as NeuralCleanse and STRIP on popular benchmarks.
arXiv Detail & Related papers (2022-01-31T23:42:32Z)
- DeepPoison: Feature Transfer Based Stealthy Poisoning Attack [2.1445455835823624]
DeepPoison is a novel adversarial network consisting of one generator and two discriminators.
DeepPoison can achieve a state-of-the-art attack success rate, as high as 91.74%.
arXiv Detail & Related papers (2021-01-06T15:45:36Z)
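
The DeepPoison summary above only specifies one generator and two discriminators. The PyTorch skeleton below shows that wiring under assumed roles: one discriminator tries to separate poisoned from benign samples (stealth), the other acts as a surrogate classifier to check the poisoning objective. All module shapes, roles, and names are illustrative assumptions; losses and the training loop are omitted.

# Skeleton only (assumed roles, not the paper's implementation).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Produces a bounded perturbation to blend into a benign image."""
    def __init__(self, channels=3, budget=0.05):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh())
        self.budget = budget
    def forward(self, x):
        return torch.clamp(x + self.budget * self.net(x), 0.0, 1.0)

def make_discriminator(channels=3, out_dim=1):
    """Small CNN used both as the stealth discriminator (out_dim=1) and as the
    surrogate task classifier (out_dim = assumed number of classes)."""
    return nn.Sequential(
        nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.LazyLinear(out_dim))

G = Generator()
D_stealth = make_discriminator(out_dim=1)    # benign vs. poisoned
D_task = make_discriminator(out_dim=10)      # surrogate classifier (10 classes assumed)

x = torch.rand(4, 3, 32, 32)                 # a toy batch of benign images
poisoned = G(x)
print(poisoned.shape, D_stealth(poisoned).shape, D_task(poisoned).shape)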
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks [26.09388179354751]
We present the first study focused on poisoning attacks on online-trained autoencoder-based attack detectors.
We show that the proposed algorithms can generate poison samples that cause the target attack to go undetected by the autoencoder detector.
This finding suggests that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than in other problem domains.
arXiv Detail & Related papers (2020-02-07T12:41:28Z)