FLTracer: Accurate Poisoning Attack Provenance in Federated Learning
- URL: http://arxiv.org/abs/2310.13424v1
- Date: Fri, 20 Oct 2023 11:24:38 GMT
- Title: FLTracer: Accurate Poisoning Attack Provenance in Federated Learning
- Authors: Xinyu Zhang, Qingyu Liu, Zhongjie Ba, Yuan Hong, Tianhang Zheng, Feng
Lin, Li Lu, and Kui Ren
- Abstract summary: Federated Learning (FL) is a promising distributed learning approach that enables multiple clients to collaboratively train a shared global model.
Recent studies show that FL is vulnerable to various poisoning attacks, which can degrade the performance of global models or introduce backdoors into them.
We propose FLTracer, the first FL attack provenance framework to accurately detect various attacks and trace the attack time, objective, type, and poisoned location of updates.
- Score: 38.47921452675418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a promising distributed learning approach that
enables multiple clients to collaboratively train a shared global model.
However, recent studies show that FL is vulnerable to various poisoning
attacks, which can degrade the performance of global models or introduce
backdoors into them. In this paper, we first conduct a comprehensive study on
prior FL attacks and detection methods. The results show that all existing
detection methods are only effective against limited and specific attacks. Most
detection methods suffer from high false positives, which lead to significant
performance degradation, especially in non-independent and identically
distributed (non-IID) settings. To address these issues, we propose FLTracer,
the first FL attack provenance framework to accurately detect various attacks
and trace the attack time, objective, type, and poisoned location of updates.
Different from existing methodologies that rely solely on cross-client anomaly
detection, we propose a Kalman filter-based cross-round detection that
identifies adversaries by tracking changes in each client's behavior before
and after the attack. This makes the method resilient to data heterogeneity
and effective even in non-IID settings. To further improve detection accuracy,
we employ four novel features and capture their anomalies through joint
decisions.
Extensive evaluations show that FLTracer achieves an average true positive rate
of over $96.88\%$ at an average false positive rate of less than $2.67\%$,
significantly outperforming SOTA detection methods. \footnote{Code is available
at \url{https://github.com/Eyr3/FLTracer}.}
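As an illustration only, the cross-round idea can be sketched with a one-dimensional Kalman filter per client that tracks a scalar statistic of its updates (here, the L2 norm) across rounds and flags rounds whose innovation is abnormally large. This is a minimal reconstruction with assumed names and thresholds, not FLTracer's actual implementation, which tracks four features and combines their decisions:

```python
import numpy as np

class CrossRoundKalman:
    """1-D Kalman filter (random-walk state model) over one scalar statistic
    of a client's updates, e.g. their L2 norm, measured once per round."""

    def __init__(self, process_var=1e-3, measurement_var=1e-2):
        self.x = None               # filtered estimate of the statistic
        self.p = 1.0                # estimate variance
        self.q = process_var        # how fast honest behavior may drift
        self.r = measurement_var    # per-round measurement noise

    def step(self, z):
        """Ingest this round's measurement z; return the normalized
        innovation (deviation of z from the prediction, in std devs)."""
        if self.x is None:          # first round: just initialize
            self.x = float(z)
            return 0.0
        p_pred = self.p + self.q            # predict (state expected to stay put)
        innov = z - self.x                  # innovation: measured minus predicted
        s = p_pred + self.r                 # innovation variance
        k = p_pred / s                      # Kalman gain
        self.x += k * innov                 # update state estimate
        self.p = (1.0 - k) * p_pred         # update estimate variance
        return abs(innov) / np.sqrt(s)

def flag_suspects(norms_by_client, threshold=3.0):
    """norms_by_client: {client_id: [update norm at round 0, 1, ...]}.
    Returns {client_id: rounds whose behavior change exceeds the threshold},
    i.e. the suspected attack times."""
    suspects = {}
    for cid, norms in norms_by_client.items():
        kf = CrossRoundKalman()
        hits = [t for t, z in enumerate(norms) if kf.step(z) > threshold]
        if hits:
            suspects[cid] = hits
    return suspects
```

Tracking each client against its own history, rather than against other clients, is what gives this style of detector its robustness to non-IID data: an honest client with unusual data still behaves consistently with itself from round to round.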
Related papers
- Poisoning with A Pill: Circumventing Detection in Federated Learning [33.915489514978084]
This paper proposes a generic and attack-agnostic augmentation approach designed to enhance the effectiveness and stealthiness of existing FL poisoning attacks against detection in FL.
Specifically, we employ a three-stage methodology that strategically constructs, generates, and injects poison into a pill during FL training, with the stages termed pill construction, pill poisoning, and pill injection, respectively.
arXiv Detail & Related papers (2024-07-22T05:34:47Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
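A hedged sketch of the frequency-domain step described above; the transform choice (a DCT via SciPy) and the `keep_frac` cutoff are assumptions of this sketch, not necessarily FreqFed's exact pipeline:

```python
import numpy as np
from scipy.fft import dct

def low_freq_fingerprints(updates, keep_frac=0.1):
    """Map each flattened client update into the frequency domain with a DCT
    and keep only the low-frequency coefficients, which carry the dominant
    information about the update; the high frequencies are discarded before
    the updates are compared."""
    fingerprints = []
    for u in updates:                      # u: 1-D numpy array of parameters
        coeffs = dct(np.asarray(u, dtype=float), norm="ortho")
        k = max(1, int(keep_frac * len(coeffs)))
        fingerprints.append(coeffs[:k])    # low-frequency fingerprint
    return np.stack(fingerprints)          # one row per client
```

An aggregator in this style would then group the resulting rows and aggregate only the dominant (presumed benign) group; that decision rule is the paper's own and is not reproduced here.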
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
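A hedged sketch of the multi-metric idea: score every client update on several statistics at once and flag robust outliers, so an adaptive adversary must evade all metrics simultaneously. The metric set and thresholds below are illustrative assumptions, not MESAS's actual choices:

```python
import numpy as np

def multi_metric_filter(updates, z_thresh=2.5):
    """Flag any client whose update is a robust-z-score outlier on at least
    one of several metrics: magnitude (L2 norm), direction (cosine similarity
    to the mean update), and spread (parameter variance)."""
    U = np.stack(updates)                                  # (clients, params)
    mean_u = U.mean(axis=0)
    norms = np.linalg.norm(U, axis=1)
    metrics = np.stack([
        norms,                                                        # magnitude
        U @ mean_u / (norms * np.linalg.norm(mean_u) + 1e-12),        # direction
        U.var(axis=1),                                                # spread
    ], axis=1)
    med = np.median(metrics, axis=0)
    mad = np.median(np.abs(metrics - med), axis=0) + 1e-12
    z = np.abs(metrics - med) / (1.4826 * mad)             # robust z-scores
    return np.where((z > z_thresh).any(axis=1))[0]         # flagged clients
```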
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improved performance with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection [24.562359531692504]
We propose DifFense, an automated defense framework to protect an FL system from backdoor attacks.
Our detection method reduces the average backdoor accuracy of the global model to below 4% and achieves a false negative rate of zero.
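A sketch of the differential-testing half of this idea, under an assumed interface in which each submitted model is a callable returning integer class labels; this is not DifFense's actual API or decision rule:

```python
import numpy as np

def differential_test(client_models, input_shape, n_probes=64, seed=0):
    """Query every submitted model on the same random probe inputs and
    measure how often each model disagrees with the majority vote; models
    with a high disagreement rate are outlier candidates."""
    rng = np.random.default_rng(seed)
    probes = rng.standard_normal((n_probes, *input_shape)).astype(np.float32)
    preds = np.stack([np.asarray(m(probes), dtype=int) for m in client_models])
    # Majority label per probe, then each client's disagreement rate with it.
    majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
    return (preds != majority).mean(axis=1)   # higher = more suspicious
```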
arXiv Detail & Related papers (2022-02-21T17:13:03Z)
- BaFFLe: Backdoor detection via Feedback-based Federated Learning [3.6895394817068357]
We propose Backdoor detection via Feedback-based Federated Learning (BAFFLE).
We show that BAFFLE reliably detects state-of-the-art backdoor attacks with a detection accuracy of 100% and a false-positive rate below 5%.
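The feedback loop might be sketched as a per-round vote, with clients comparing the global model's per-class error on their private data against the previous round; the thresholds and the exact decision rule below are assumptions of this sketch, not BAFFLE's published one:

```python
import numpy as np

def feedback_vote(per_class_err_now, per_class_err_prev, delta=0.05, quorum=0.5):
    """Each validating client votes 'suspicious' if any class error on its
    local data jumped by more than delta since the previous round; the round
    is rejected when a quorum of clients votes suspicious.
    Both arguments: list (over clients) of per-class error-rate arrays."""
    votes = [
        bool(np.any(np.asarray(now) - np.asarray(prev) > delta))
        for now, prev in zip(per_class_err_now, per_class_err_prev)
    ]
    return np.mean(votes) >= quorum   # True = reject this round's global model
```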
arXiv Detail & Related papers (2020-11-04T07:44:51Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the face presentation attack detection (fPAD) task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Miss the Point: Targeted Adversarial Attack on Multiple Landmark Detection [29.83857022733448]
This paper is the first to study how fragile a CNN-based model for multiple landmark detection is to adversarial perturbations.
We propose a novel Adaptive Targeted Iterative FGSM attack against the state-of-the-art models in multiple landmark detection.
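For reference, plain targeted iterative FGSM (the baseline the paper's Adaptive variant extends) looks roughly like the following PyTorch sketch; the paper's adaptive step-size logic is omitted:

```python
import torch
import torch.nn.functional as F

def targeted_ifgsm(model, x, target_landmarks, eps=0.03, alpha=0.005, steps=10):
    """Targeted iterative FGSM: nudge the input so the model's predicted
    landmarks move toward attacker-chosen coordinates, while keeping the
    perturbation inside an L-infinity eps-ball around the clean image."""
    x0 = x.clone().detach()
    x_adv = x0.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Distance between predicted and attacker-chosen landmark coordinates.
        loss = F.mse_loss(model(x_adv), target_landmarks)
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Descend the loss so predictions move toward the target.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = x0 + (x_adv - x0).clamp(-eps, eps)   # project onto eps-ball
    return x_adv.detach()
```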
arXiv Detail & Related papers (2020-07-10T07:58:35Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks.
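A minimal sketch of median smoothing for a scalar regression output (e.g., one predicted box coordinate); the certified bounds the paper derives from order statistics are omitted here:

```python
import numpy as np

def median_smooth(f, x, sigma=0.25, n=100, seed=0):
    """Evaluate a base regressor f (any function mapping an input array to a
    scalar) on n Gaussian-perturbed copies of x and return the median output.
    The median of the smoothed regressor moves slowly under bounded input
    perturbations, which is what the certificate is built on."""
    rng = np.random.default_rng(seed)
    outputs = [f(x + sigma * rng.standard_normal(x.shape)) for _ in range(n)]
    return float(np.median(outputs))
```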
arXiv Detail & Related papers (2020-07-07T18:40:19Z)