Analyzing the vulnerabilities in SplitFed Learning: Assessing the
robustness against Data Poisoning Attacks
- URL: http://arxiv.org/abs/2307.03197v1
- Date: Tue, 4 Jul 2023 00:37:12 GMT
- Title: Analyzing the vulnerabilities in SplitFed Learning: Assessing the
robustness against Data Poisoning Attacks
- Authors: Aysha Thahsin Zahir Ismail, Raj Mani Shukla
- Abstract summary: This research is the earliest attempt to study, analyze, and present the impact of data poisoning attacks in SplitFed Learning (SFL).
We propose three kinds of novel attack strategies, namely untargeted, targeted, and distance-based attacks, for SFL.
We test the proposed attack strategies on two different case studies: Electrocardiogram signal classification and automatic handwritten digit recognition.
- Score: 0.45687771576879593
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Distributed Collaborative Machine Learning (DCML) is a potential alternative
to address the privacy concerns associated with centralized machine learning.
Split Learning (SL) and Federated Learning (FL) are two effective
learning approaches in DCML. Recently, there has been increased interest in
the hybrid of FL and SL, known as SplitFed Learning (SFL). This research is
the earliest attempt to study, analyze, and present the impact of data poisoning
attacks in SFL. We propose three kinds of novel attack strategies, namely
untargeted, targeted, and distance-based attacks, for SFL. All the attack
strategies aim to degrade the performance of the DCML-based classifier. We test
the proposed attack strategies on two case studies:
Electrocardiogram signal classification and automatic handwritten digit
recognition. A series of attack experiments were conducted by varying the
percentage of malicious clients and the choice of the model split layer between
the clients and the server. A comprehensive analysis of the attack
strategies clearly shows that untargeted and distance-based poisoning
attacks have a greater impact on degrading classifier outcomes than
targeted attacks in SFL.
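
As a rough illustration of the client-side setting described above, the sketch below simulates untargeted and targeted label flipping on a malicious client's local data before it enters the client-side model partition. The function names, flip fraction, and use of plain NumPy arrays are assumptions made for illustration; the paper's actual attack implementations (including the distance-based variant) are not reproduced here.

```python
import numpy as np

def untargeted_flip(labels, num_classes, flip_fraction=0.3, rng=None):
    """Randomly reassign a fraction of labels to some other class.

    Illustrates untargeted poisoning: the malicious client corrupts its
    local labels simply to degrade the global classifier's accuracy.
    """
    rng = rng or np.random.default_rng(0)
    poisoned = labels.copy()
    idx = rng.choice(len(poisoned), size=int(flip_fraction * len(poisoned)), replace=False)
    for i in idx:
        other = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(other)
    return poisoned

def targeted_flip(labels, source_class, target_class):
    """Relabel every sample of `source_class` as `target_class`.

    Illustrates targeted poisoning: misclassification is steered toward a
    specific class rather than degrading accuracy uniformly.
    """
    poisoned = labels.copy()
    poisoned[poisoned == source_class] = target_class
    return poisoned

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    y = rng.integers(0, 10, size=1000)                 # e.g. digit labels 0-9
    y_u = untargeted_flip(y, num_classes=10, rng=rng)
    y_t = targeted_flip(y, source_class=3, target_class=8)
    print("untargeted labels changed:", int((y != y_u).sum()))
    print("targeted labels changed:  ", int((y != y_t).sum()))
```

In an SFL experiment, such flipping would be applied only at the chosen percentage of malicious clients, with the split layer between client and server models varied independently, mirroring the experimental axes described in the abstract.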
Related papers
- DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences [9.077813103456206]
In model poisoning attacks, malicious participants aim to diminish the performance of benign models by creating and disseminating the compromised model.
This paper proposes an innovative model poisoning attack called DMPA.
It calculates the differential characteristics of multiple malicious client models and obtains the most effective poisoning strategy.
arXiv Detail & Related papers (2025-02-07T09:15:38Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
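
The summary above only says that updates are mapped into the frequency domain before aggregation. A minimal sketch of that general idea is given below, using a real FFT over a flattened update as a low-dimensional signature that a server could compare across clients; the choice of transform, the number of components kept, and the comparison step are all assumptions for illustration, not FreqFed's published algorithm.

```python
import numpy as np

def frequency_signature(update, keep=8):
    """Flatten a client update and return the magnitudes of its lowest
    `keep` frequency components (real FFT).

    Sketches the idea of inspecting/aggregating updates in the frequency
    domain rather than in the raw parameter space.
    """
    flat = np.concatenate([layer.ravel() for layer in update])
    spectrum = np.fft.rfft(flat)
    return np.abs(spectrum[:keep])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = [rng.normal(scale=0.01, size=(8, 8)), rng.normal(scale=0.01, size=8)]
    poisoned = [rng.normal(scale=1.0, size=(8, 8)), rng.normal(scale=1.0, size=8)]
    print("benign signature:  ", np.round(frequency_signature(benign), 3))
    print("poisoned signature:", np.round(frequency_signature(poisoned), 3))
```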
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective [8.941193384980147]
We focus on threat models targeting the learning process of FL systems.
Defense strategies have evolved from using a singular metric to excluding malicious clients.
Recent endeavors subtly alter the least significant weights in local models to bypass defense measures.
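
To make the last point concrete, here is a small, purely illustrative sketch that perturbs only the smallest-magnitude entries of a local model update, the kind of low-profile change that coarse, single-metric screening can miss. The selection rule, perturbation scale, and fraction are assumptions, not taken from any specific surveyed attack.

```python
import numpy as np

def perturb_least_significant(weights, fraction=0.05, scale=1e-3, rng=None):
    """Add small noise to the `fraction` of weights with the smallest magnitude."""
    rng = rng or np.random.default_rng(0)
    flat = weights.ravel().copy()
    k = max(1, int(fraction * flat.size))
    idx = np.argsort(np.abs(flat))[:k]      # indices of the least significant weights
    flat[idx] += rng.normal(scale=scale, size=k)
    return flat.reshape(weights.shape)

if __name__ == "__main__":
    w = np.random.default_rng(1).normal(size=(16, 16))
    w_poisoned = perturb_least_significant(w)
    print("largest single change:", float(np.max(np.abs(w - w_poisoned))))
    print("L2 norm of change:    ", float(np.linalg.norm(w - w_poisoned)))
```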
arXiv Detail & Related papers (2023-11-27T18:32:08Z)
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR)
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
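
That observation suggests a simple similarity score between client updates. The sketch below compares top-k and bottom-k critical-parameter index sets with a Jaccard overlap; the scoring function, the value of k, and how a server would act on the score are illustrative assumptions rather than FedCPA's actual aggregation rule.

```python
import numpy as np

def critical_sets(update, k):
    """Index sets of the k largest- and k smallest-magnitude parameters."""
    order = np.argsort(np.abs(update.ravel()))
    return set(order[-k:]), set(order[:k])

def critical_overlap(update_a, update_b, k=100):
    """Average Jaccard overlap of top-k and bottom-k critical parameter sets."""
    top_a, bot_a = critical_sets(update_a, k)
    top_b, bot_b = critical_sets(update_b, k)
    jaccard = lambda s, t: len(s & t) / len(s | t)
    return 0.5 * (jaccard(top_a, top_b) + jaccard(bot_a, bot_b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=10_000)
    benign = base + rng.normal(scale=0.05, size=base.size)    # close to the base update
    poisoned = rng.normal(size=base.size)                      # unrelated update
    print("benign vs base:  ", round(critical_overlap(base, benign), 3))
    print("poisoned vs base:", round(critical_overlap(base, poisoned), 3))
```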
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- You Can Backdoor Personalized Federated Learning [18.91908598410108]
Existing research primarily focuses on backdoor attacks and defenses within the generic federated learning scenario.
We propose a two-pronged attack method, BapFL, which comprises two simple yet effective strategies.
arXiv Detail & Related papers (2023-07-29T12:25:04Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients)
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA)
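
For intuition about how little needs to change in a deployed model, the sketch below flips a single chosen bit in the IEEE-754 binary representation of a float32 parameter. Which parameters and which bits to flip is exactly what such attacks optimize; the values picked here are arbitrary and for illustration only.

```python
import numpy as np

def flip_bit(value, bit):
    """Flip one bit (0 = least significant) of a float32 value's binary form."""
    as_uint = np.array([value], dtype=np.float32).view(np.uint32)
    as_uint ^= np.uint32(1 << bit)
    return float(as_uint.view(np.float32)[0])

if __name__ == "__main__":
    w = 0.75
    for bit in (0, 23, 30):     # mantissa LSB, exponent LSB, a high exponent bit
        print(f"flip bit {bit:2d}: {w} -> {flip_bit(w, bit)}")
```

A flip in the mantissa barely moves the value, while a flip high in the exponent changes it by orders of magnitude, which is why bit selection dominates both the effectiveness and the stealthiness of such an attack.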
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat targeted attacks in FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis [15.172954465350667]
Model poisoning attacks on federated learning (FL) intrude on the entire system by compromising an edge model.
We propose the Attacking Distance-aware Attack (ADA) to enhance a poisoning attack by finding the optimized target class in the feature space.
ADA succeeded in increasing the attack performance by 1.8 times in the most challenging case with an attacking frequency of 0.01.
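
The summary suggests the target class is chosen by proximity in feature space. A minimal sketch of such a selection step, picking the class whose feature centroid lies closest to the attacker's source class, is shown below; the centroid-distance criterion is a simplifying assumption standing in for the paper's backward error analysis.

```python
import numpy as np

def closest_target_class(features, labels, source_class):
    """Return the class whose feature centroid is nearest the source class centroid."""
    classes = np.unique(labels)
    centroids = {c: features[labels == c].mean(axis=0) for c in classes}
    src = centroids[source_class]
    distances = {c: float(np.linalg.norm(centroids[c] - src))
                 for c in classes if c != source_class}
    return min(distances, key=distances.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = np.vstack([rng.normal(loc=c, size=(50, 16)) for c in range(4)])   # 4 toy classes
    labs = np.repeat(np.arange(4), 50)
    print("closest target for class 0:", closest_target_class(feats, labs, source_class=0))
```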
arXiv Detail & Related papers (2022-03-22T11:40:07Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.