Universal Adversarial Backdoor Attacks to Fool Vertical Federated
Learning in Cloud-Edge Collaboration
- URL: http://arxiv.org/abs/2304.11432v1
- Date: Sat, 22 Apr 2023 15:31:15 GMT
- Title: Universal Adversarial Backdoor Attacks to Fool Vertical Federated
Learning in Cloud-Edge Collaboration
- Authors: Peng Chen, Xin Du, Zhihui Lu and Hongfeng Chai
- Abstract summary: This paper investigates the vulnerability of vertical federated learning (VFL) in the context of binary classification tasks.
We introduce a universal adversarial backdoor (UAB) attack to poison the predictions of VFL.
Our approach surpasses existing state-of-the-art methods, achieving up to 100% backdoor task performance.
- Score: 13.067285306737675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical federated learning (VFL) is a cloud-edge collaboration paradigm that
enables edge nodes, comprising resource-constrained Internet of Things (IoT)
devices, to cooperatively train artificial intelligence (AI) models while
retaining their data locally. This paradigm facilitates improved privacy and
security for edges and IoT devices, making VFL an essential component of
Artificial Intelligence of Things (AIoT) systems. Nevertheless, the partitioned
structure of VFL can be exploited by adversaries to inject a backdoor, enabling
them to manipulate the VFL predictions. In this paper, we aim to investigate
the vulnerability of VFL in the context of binary classification tasks. To this
end, we define a threat model for backdoor attacks in VFL and introduce a
universal adversarial backdoor (UAB) attack to poison the predictions of VFL.
The UAB attack, consisting of universal trigger generation and clean-label
backdoor injection, is incorporated during the VFL training at specific
iterations. This is achieved by alternately optimizing the universal trigger
and model parameters of VFL sub-problems. Our work distinguishes itself from
existing studies on designing backdoor attacks for VFL, as those require the
knowledge of auxiliary information not accessible within the split VFL
architecture. In contrast, our approach does not necessitate any additional
data to execute the attack. On the LendingClub and Zhongyuan datasets, our
approach surpasses existing state-of-the-art methods, achieving up to 100%
backdoor task performance while maintaining the main task performance. Our
results mark a major advance in revealing the hidden backdoor risks of VFL,
paving the way for the future development of secure AIoT.
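To make the alternating optimization concrete, here is a minimal PyTorch sketch of the two sub-problems as the abstract describes them; the two-party split, layer widths, trigger budget, and five-step schedule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_adv, d_ben, d_emb = 8, 8, 4            # feature/embedding widths (assumed)
bottom_adv = nn.Linear(d_adv, d_emb)     # adversary's bottom model
bottom_ben = nn.Linear(d_ben, d_emb)     # benign party's bottom model
top = nn.Sequential(nn.Linear(2 * d_emb, 16), nn.ReLU(), nn.Linear(16, 2))

opt = torch.optim.SGD(
    list(bottom_adv.parameters()) + list(bottom_ben.parameters())
    + list(top.parameters()),
    lr=0.1,
)
loss_fn = nn.CrossEntropyLoss()

delta = torch.zeros(d_adv, requires_grad=True)   # universal trigger
trigger_opt = torch.optim.SGD([delta], lr=0.5)
target_label = 1                                 # attacker's target class

for step in range(200):
    x_adv, x_ben = torch.randn(32, d_adv), torch.randn(32, d_ben)
    y = torch.randint(0, 2, (32,))

    # (a) Clean-label injection folded into normal VFL training: the
    # trigger is stamped only on samples whose TRUE label is already the
    # target, so no label is ever flipped.
    x_in = x_adv.clone()
    x_in[y == target_label] += delta.detach()
    logits = top(torch.cat([bottom_adv(x_in), bottom_ben(x_ben)], dim=1))
    opt.zero_grad(); loss_fn(logits, y).backward(); opt.step()

    # (b) Universal trigger generation at specific iterations: optimize
    # delta so that triggered inputs are pushed toward the target class.
    if step % 5 == 0:
        logits_t = top(torch.cat([bottom_adv(x_adv + delta),
                                  bottom_ben(x_ben)], dim=1))
        atk_loss = loss_fn(logits_t, torch.full((32,), target_label))
        trigger_opt.zero_grad(); atk_loss.backward(); trigger_opt.step()
        with torch.no_grad():
            delta.clamp_(-0.5, 0.5)              # keep the trigger small
```

Note that no labels are modified and no auxiliary data is used, consistent with the threat model stated in the abstract.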
Related papers
- VFLIP: A Backdoor Defense for Vertical Federated Learning via Identification and Purification [2.598981024199416]
We present the first backdoor defense, called VFLIP, specialized for Vertical Federated Learning (VFL).
VFLIP employs identification and purification techniques that operate at the inference stage, substantially improving robustness against backdoor attacks.
We conduct extensive experiments on CIFAR10, CINIC10, Imagenette, NUS-WIDE, and BankMarketing to demonstrate that VFLIP can effectively mitigate backdoor attacks in VFL.
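One way to picture inference-stage identification and purification (a simplified stand-in; the abstract does not detail VFLIP's actual mechanism) is to score each incoming embedding against clean validation statistics and replace outliers before the top model aggregates:

```python
import numpy as np

def fit_stats(clean_embs):
    """clean_embs: (n, d) embeddings from one party on clean inputs."""
    return clean_embs.mean(0), clean_embs.std(0) + 1e-8

def purify(emb, mu, sd, z_thresh=3.0):
    """Identify embeddings far from clean statistics and purify them."""
    z = np.abs((emb - mu) / sd).mean(axis=1)   # per-sample anomaly score
    flagged = z > z_thresh
    emb = emb.copy()
    emb[flagged] = mu                          # replace with a clean stand-in
    return emb, flagged

rng = np.random.default_rng(0)
mu, sd = fit_stats(rng.normal(0.0, 1.0, (1000, 4)))
test = rng.normal(0.0, 1.0, (5, 4))
test[0] += 10.0                                # a triggered embedding
purified, flagged = purify(test, mu, sd)
print(flagged)                                 # only the first entry flagged
```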
arXiv Detail & Related papers (2024-08-28T07:31:32Z)
- UIFV: Data Reconstruction Attack in Vertical Federated Learning [5.404398887781436]
Vertical Federated Learning (VFL) facilitates collaborative machine learning without the need for participants to share raw private data.
Recent studies have revealed privacy risks where adversaries might reconstruct sensitive features through data leakage during the learning process.
Our work exposes severe privacy vulnerabilities within VFL systems that pose real threats to practical VFL applications.
arXiv Detail & Related papers (2024-06-18T13:18:52Z)
- Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning [31.386836775526685]
We propose PFedBA, a stealthy and effective backdoor attack strategy applicable to PFL systems.
Our study sheds light on the subtle yet potent backdoor threats to PFL systems, urging the community to bolster defenses against emerging backdoor challenges.
arXiv Detail & Related papers (2024-06-10T12:14:05Z)
- BadVFL: Backdoor Attacks in Vertical Federated Learning [22.71527711053385]
Federated learning (FL) enables multiple parties to collaboratively train a machine learning model without sharing their data.
In this paper, we focus on robustness in VFL, in particular, on backdoor attacks.
We present a first-of-its-kind clean-label backdoor attack in VFL, which consists of two phases: a label inference phase and a backdoor phase.
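A hedged sketch of that two-phase structure: labels are first inferred from the gradients the server returns (here with a simple 2-means clustering heuristic, one plausible realization), and the trigger is then planted only in samples inferred to carry the target label, preserving the clean-label property:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in gradients w.r.t. the attacker's embeddings: in binary tasks the
# gradient returned by the server often separates by class.
grads = np.vstack([rng.normal(+1.0, 0.3, (50, 4)),   # class-0-like
                   rng.normal(-1.0, 0.3, (50, 4))])  # class-1-like

# Phase 1: label inference via 2-means clustering of the gradients.
centers = grads[[0, -1]]                             # deterministic init
for _ in range(10):
    assign = np.argmin(((grads[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([grads[assign == k].mean(0) for k in (0, 1)])

# Phase 2: backdoor injection on the attacker's feature slice, stamping
# the trigger only where the inferred label equals the target class.
features = rng.normal(0.0, 1.0, (100, 8))
trigger = np.zeros(8); trigger[:2] = 0.8             # assumed trigger pattern
features[assign == 1] += trigger                     # labels stay untouched
```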
arXiv Detail & Related papers (2023-04-18T09:22:32Z)
- Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks [53.81129518924231]
We conduct the first study of backdoor attacks in the pFL framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks.
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
arXiv Detail & Related papers (2023-02-03T11:58:14Z)
- Backdoor Attacks in Peer-to-Peer Federated Learning [11.235386862864397]
Peer-to-Peer Federated Learning (P2PFL) offers advantages in terms of both privacy and reliability.
We propose new backdoor attacks for P2PFL that leverage structural graph properties to select the malicious nodes, and achieve high attack success.
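The selection step can be pictured with a toy overlay graph; degree centrality below is just one example of a structural graph property an attacker might rank peers by (the paper's exact criterion may differ):

```python
import networkx as nx

graph = nx.erdos_renyi_graph(n=30, p=0.15, seed=0)    # stand-in P2P overlay
centrality = nx.degree_centrality(graph)              # one structural property

budget = 3                                            # compromised-node budget
malicious = sorted(centrality, key=centrality.get, reverse=True)[:budget]
print("compromise peers:", malicious)                 # best-connected peers
```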
arXiv Detail & Related papers (2023-01-23T21:49:28Z)
- Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability, which is inseparably scalable.
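The zeroth-order ingredient can be sketched with the classic two-point gradient estimator, which needs only loss values and is what makes such a framework black-box; the toy objective and step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):                              # stand-in black-box objective
    return ((w - 1.0) ** 2).sum()

def zo_grad(f, w, mu=1e-4, samples=20):
    """Two-point zeroth-order estimate of the gradient of f at w."""
    g = np.zeros_like(w)
    for _ in range(samples):
        u = rng.standard_normal(w.shape)
        g += (f(w + mu * u) - f(w)) / mu * u
    return g / samples

w = np.zeros(5)
for _ in range(300):
    w -= 0.05 * zo_grad(loss, w)          # descend using loss values only
print(np.round(w, 2))                     # approaches the all-ones minimizer
```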
arXiv Detail & Related papers (2022-03-19T13:55:47Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
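The robustness lever in RoFL is a bound on the norm of each client update, enforced so that it also holds under secure aggregation; the plaintext norm rule itself is simple (the cryptographic enforcement is the paper's contribution and is not sketched here):

```python
import numpy as np

def within_bound(update, l2_bound=1.0):
    """The norm rule a client update must satisfy to be accepted."""
    return np.linalg.norm(update) <= l2_bound

updates = [np.random.default_rng(i).normal(0, 0.05, 100) for i in range(5)]
updates.append(np.ones(100) * 5.0)        # an oversized, suspicious update

accepted = [u for u in updates if within_bound(u)]
print(f"accepted {len(accepted)} of {len(updates)} updates")
```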
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
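The clip-then-smooth step on the global parameters can be sketched directly; the threshold and noise scale below are arbitrary illustrative values, and the certification argument built on top of them is in the paper:

```python
import numpy as np

def clip_and_smooth(params, clip_norm=1.0, sigma=0.01, rng=None):
    """Clip the global parameter norm, then add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(params)
    if norm > clip_norm:                  # project onto the norm ball
        params = params * (clip_norm / norm)
    return params + rng.normal(0.0, sigma, params.shape)

theta = np.random.default_rng(1).normal(0.0, 1.0, 10)
print(np.linalg.norm(clip_and_smooth(theta)))   # ~ clip_norm plus small noise
```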
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training-time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with the secure aggregation protocol but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected to a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.