Practical and General Backdoor Attacks against Vertical Federated
Learning
- URL: http://arxiv.org/abs/2306.10746v1
- Date: Mon, 19 Jun 2023 07:30:01 GMT
- Title: Practical and General Backdoor Attacks against Vertical Federated
Learning
- Authors: Yuexin Xuan, Xiaojun Chen, Zhendong Zhao, Bisheng Tang, Ye Dong
- Abstract summary: Federated learning (FL) aims to facilitate data collaboration across multiple organizations without exposing data privacy.
BadVFL is a novel and practical approach to inject backdoor triggers into victim models without label information.
BadVFL achieves over 93% attack success rate with only 1% poisoning rate.
- Score: 3.587415228422117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL), which aims to facilitate data collaboration across
multiple organizations without compromising data privacy, encounters potential
security risks. One serious threat is backdoor attacks, where an attacker
injects a specific trigger into the training dataset to manipulate the model's
prediction. Most existing FL backdoor attacks are based on horizontal federated
learning (HFL), where the data owned by different parties have the same
features. However, compared to HFL, backdoor attacks on vertical federated
learning (VFL), where each party only holds a disjoint subset of features and
the labels are only owned by one party, are rarely studied. The main challenge
of this attack is to allow an attacker without access to the data labels to
perform an effective attack. To this end, we propose BadVFL, a novel and
practical approach to inject backdoor triggers into victim models without label
information. BadVFL mainly consists of two key steps. First, to address the
challenge of attackers having no knowledge of labels, we introduce an SDD module
that can trace data categories based on gradients. Second, we propose an SDP
module that can improve the attack's effectiveness by enhancing the decision
dependency between the trigger and attack target. Extensive experiments show
that BadVFL supports diverse datasets and models, and achieves over 93% attack
success rate with only 1% poisoning rate.
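To make the first step concrete, below is a minimal, hypothetical sketch of a gradient-similarity heuristic in the spirit of the SDD module: in VFL the active party returns per-sample gradients with respect to the attacker's uploaded embeddings, and samples whose gradients point in similar directions plausibly share a label. The function name, cosine-similarity measure, and threshold are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def same_label_candidates(grads: torch.Tensor, ref_idx: int,
                          threshold: float = 0.9) -> torch.Tensor:
    """Rank samples by cosine similarity between their embedding gradients
    and a reference sample's gradient; high similarity suggests the active
    party assigned them the same label."""
    ref = grads[ref_idx:ref_idx + 1]                 # (1, d) reference gradient
    scores = F.cosine_similarity(grads, ref, dim=1)  # (batch,) similarities
    return (scores > threshold).nonzero().squeeze(1)

# grads: per-sample gradients w.r.t. the attacker's embeddings, as returned
# by the active party in VFL (random stand-in data here)
grads = torch.randn(128, 16)
print(same_label_candidates(grads, ref_idx=0))
```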
Related papers
- Bad-PFL: Exploring Backdoor Attacks against Personalized Federated Learning [22.074601909696298]
Personalized federated learning (PFL) lets each client maintain a private personalized model tailored to client-specific knowledge.
Bad-PFL employs features from natural data as its trigger, ensuring the trigger's longevity in personalized models (a blended-trigger sketch follows this entry).
The large-scale experiments across three benchmark datasets demonstrate the superior performance of our attack against various PFL methods.
arXiv Detail & Related papers (2025-01-22T09:12:16Z) - Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning [22.076364118223324]
- Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning [22.076364118223324]
We propose a novel backdoor attack on vertical federated learning (VFL).
Our label inference model augments variational autoencoders with metric learning, which adversaries can train locally (a sketch follows this entry).
Our convergence analysis reveals the impact of backdoor perturbations on VFL, indicated by a stationarity gap for the trained model.
arXiv Detail & Related papers (2025-01-16T06:22:35Z) - Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning [83.90283731845867]
- Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning [83.90283731845867]
We consider feature reconstruction attacks, a common risk aimed at compromising input data.
We show that models with a simple architectural transformation are resistant to state-of-the-art feature reconstruction attacks.
arXiv Detail & Related papers (2024-12-16T12:02:12Z) - Does Few-shot Learning Suffer from Backdoor Attacks? [63.9864247424967]
We show that few-shot learning can still be vulnerable to backdoor attacks.
Our method achieves a high attack success rate (ASR) across different few-shot learning paradigms.
This study reveals that few-shot learning still suffers from backdoor attacks, and its security should be given attention.
arXiv Detail & Related papers (2023-12-31T06:43:36Z) - One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training [54.622474306336635]
The bit-flip attack (BFA) is a weight modification attack that exploits memory fault injection techniques (a one-bit flip sketch follows this entry).
We propose a training-assisted bit-flip attack, in which the adversary is involved in the training stage to build and release a high-risk model.
arXiv Detail & Related papers (2023-08-12T09:34:43Z) - DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning [14.312593000209693]
- DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning [14.312593000209693]
Federated learning (FL) attempts to train a global model by aggregating local models from distributed devices under the coordination of a central server.
The existence of a large number of heterogeneous devices makes FL vulnerable to various attacks, especially stealthy backdoor attacks.
We propose a new attack model for FL, namely Data-Agnostic Backdoor attack at the Server (DABS), where the server directly modifies the global model to backdoor an FL system.
arXiv Detail & Related papers (2023-05-02T09:04:34Z) - BadVFL: Backdoor Attacks in Vertical Federated Learning [22.71527711053385]
Federated learning (FL) enables multiple parties to collaboratively train a machine learning model without sharing their data.
In this paper, we focus on robustness in VFL, in particular, on backdoor attacks.
We present a first-of-its-kind clean-label backdoor attack in VFL, which consists of two phases: a label inference phase and a backdoor phase.
arXiv Detail & Related papers (2023-04-18T09:22:32Z) - Revisiting Personalized Federated Learning: Robustness Against Backdoor
Attacks [53.81129518924231]
We conduct the first study of backdoor attacks in the pFL framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks.
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
arXiv Detail & Related papers (2023-02-03T11:58:14Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that it achieves improvement with guaranteed robustness (a reverse-engineering sketch follows this entry).
Comparisons against eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Defending Label Inference and Backdoor Attacks in Vertical Federated
Learning [11.319694528089773]
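FLIP's actual method is not reproduced here; below is a minimal, hypothetical sketch of generic trigger reverse engineering in the Neural-Cleanse style that the summary hints at: optimize a small mask and pattern that force clean inputs toward a target class. The stand-in model, shapes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, images, target, steps=100, lam=1e-3):
    """Optimize a mask + pattern that pushes clean images toward `target`
    while keeping the mask small (an abnormally small mask for one class
    suggests a planted backdoor)."""
    mask = torch.zeros(1, 1, *images.shape[2:], requires_grad=True)
    pattern = torch.zeros(1, *images.shape[1:], requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=0.05)
    labels = torch.full((len(images),), target)
    for _ in range(steps):
        m = torch.sigmoid(mask)
        stamped = (1 - m) * images + m * torch.sigmoid(pattern)
        loss = F.cross_entropy(model(stamped), labels) + lam * m.abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()

# toy usage with a stand-in model
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
mask, pattern = reverse_engineer_trigger(model, torch.rand(16, 3, 32, 32), target=0)
```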
- Defending Label Inference and Backdoor Attacks in Vertical Federated Learning [11.319694528089773]
In collaborative learning, honest-but-curious parties may attempt to infer other parties' private data through inference attacks.
In this paper, we show that private labels can be reconstructed from per-sample gradients.
We introduce a novel defense technique termed confusional autoencoder (CoAE), based on autoencoders and entropy regularization (a sketch follows this entry).
arXiv Detail & Related papers (2021-12-10T09:32:09Z) - Meta Federated Learning [57.52103907134841]
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training-time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.