ADI: Adversarial Dominating Inputs in Vertical Federated Learning Systems
- URL: http://arxiv.org/abs/2201.02775v3
- Date: Tue, 11 Apr 2023 21:48:44 GMT
- Title: ADI: Adversarial Dominating Inputs in Vertical Federated Learning Systems
- Authors: Qi Pang, Yuanyuan Yuan, Shuai Wang, Wenting Zheng
- Abstract summary: We find that certain inputs of a participant, named adversarial dominating inputs (ADIs), can dominate the joint inference towards the direction of the adversary's will.
We propose gradient-based methods to synthesize ADIs of various formats and exploit common VFL systems.
Our study reveals new VFL attack opportunities, promoting the identification of unknown threats before breaches and building more secure VFL systems.
- Score: 13.081925156083821
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vertical federated learning (VFL) has recently become prominent as a concept for processing data distributed across many individual sources without the need to centralize it. Multiple participants collaboratively train models based on their local data in a privacy-aware manner. To date, VFL has become a de facto solution for securely learning a model among organizations, allowing knowledge to be shared without compromising the privacy of any individual. Despite the prosperous development of VFL systems, we find that certain inputs of a participant, named adversarial dominating inputs (ADIs), can dominate the joint inference towards the direction of the adversary's will and force other (victim) participants to make negligible contributions, forfeiting the rewards that are usually offered in proportion to the importance of their contributions in federated learning scenarios. We conduct a systematic study on ADIs by first proving their existence in typical VFL systems. We then propose gradient-based methods to synthesize ADIs of various formats and exploit common VFL systems. We further launch greybox fuzz testing, guided by the saliency scores of "victim" participants, to perturb adversary-controlled inputs and systematically explore the VFL attack surface in a privacy-preserving manner. We also conduct an in-depth study of the influence of critical parameters and settings in synthesizing ADIs. Our study reveals new VFL attack opportunities, promoting the identification of unknown threats before breaches and building more secure VFL systems.
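
The gradient-based synthesis the abstract describes is easy to picture in a standard two-party split-model VFL layout. The following Python/PyTorch sketch is illustrative only: the bottom/top model shapes, the joint_inference and synthesize_adi helpers, and the dominance objective (forcing one target class across a whole batch of victim inputs) are assumptions made for exposition, not the paper's exact formulation.

    # Minimal two-party VFL layout (illustrative): each participant runs a
    # "bottom" model over its own feature slice; a "top" model fuses them.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    adv_bottom = nn.Sequential(nn.Linear(10, 8), nn.ReLU())     # adversary's side
    victim_bottom = nn.Sequential(nn.Linear(10, 8), nn.ReLU())  # victim's side
    top_model = nn.Linear(16, 2)                                # joint classifier

    def joint_inference(x_adv, x_victim):
        """Joint forward pass over both participants' feature slices."""
        h = torch.cat([adv_bottom(x_adv), victim_bottom(x_victim)], dim=-1)
        return top_model(h)

    def synthesize_adi(x0, victim_samples, target_class, steps=200, lr=0.05):
        """Gradient-based ADI synthesis (assumed objective): optimize the
        adversary's input so the joint prediction is the target class for
        every victim input in the batch -- the 'dominating' property."""
        x = x0.clone().requires_grad_(True)
        opt = torch.optim.Adam([x], lr=lr)
        labels = torch.full((len(victim_samples),), target_class, dtype=torch.long)
        for _ in range(steps):
            logits = joint_inference(x.expand(len(victim_samples), -1), victim_samples)
            loss = nn.functional.cross_entropy(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()  # only x is updated; the trained models stay frozen
        return x.detach()

    victim_batch = torch.randn(64, 10)  # stand-in for unseen victim-side inputs
    adi = synthesize_adi(torch.randn(10), victim_batch, target_class=1)
    preds = joint_inference(adi.expand(64, -1), victim_batch).argmax(dim=-1)
    print(f"fraction forced to target class: {(preds == 1).float().mean():.2f}")

Note that only the adversary's feature vector receives optimizer updates; the trained models are held fixed, mirroring an inference-time attack rather than training-time poisoning.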
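The saliency-guided greybox fuzzing step can be sketched in the same style: mutate the adversary-controlled input and keep mutants that lower a saliency score of the victim's features, so the joint prediction depends less and less on the victim. This is again a hypothetical sketch, reusing joint_inference and victim_batch from the snippet above; the gradient-magnitude saliency and the accept/reject loop are assumed stand-ins for the paper's actual fuzzing feedback.

    # Greybox fuzzing guided by a victim-side saliency score (assumed signal),
    # reusing joint_inference and victim_batch defined in the previous sketch.
    import torch

    def victim_saliency(x_adv, victim_samples):
        """Mean gradient magnitude of the top prediction w.r.t. victim
        features; a low score means the victim barely moves the output."""
        x_v = victim_samples.clone().requires_grad_(True)
        logits = joint_inference(x_adv.expand(len(x_v), -1), x_v)
        logits.max(dim=-1).values.sum().backward()
        return x_v.grad.abs().mean().item()

    def fuzz_adi(seed, victim_samples, rounds=500, sigma=0.1):
        """Mutation loop: keep a mutant whenever it lowers the victim's
        saliency, i.e. drifts toward a dominating input."""
        best = seed.clone()
        best_score = victim_saliency(best, victim_samples)
        for _ in range(rounds):
            mutant = best + sigma * torch.randn_like(best)
            score = victim_saliency(mutant, victim_samples)
            if score < best_score:
                best, best_score = mutant, score
        return best, best_score

    candidate, score = fuzz_adi(torch.randn(10), victim_batch)
    print(f"victim saliency after fuzzing: {score:.4f}")

The fuzzer needs only black-box access to the adversary's own input and the saliency feedback, which is consistent with the abstract's claim of exploring the attack surface in a privacy-preserving manner.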
Related papers
- Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z)
- A Survey of Privacy Threats and Defense in Vertical Federated Learning: From Model Life Cycle Perspective [31.19776505014808]
We conduct the first comprehensive survey of the state-of-the-art in privacy attacks and defenses in Vertical Federated Learning.
We provide taxonomies for both attacks and defenses, based on their characterizations, and discuss open challenges and future research directions.
arXiv Detail & Related papers (2024-02-06T04:22:44Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2023-04-22T15:31:15Z)
- Universal Adversarial Backdoor Attacks to Fool Vertical Federated Learning in Cloud-Edge Collaboration [13.067285306737675]
This paper investigates the vulnerability of vertical federated learning (VFL) in the context of binary classification tasks.
We introduce a universal adversarial backdoor (UAB) attack to poison the predictions of VFL.
Our approach surpasses existing state-of-the-art methods, achieving up to 100% backdoor task performance.
arXiv Detail & Related papers (2023-04-18T09:22:32Z)
- BadVFL: Backdoor Attacks in Vertical Federated Learning [22.71527711053385]
Federated learning (FL) enables multiple parties to collaboratively train a machine learning model without sharing their data.
In this paper, we focus on robustness in VFL, in particular, on backdoor attacks.
We present a first-of-its-kind clean-label backdoor attack in VFL, which consists of two phases: a label inference phase and a backdoor phase.
arXiv Detail & Related papers (2023-02-21T12:52:12Z)
- A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
arXiv Detail & Related papers (2022-11-23T10:00:06Z)
- Vertical Federated Learning: Concepts, Advances and Challenges [18.38260017835129]
We review the concept and algorithms of Vertical Federated Learning (VFL).
We provide an exhaustive categorization for VFL settings and privacy-preserving protocols.
We propose a unified framework, termed VFLow, which considers the VFL problem under communication, computation, privacy, as well as effectiveness and fairness constraints.
arXiv Detail & Related papers (2022-02-09T06:56:41Z)
- Vertical Federated Learning: Challenges, Methodologies and Experiments [34.4865409422585]
Vertical federated learning (VFL) is capable of constructing a hyper ML model by embracing sub-models from different clients.
In this paper, we discuss key challenges in VFL with effective solutions, and conduct experiments on real-life datasets.
arXiv Detail & Related papers (2022-02-09T06:56:41Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.