Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey
- URL: http://arxiv.org/abs/2405.17495v2
- Date: Tue, 4 Jun 2024 13:04:53 GMT
- Title: Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey
- Authors: Mang Ye, Wei Shen, Bo Du, Eduard Snezhko, Vassili Kovalev, Pong C. Yuen
- Abstract summary: Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
- Score: 67.48187503803847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm where different parties collaboratively learn models using partitioned features of shared samples, without leaking private data. Recent research has shown promising results addressing various challenges in VFL, highlighting its potential for practical applications in cross-domain collaboration. However, the corresponding research is scattered and lacks organization. To advance VFL research, this survey offers a systematic overview of recent developments. First, we provide a history and background introduction, along with a summary of the general training protocol of VFL. We then revisit the taxonomy in recent reviews and analyze limitations in-depth. For a comprehensive and structured discussion, we synthesize recent research from three fundamental perspectives: effectiveness, security, and applicability. Finally, we discuss several critical future research directions in VFL, which will facilitate the developments in this field. We provide a collection of research lists and periodically update them at https://github.com/shentt67/VFL_Survey.
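To make the general training protocol the survey refers to more concrete, below is a minimal, illustrative sketch of one round of "split" VFL in PyTorch: each party keeps its private feature block and a local bottom model, only intermediate embeddings (and their gradients) would cross party boundaries, and the label-holding active party trains a top model on the concatenated embeddings. The class names, dimensions, and single-process setup are assumptions made for this sketch, not the survey's protocol; a real deployment would exchange embeddings and gradients over a secure channel, often with encryption.

```python
# Illustrative sketch of one split-VFL training round (assumed setup, not the
# survey's exact protocol): two passive parties hold disjoint feature blocks of
# the same samples; the active party holds the labels and the top model.
import torch
import torch.nn as nn

class PassiveParty:
    """Holds a private feature block; shares only intermediate embeddings."""
    def __init__(self, in_dim, emb_dim):
        self.bottom = nn.Linear(in_dim, emb_dim)

    def forward(self, x_local):
        # Raw features never leave the party; only the embedding is sent.
        return self.bottom(x_local)

class ActiveParty:
    """Holds labels and the top model; aggregates embeddings, computes loss."""
    def __init__(self, emb_dim, n_parties, n_classes):
        self.top = nn.Linear(emb_dim * n_parties, n_classes)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, embeddings, y):
        logits = self.top(torch.cat(embeddings, dim=1))
        return self.loss_fn(logits, y)

# Toy data: 8 shared samples, features vertically partitioned across 2 parties.
x_a, x_b = torch.randn(8, 5), torch.randn(8, 3)
y = torch.randint(0, 2, (8,))

party_a, party_b = PassiveParty(5, 4), PassiveParty(3, 4)
active = ActiveParty(emb_dim=4, n_parties=2, n_classes=2)

params = (list(party_a.bottom.parameters())
          + list(party_b.bottom.parameters())
          + list(active.top.parameters()))
opt = torch.optim.SGD(params, lr=0.1)

# One training round: embeddings flow forward, embedding gradients flow back.
opt.zero_grad()
loss = active.forward([party_a.forward(x_a), party_b.forward(x_b)], y)
loss.backward()  # in practice only the embedding gradients cross parties
opt.step()
print(f"loss after one round: {loss.item():.4f}")
```

In this sketch the active party holds only the labels and top model for brevity; in many real VFL settings it also contributes its own feature block through a bottom model of its own.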
Related papers
- A Survey on Contribution Evaluation in Vertical Federated Learning [26.32678862011122]
Vertical Federated Learning (VFL) has emerged as a critical approach in machine learning to address privacy concerns.
This paper provides a review of contribution evaluation in VFL.
We explore various tasks in VFL that involve contribution evaluation and analyze their required evaluation properties.
arXiv Detail & Related papers (2024-05-03T06:32:07Z)
- A Survey of Privacy Threats and Defense in Vertical Federated Learning: From Model Life Cycle Perspective [31.19776505014808]
We conduct the first comprehensive survey of the state-of-the-art in privacy attacks and defenses in Vertical Federated Learning.
We provide taxonomies for both attacks and defenses, based on their characterizations, and discuss open challenges and future research directions.
arXiv Detail & Related papers (2024-02-06T04:22:44Z)
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning (CL) aims to overcome the catastrophic forgetting of former knowledge when learning new knowledge.
This paper presents a comprehensive survey of the latest advancements in pre-trained model (PTM)-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- Federated Learning for Generalization, Robustness, Fairness: A Survey and Benchmark [55.898771405172155]
Federated learning has emerged as a promising paradigm for privacy-preserving collaboration among different parties.
We provide a systematic overview of important and recent developments in federated learning research.
arXiv Detail & Related papers (2023-11-12T06:32:30Z)
- A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- VFLAIR: A Research Library and Benchmark for Vertical Federated Learning [14.878602173713686]
Vertical Federated Learning (VFL) has emerged as a collaborative training paradigm that allows participants with different features of the same group of users to accomplish cooperative training without exposing their raw data or model parameters.
VFL has gained significant attention for its research potential and real-world applications in recent years, but still faces substantial challenges, such as defending against various kinds of data inference and backdoor attacks.
We present an extensible and lightweight VFL framework, VFLAIR, which supports VFL training with a variety of models, datasets, and protocols, along with standardized modules for comprehensive evaluation of attacks and defense strategies.
arXiv Detail & Related papers (2023-10-15T13:18:31Z)
- Heterogeneous Federated Learning: State-of-the-art and Research Challenges [117.77132819796105]
Heterogeneous Federated Learning (HFL) is much more challenging, and corresponding solutions are diverse and complex.
New advances in HFL are reviewed and a new taxonomy of existing HFL methods is proposed.
Several critical and promising future research directions in HFL are discussed.
arXiv Detail & Related papers (2023-07-20T06:32:14Z)
- A Survey on Vertical Federated Learning: From a Layered Perspective [21.639062199459925]
In this paper, we investigate current work on vertical federated learning (VFL) from a layered perspective.
We design a novel MOSP tree taxonomy to analyze the core component of VFL, i.e., the secure vertical federated machine learning algorithm.
Our taxonomy considers four dimensions, i.e., machine learning model (M), protection object (O), security model (S), and privacy-preserving protocol (P).
arXiv Detail & Related papers (2023-04-04T14:33:30Z)
- Vertical Federated Learning: Concepts, Advances and Challenges [18.38260017835129]
We review the concepts and algorithms of Vertical Federated Learning (VFL).
We provide an exhaustive categorization of VFL settings and privacy-preserving protocols.
We propose a unified framework, termed VFLow, which considers the VFL problem under communication, computation, privacy, effectiveness, and fairness constraints.
arXiv Detail & Related papers (2022-11-23T10:00:06Z)
- FedNLP: A Research Platform for Federated Learning in Natural Language Processing [55.01246123092445]
We present FedNLP, a research platform for federated learning in NLP.
FedNLP supports various popular task formulations in NLP, such as text classification, sequence tagging, question answering, seq2seq generation, and language modeling.
Preliminary experiments with FedNLP reveal that there exists a large performance gap between learning on decentralized and centralized datasets.
arXiv Detail & Related papers (2021-04-18T11:04:49Z)