Vertical Federated Learning: A Structured Literature Review
- URL: http://arxiv.org/abs/2212.00622v2
- Date: Sun, 9 Apr 2023 14:13:23 GMT
- Title: Vertical Federated Learning: A Structured Literature Review
- Authors: Afsana Khan, Marijn ten Thij, Anna Wilbik
- Abstract summary: Federated learning (FL) has emerged as a promising distributed learning paradigm with an added advantage of data privacy.
In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) has emerged as a promising distributed learning
paradigm with an added advantage of data privacy. With the growing interest in
having collaboration among data owners, FL has gained significant attention of
organizations. The idea of FL is to enable collaborating participants train
machine learning (ML) models on decentralized data without breaching privacy.
In simpler words, federated learning is the approach of ``bringing the model to
the data, instead of bringing the data to the mode''. Federated learning, when
applied to data which is partitioned vertically across participants, is able to
build a complete ML model by combining local models trained only using the data
with distinct features at the local sites. This architecture of FL is referred
to as vertical federated learning (VFL), which differs from the conventional FL
on horizontally partitioned data. As VFL is different from conventional FL, it
comes with its own issues and challenges. In this paper, we present a
structured literature review discussing the state-of-the-art approaches in VFL.
Additionally, the literature review highlights the existing solutions to
challenges in VFL and provides potential research directions in this domain.
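To make the VFL idea in the abstract concrete, here is a minimal numerical sketch (not taken from the paper; the two parties, feature dimensions, and the plain logistic sub-models are assumptions for illustration). Two parties hold disjoint feature sets for the same samples; each keeps a local sub-model and exchanges only partial scores and an error signal with the label holder, never its raw data.

```python
# Minimal sketch of vertically partitioned training: Party A and Party B hold
# different features for the same samples; a coordinator holding the labels
# combines their partial scores. All names and sizes here are illustrative
# assumptions; real VFL systems add entity alignment and encryption on top.
import numpy as np

rng = np.random.default_rng(0)
n = 100
X_a = rng.normal(size=(n, 3))                    # Party A: 3 local features
X_b = rng.normal(size=(n, 2))                    # Party B: 2 different features
y = (X_a[:, 0] + X_b[:, 1] > 0).astype(float)    # labels kept by the coordinator

w_a, w_b, lr = np.zeros(3), np.zeros(2), 0.1     # one local sub-model per party

for _ in range(200):
    z = X_a @ w_a + X_b @ w_b        # only partial scores are combined
    p = 1.0 / (1.0 + np.exp(-z))     # joint logistic prediction
    g = (p - y) / n                  # error signal returned to both parties
    w_a -= lr * (X_a.T @ g)          # Party A updates using only its own data
    w_b -= lr * (X_b.T @ g)          # Party B updates using only its own data

acc = ((1.0 / (1.0 + np.exp(-(X_a @ w_a + X_b @ w_b))) > 0.5) == y).mean()
print(f"joint model accuracy: {acc:.2f}")
```

The point of the sketch is structural: the complete model exists only as the combination of the two local sub-models, and neither party ever sees the other's feature columns.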
Related papers
- De-VertiFL: A Solution for Decentralized Vertical Federated Learning [7.877130417748362]
This work introduces De-VertiFL, a novel solution for training models in a decentralized VFL setting.
De-VertiFL contributes by introducing a new network architecture distribution, an innovative knowledge exchange scheme, and a distributed federated training process.
The results demonstrate that De-VertiFL generally surpasses state-of-the-art methods in F1-score performance, while maintaining a decentralized and privacy-preserving framework.
arXiv Detail & Related papers (2024-10-08T15:31:10Z) - SoK: Challenges and Opportunities in Federated Unlearning [32.0365189539138]
This SoK paper aims to take a deep look at the federated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.
arXiv Detail & Related papers (2024-03-04T19:35:08Z) - A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL becomes an unneglectable challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - PFL-GAN: When Client Heterogeneity Meets Generative Models in
Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z) - Vertical Federated Learning: Taxonomies, Threats, and Prospects [22.487434998185773]
Federated learning (FL) is the most popular distributed machine learning technique.
FL can be divided into horizontal federated learning (HFL) and vertical federated learning (VFL).
VFL is more relevant than HFL as different companies hold different features for the same set of customers (see the partitioning sketch after this list).
Although VFL is an emerging area of research, it is not well-established compared to HFL.
arXiv Detail & Related papers (2023-02-03T05:13:40Z) - Federated Learning and Meta Learning: Approaches, Applications, and
Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - Efficient Split-Mix Federated Learning for On-Demand and In-Situ
Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z) - Vertical Federated Learning: Challenges, Methodologies and Experiments [34.4865409422585]
Vertical federated learning (VFL) is capable of constructing a hyper ML model by embracing sub-models from different clients.
In this paper, we discuss key challenges in VFL with effective solutions, and conduct experiments on real-life datasets.
arXiv Detail & Related papers (2022-02-09T06:56:41Z) - Towards Personalized Federated Learning [20.586573091790665]
We present a unique taxonomy dividing PFL techniques into data-based and model-based approaches.
We highlight their key ideas, and envision promising future trajectories of research towards new PFL architectural design.
arXiv Detail & Related papers (2021-03-01T02:45:19Z) - A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
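The HFL-versus-VFL split referenced in the "Vertical Federated Learning: Taxonomies, Threats, and Prospects" entry above is easiest to see on a toy table. In the sketch below (the dataset, column names, and party roles are assumptions for illustration), a horizontal split gives each party the same features for different customers, while a vertical split gives each party different features for the same customers.

```python
# Toy illustration of horizontal (HFL) vs. vertical (VFL) data partitioning.
# The dataset and party roles are made up for illustration only.
import pandas as pd

data = pd.DataFrame({
    "customer_id":  [1, 2, 3, 4],
    "income":       [30, 55, 42, 61],
    "credit_score": [600, 720, 650, 700],
    "purchases":    [5, 2, 7, 1],
})

# HFL: row split -- same features, different customers per party.
hfl_party_1 = data.iloc[:2]
hfl_party_2 = data.iloc[2:]

# VFL: column split -- different features, same customers per party,
# e.g. a bank and a retailer that share a customer base.
vfl_bank     = data[["customer_id", "income", "credit_score"]]
vfl_retailer = data[["customer_id", "purchases"]]

print(hfl_party_1, vfl_bank, vfl_retailer, sep="\n\n")
```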
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.