Decoupled Vertical Federated Learning for Practical Training on
Vertically Partitioned Data
- URL: http://arxiv.org/abs/2403.03871v1
- Date: Wed, 6 Mar 2024 17:23:28 GMT
- Title: Decoupled Vertical Federated Learning for Practical Training on
Vertically Partitioned Data
- Authors: Avi Amalanshu, Yash Sirvi, David I. Inouye
- Abstract summary: We propose a blockwise learning approach to Vertical Federated Learning (VFL).
In VFL, a host client owns data labels for each entity and learns a final representation based on intermediate local representations from all guest clients.
We implement DVFL to train split neural networks and show that model performance is comparable to VFL on a variety of classification datasets.
- Score: 9.84489449520821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical Federated Learning (VFL) is an emergent distributed machine learning
paradigm wherein owners of disjoint features of a common set of entities
collaborate to learn a global model without sharing data. In VFL, a host client
owns data labels for each entity and learns a final representation based on
intermediate local representations from all guest clients. Therefore, the host
is a single point of failure and label feedback can be used by malicious guest
clients to infer private features. Requiring all participants to remain active
and trustworthy throughout the entire training process is generally impractical
and altogether infeasible outside of controlled environments. We propose
Decoupled VFL (DVFL), a blockwise learning approach to VFL. By training each
model on its own objective, DVFL allows for decentralized aggregation and
isolation between feature learning and label supervision. With these
properties, DVFL is fault tolerant and secure. We implement DVFL to train split
neural networks and show that model performance is comparable to VFL on a
variety of classification datasets.
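The host/guest split described above, and DVFL's decoupling of it, can be sketched as follows. This is a minimal PyTorch illustration, not the paper's implementation: the module names, dimensions, and the use of a reconstruction loss as the guests' local objective are assumptions standing in for DVFL's actual blockwise objectives.

import torch
import torch.nn as nn

class Guest(nn.Module):
    # A guest client: encodes its private feature block into an intermediate representation.
    def __init__(self, in_dim, rep_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, rep_dim))
        self.decoder = nn.Linear(rep_dim, in_dim)  # head for a label-free local objective

    def forward(self, x):
        return self.encoder(x)

    def local_loss(self, x):
        # DVFL-style blockwise training: optimize a local target (here, reconstruction),
        # so no label feedback from the host ever reaches the guest.
        return nn.functional.mse_loss(self.decoder(self.encoder(x)), x)

class Host(nn.Module):
    # The host: owns the labels and maps concatenated guest representations to predictions.
    def __init__(self, rep_dim, n_guests, n_classes):
        super().__init__()
        self.head = nn.Linear(rep_dim * n_guests, n_classes)

    def forward(self, reps):
        return self.head(torch.cat(reps, dim=1))

guests = [Guest(in_dim=10, rep_dim=8) for _ in range(3)]
host = Host(rep_dim=8, n_guests=3, n_classes=5)
xs = [torch.randn(4, 10) for _ in range(3)]         # each guest's private features
reps = [g(x).detach() for g, x in zip(guests, xs)]  # detached: host loss cannot flow back
logits = host(reps)

In conventional VFL the host's supervised loss would be backpropagated through every guest (the label-feedback channel the abstract warns about); the detach above is what isolates feature learning from label supervision.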
Related papers
- De-VertiFL: A Solution for Decentralized Vertical Federated Learning [7.877130417748362]
This work introduces De-VertiFL, a novel solution for training models in a decentralized VFL setting.
De-VertiFL contributes by introducing a new network architecture distribution, an innovative knowledge exchange scheme, and a distributed federated training process.
The results demonstrate that De-VertiFL generally surpasses state-of-the-art methods in F1-score performance, while maintaining a decentralized and privacy-preserving framework.
arXiv Detail & Related papers (2024-10-08T15:31:10Z)
- TabVFL: Improving Latent Representation in Vertical Federated Learning [6.602969765752305]
TabVFL is a distributed framework designed to improve latent representation learning using the joint features of participants.
arXiv Detail & Related papers (2024-04-27T19:40:35Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
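A rough sketch of the mechanism under stated assumptions (prompt length, embedding size, and mean aggregation are illustrative choices, not the paper's protocol): clients tune only a small learnable prompt against a frozen backbone and exchange that tensor instead of model weights.

import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_len=8, emb_dim=128):
        super().__init__()
        # Learnable "virtual token" embeddings, prepended to every input sequence.
        self.prompt = nn.Parameter(torch.randn(prompt_len, emb_dim) * 0.02)

    def forward(self, token_embeddings):  # token_embeddings: (batch, seq_len, emb_dim)
        batch = token_embeddings.size(0)
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, token_embeddings], dim=1)

def aggregate(prompts):
    # Server-side step: only the small prompt tensors are exchanged, never the model.
    return torch.stack(prompts).mean(dim=0)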
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
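A toy sketch of the shared-plus-group-prompt idea (the group assignment, shapes, and prompting scheme are assumptions; SGPT's actual selection mechanism is more involved): a shared prompt serves the generalized objective while each group prompt personalizes for a cluster of clients.

import torch
import torch.nn as nn

class SharedGroupPrompt(nn.Module):
    def __init__(self, n_groups, prompt_len=4, emb_dim=128):
        super().__init__()
        self.shared = nn.Parameter(torch.randn(prompt_len, emb_dim) * 0.02)            # global
        self.group = nn.Parameter(torch.randn(n_groups, prompt_len, emb_dim) * 0.02)   # per-group

    def forward(self, token_embeddings, group_id):
        batch = token_embeddings.size(0)
        prompts = torch.cat([self.shared, self.group[group_id]], dim=0)
        prompts = prompts.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompts, token_embeddings], dim=1)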
arXiv Detail & Related papers (2023-10-27T17:22:09Z)
- BadVFL: Backdoor Attacks in Vertical Federated Learning [22.71527711053385]
Federated learning (FL) enables multiple parties to collaboratively train a machine learning model without sharing their data.
In this paper, we focus on robustness in VFL, in particular, on backdoor attacks.
We present a first-of-its-kind clean-label backdoor attack in VFL, which consists of two phases: a label inference phase and a backdoor phase.
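The two phases can be caricatured as follows (everything here is an assumption-laden toy, not the attack's real inference or trigger design):

import torch

def infer_labels(grad_wrt_rep):
    # Phase 1 (label inference): a malicious guest only sees gradients w.r.t. its
    # representation; a toy sign heuristic stands in for the paper's inference method.
    return (grad_wrt_rep.sum(dim=1) > 0).long()

def poison(features, inferred_labels, target_class, trigger_value=0.99):
    # Phase 2 (backdoor): stamp a trigger onto samples believed to be of the target
    # class; "clean-label" means the labels themselves are never touched.
    x = features.clone()
    x[inferred_labels == target_class, :3] = trigger_value  # toy trigger pattern
    return x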
arXiv Detail & Related papers (2023-04-18T09:22:32Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
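The SL exchange of smashed data described above follows a standard split-training pattern; a minimal two-party sketch (the shapes and architecture are illustrative assumptions):

import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Linear(32, 16), nn.ReLU())  # layers up to the cut
server_net = nn.Linear(16, 10)                            # layers after the cut

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))

smashed = client_net(x)                       # client sends cut-layer activations
received = smashed.detach().requires_grad_()  # server's copy of the smashed data
loss = nn.functional.cross_entropy(server_net(received), y)
loss.backward()                               # server backprop stops at the cut
smashed.backward(received.grad)               # server returns the gradient; the
                                              # client finishes backpropagation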
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed to make VFL practical for industrial applications.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
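One plausible reading of the distillation step, sketched under assumptions (the MSE representation-matching loss and shapes are illustrative, not the paper's design): on overlapped samples, the single-party student is pushed toward the cross-party teacher representation.

import torch
import torch.nn as nn

student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 16))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(own_features, teacher_rep):
    # own_features: the single party's local features; teacher_rep: the federated
    # teacher's representation for the same (overlapped) samples.
    opt.zero_grad()
    loss = nn.functional.mse_loss(student(own_features), teacher_rep)
    loss.backward()
    opt.step()
    return loss.item()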
arXiv Detail & Related papers (2022-09-30T17:59:27Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
For protecting user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
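A toy rendering of the adversarial-removal idea (the architectures and losses are assumptions, not FairVFL's code): an adversary learns to recover a private attribute from the unified representation, and the representation is trained to defeat it.

import torch
import torch.nn as nn

rep_model = nn.Linear(16, 16)  # produces the unified representation
adversary = nn.Linear(16, 2)   # tries to recover a private attribute
opt_rep = torch.optim.Adam(rep_model.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)

def adversarial_step(features, private_attr):
    # 1) Train the adversary on detached representations.
    adv_loss = nn.functional.cross_entropy(adversary(rep_model(features).detach()), private_attr)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # 2) Train the representation to fool the adversary (only rep_model is updated).
    fool_loss = -nn.functional.cross_entropy(adversary(rep_model(features)), private_attr)
    opt_rep.zero_grad(); fool_loss.backward(); opt_rep.step()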
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients [98.22390453672499]
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data.
We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients.
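The surrogate-label transformation admits a compact sketch: each client labels every sample with the index of the unlabeled set it came from, turning the problem into ordinary supervised FL (FedUL's loss correction that recovers the true classifier is omitted here).

import torch

def make_surrogate_dataset(unlabeled_sets):
    # unlabeled_sets: list of tensors, one unlabeled set per source.
    xs, ys = [], []
    for set_idx, data in enumerate(unlabeled_sets):
        xs.append(data)
        ys.append(torch.full((data.size(0),), set_idx, dtype=torch.long))
    return torch.cat(xs), torch.cat(ys)  # surrogate-labeled data for supervised training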
arXiv Detail & Related papers (2022-04-07T09:12:00Z)
- Multi-Participant Multi-Class Vertical Federated Learning [16.75182305714081]
We propose the Multi-participant Multi-class Vertical Federated Learning (MMVFL) framework for multi-class VFL problems involving multiple parties.
MMVFL enables label sharing from its owner to other VFL participants in a privacy-preserving manner.
Experiment results on real-world datasets show that MMVFL can effectively share label information among multiple VFL participants and match the multi-class classification performance of existing approaches.
arXiv Detail & Related papers (2020-01-30T02:39:50Z)