Towards Active Participant-Centric Vertical Federated Learning: Some Representations May Be All You Need
- URL: http://arxiv.org/abs/2410.17648v1
- Date: Wed, 23 Oct 2024 08:07:00 GMT
- Title: Towards Active Participant-Centric Vertical Federated Learning: Some Representations May Be All You Need
- Authors: Jon Irureta, Jon Imaz, Aizea Lojo, Marco González, Iñigo Perona,
- Abstract summary: We introduce a novel simplified approach to Vertical Federated Learning (VFL)
Active Participant-Centric VFL allows the active participant to do inference in a non collaborative fashion.
This method integrates unsupervised representation learning with knowledge distillation to achieve comparable accuracy to traditional VFL methods.
- Abstract: Vertical Federated Learning (VFL) enables collaborative model training across different participants with distinct features and common samples, while preserving data privacy. Existing VFL methodologies often struggle with realistic data partitions, typically incurring high communication costs and significant operational complexity. In this work, we introduce a novel simplified approach to VFL, Active Participant-Centric VFL (APC-VFL), that, to the best of our knowledge, is the first to require only a single communication round between participants, and allows the active participant to perform inference in a non-collaborative fashion. This method integrates unsupervised representation learning with knowledge distillation to achieve accuracy comparable to traditional VFL methods based on vertical split learning in classical settings, reducing the required communication rounds by up to $4200\times$, while being more flexible. Our approach also shows improvements compared to non-federated local models, as well as a comparable VFL proposal, VFedTrans, offering an efficient and flexible solution for collaborative learning.
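The abstract describes a pipeline of unsupervised representation learning at the passive participant, a single exchange of representations, and knowledge distillation into a model the active participant can run alone. Below is a minimal sketch of that pipeline under assumed names and a toy two-party setup (autoencoder representations, logit-based distillation, PyTorch); it is an interpretation of the abstract, not the authors' implementation.

```python
# Hedged sketch of an APC-VFL-style pipeline (interpretation of the abstract, not the official code).
# Passive party: learns unsupervised representations of its features (autoencoder).
# Single communication round: the passive party sends representations for the shared samples once.
# Active party: trains a teacher on [own features + received representations], then distills it
# into a student that uses only the active party's features, so inference needs no collaboration.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_shared, d_active, d_passive, d_repr, n_classes = 512, 10, 8, 4, 3

# Synthetic vertically partitioned data: common samples, disjoint feature sets.
x_active = torch.randn(n_shared, d_active)
x_passive = torch.randn(n_shared, d_passive)
y = torch.randint(0, n_classes, (n_shared,))

# --- Passive party: unsupervised representation learning (autoencoder) ---
encoder = nn.Sequential(nn.Linear(d_passive, d_repr), nn.ReLU())
decoder = nn.Linear(d_repr, d_passive)
opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
for _ in range(200):
    opt_ae.zero_grad()
    F.mse_loss(decoder(encoder(x_passive)), x_passive).backward()
    opt_ae.step()

# --- Single communication round: representations of the shared samples are sent once ---
with torch.no_grad():
    passive_repr = encoder(x_passive)

# --- Active party: teacher uses its own features plus the received representations ---
teacher = nn.Sequential(nn.Linear(d_active + d_repr, 32), nn.ReLU(), nn.Linear(32, n_classes))
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-2)
for _ in range(200):
    opt_t.zero_grad()
    F.cross_entropy(teacher(torch.cat([x_active, passive_repr], dim=1)), y).backward()
    opt_t.step()

# --- Knowledge distillation: the student sees only the active party's features ---
student = nn.Sequential(nn.Linear(d_active, 32), nn.ReLU(), nn.Linear(32, n_classes))
opt_s = torch.optim.Adam(student.parameters(), lr=1e-2)
T = 2.0  # softmax temperature for distillation
with torch.no_grad():
    soft_targets = F.softmax(teacher(torch.cat([x_active, passive_repr], dim=1)) / T, dim=1)
for _ in range(200):
    opt_s.zero_grad()
    s_logits = student(x_active)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1), soft_targets, reduction="batchmean") * T * T
    ce = F.cross_entropy(s_logits, y)
    (0.5 * kd + 0.5 * ce).backward()
    opt_s.step()

# Non-collaborative inference: the active participant uses the student on its own.
preds = student(x_active).argmax(dim=1)
```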
Related papers
- VFL-RPS: Relevant Participant Selection in Vertical Federated Learning [0.06181089784338582]
Federated Learning (FL) allows collaboration between different parties, while ensuring that the data across these parties is not shared.
We propose VFL-RPS, a novel method for participant selection in VFL, as a pre-training step.
We show that our method outperforms existing methods for participant selection in VFL.
arXiv Detail & Related papers (2025-02-20T09:05:55Z)
- Vertical Federated Learning in Practice: The Good, the Bad, and the Ugly [42.31182713177944]
This survey analyzes the real-world data distributions in potential Vertical Federated Learning (VFL) applications.
We propose a novel data-oriented taxonomy of VFL algorithms based on real VFL data distributions.
Based on these observations, we outline key research directions aimed at bridging the gap between current VFL research and real-world applications.
arXiv Detail & Related papers (2025-02-12T07:03:32Z)
- Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z)
- A Bargaining-based Approach for Feature Trading in Vertical Federated Learning [54.51890573369637]
We propose a bargaining-based feature trading approach in Vertical Federated Learning (VFL) to encourage economically efficient transactions.
Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties (a toy pricing sketch follows this entry).
arXiv Detail & Related papers (2024-02-23T10:21:07Z)
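The feature-trading entry above prices features by the performance gain they bring; the toy function below illustrates that idea under an assumed linear revenue model and an assumed bargaining split (illustrative names only, not the paper's actual protocol).

```python
# Toy illustration of performance gain-based pricing in VFL feature trading
# (assumed names and revenue model; not the paper's bargaining mechanism).

def performance_gain_price(acc_without: float, acc_with: float,
                           revenue_per_point: float, seller_share: float = 0.5) -> float:
    """Price a data party's features by the accuracy gain they bring to the task party.

    acc_without / acc_with: task-model accuracy without vs. with the purchased features.
    revenue_per_point: revenue the task party earns per accuracy point gained.
    seller_share: fraction of the surplus that the bargaining outcome allocates to the seller.
    """
    gain = max(0.0, acc_with - acc_without)       # performance gain from the traded features
    surplus = gain * 100 * revenue_per_point      # monetary value of that gain
    return seller_share * surplus                 # split decided by the bargaining outcome

# Example: features lift accuracy from 0.81 to 0.86, each point is worth 1000, even split -> 2500.0
print(performance_gain_price(0.81, 0.86, revenue_per_point=1000.0))
```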
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), however, the reality is different for many deep learning applications, as training models of this scale under standard FL is often impractical.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- BadVFL: Backdoor Attacks in Vertical Federated Learning [22.71527711053385]
Federated learning (FL) enables multiple parties to collaboratively train a machine learning model without sharing their data.
In this paper, we focus on robustness in VFL, in particular, on backdoor attacks.
We present a first-of-its-kind clean-label backdoor attack in VFL, which consists of two phases: a label inference phase and a backdoor phase.
arXiv Detail & Related papers (2023-04-18T09:22:32Z)
- Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed to make VFL practical for real-world industry applications.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data (a minimal distillation sketch follows this entry).
arXiv Detail & Related papers (2022-09-30T17:59:27Z)
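The Semi-VFL entry above distills cross-party knowledge into a single-party student that serves the whole sample space; the sketch below shows one plausible reading (representation matching on overlapped samples plus plain supervision on the party's own data), with assumed names and losses rather than the paper's actual method.

```python
# Hedged sketch of Semi-VFL-style representation distillation (assumed names and losses).
# A single-party student mimics the federated teacher's joint representation on overlapped
# samples and is trained with plain supervision on all of its own data, so it can serve both
# overlapped and non-overlapped samples without any cross-party call at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_overlap, n_only_a, d_a, d_b, d_h, n_classes = 256, 256, 6, 5, 8, 2
xa_ov, xb_ov = torch.randn(n_overlap, d_a), torch.randn(n_overlap, d_b)  # overlapped samples
xa_only = torch.randn(n_only_a, d_a)                                     # party-A-only samples
y_ov = torch.randint(0, n_classes, (n_overlap,))
y_only = torch.randint(0, n_classes, (n_only_a,))

# Federated teacher bottom models (already trained in practice; random here to show the wiring).
teacher_a, teacher_b = nn.Linear(d_a, d_h), nn.Linear(d_b, d_h)
with torch.no_grad():
    teacher_repr = torch.cat([teacher_a(xa_ov), teacher_b(xb_ov)], dim=1)  # joint representation

# Single-party student: maps party A's features into the teacher's joint-representation space.
student_enc = nn.Sequential(nn.Linear(d_a, 16), nn.ReLU(), nn.Linear(16, 2 * d_h))
student_head = nn.Linear(2 * d_h, n_classes)
opt = torch.optim.Adam(list(student_enc.parameters()) + list(student_head.parameters()), lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    # Representation distillation on overlapped data: mimic the cross-party representation.
    distill = F.mse_loss(student_enc(xa_ov), teacher_repr)
    # Plain supervision on all data owned by the student's party (overlapped + non-overlapped).
    logits = student_head(student_enc(torch.cat([xa_ov, xa_only], dim=0)))
    sup = F.cross_entropy(logits, torch.cat([y_ov, y_only], dim=0))
    (distill + sup).backward()
    opt.step()

# Inference-efficient, single-party serving over the whole sample space.
preds = student_head(student_enc(xa_only)).argmax(dim=1)
```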
- Low-Latency Cooperative Spectrum Sensing via Truncated Vertical Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence performance of T-VFL is characterized via mathematical analysis and validated by simulation results.
arXiv Detail & Related papers (2022-08-07T10:39:27Z)
- Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Updates [25.85564668511386]
We introduce CELU-VFL, a novel and efficient Vertical Federated Learning framework.
CELU-VFL exploits the local update technique to reduce the cross-party communication rounds.
We show that CELU-VFL can be up to six times faster than existing works (a toy caching sketch follows this entry).
arXiv Detail & Related papers (2022-07-29T12:10:36Z)
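The CELU-VFL entry above cuts cross-party communication by reusing cached messages for local updates; the sketch below illustrates that general idea in a toy two-party split setting (assumed names and a simplified update rule, not the CELU-VFL algorithm itself).

```python
# Toy sketch of cache-enabled local updates in split-style VFL (assumed two-party setup;
# illustrates reusing a cached cross-party message for several local steps, not CELU-VFL itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n, d_a, d_b, d_h, n_classes = 256, 6, 5, 8, 2
x_a, x_b = torch.randn(n, d_a), torch.randn(n, d_b)
y = torch.randint(0, n_classes, (n,))

bottom_a = nn.Linear(d_a, d_h)        # active party's bottom model
bottom_b = nn.Linear(d_b, d_h)        # passive party's bottom model
top = nn.Linear(2 * d_h, n_classes)   # active party's top model
opt_a = torch.optim.Adam(list(bottom_a.parameters()) + list(top.parameters()), lr=1e-2)
opt_b = torch.optim.Adam(bottom_b.parameters(), lr=1e-2)

local_steps = 5  # extra local updates per communication round, driven by the cached message
for comm_round in range(20):
    # Communication round: exchange fresh embeddings and gradients once; both parties update.
    opt_a.zero_grad()
    opt_b.zero_grad()
    emb_b = bottom_b(x_b)
    F.cross_entropy(top(torch.cat([bottom_a(x_a), emb_b], dim=1)), y).backward()
    opt_a.step()
    opt_b.step()
    # Cache the passive party's message and keep training locally without further communication.
    cached_b = emb_b.detach()          # stale but reusable cross-party message
    for _ in range(local_steps):
        opt_a.zero_grad()
        F.cross_entropy(top(torch.cat([bottom_a(x_a), cached_b], dim=1)), y).backward()
        opt_a.step()
```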
- Achieving Model Fairness in Vertical Federated Learning [47.8598060954355]
Vertical federated learning (VFL) enables multiple enterprises possessing non-overlapped features to strengthen their machine learning models without disclosing their private data and model parameters.
VFL suffers from fairness issues, i.e., the learned model may unfairly discriminate against groups defined by sensitive attributes.
We propose a fair VFL framework to tackle this problem.
arXiv Detail & Related papers (2021-09-17T04:40:11Z)
- Multi-Participant Multi-Class Vertical Federated Learning [16.75182305714081]
We propose the Multi-participant Multi-class Vertical Federated Learning (MMVFL) framework for multi-class VFL problems involving multiple parties.
MMVFL enables label sharing from its owner to other VFL participants in a privacy-preserving manner.
Experiment results on real-world datasets show that MMVFL can effectively share label information among multiple VFL participants and match the multi-class classification performance of existing approaches.
arXiv Detail & Related papers (2020-01-30T02:39:50Z)