Vertical Semi-Federated Learning for Efficient Online Advertising
- URL: http://arxiv.org/abs/2209.15635v2
- Date: Sat, 1 Jul 2023 14:41:25 GMT
- Title: Vertical Semi-Federated Learning for Efficient Online Advertising
- Authors: Wenjie Li, Qiaolin Xia, Hao Cheng, Kouyin Xue, Shu-Tao Xia
- Abstract summary: Semi-VFL (Vertical Semi-Federated Learning) is proposed as a practical way to apply VFL in industrial settings.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
- Score: 50.18284051956359
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The traditional vertical federated learning schema suffers from two main
issues: 1) its applicable scope is restricted to overlapped samples and 2) real-time
federated serving imposes a high system cost, which limits its application to
advertising systems. To this end, we advocate a new learning setting, Semi-VFL
(Vertical Semi-Federated Learning), to tackle these challenges. Semi-VFL is
proposed to make VFL practical for industrial deployment by learning a
federation-aware local model that performs better than single-party models
while maintaining the convenience of local serving. For this purpose, we propose
the carefully designed Joint Privileged Learning framework (JPL) to i) alleviate
the absence of the passive party's features and ii) adapt to the whole sample
space. Specifically, we build an inference-efficient single-party student model
applicable to the whole sample space while preserving the advantage of the
federated feature extension. New representation distillation methods are
designed to extract cross-party feature correlations for both the overlapped
and non-overlapped data. We conducted extensive experiments on real-world
advertising datasets. The results show that our method achieves the best
performance over baseline methods and validate its superiority in the Semi-VFL
setting.
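To make the idea concrete, below is a minimal, hypothetical sketch (PyTorch-style) of the kind of representation distillation the abstract describes: a frozen federated teacher's fused cross-party representation is distilled into a single-party student that only consumes the active party's features at serving time. All module names, dimensions, and the loss weighting are illustrative assumptions rather than the paper's actual JPL implementation, and for brevity only the overlapped-sample distillation term is shown, whereas the paper also designs distillation methods for non-overlapped data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartyEncoder(nn.Module):
    """Feature encoder for one party's raw features (illustrative sizes)."""
    def __init__(self, in_dim: int, rep_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, rep_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class SinglePartyStudent(nn.Module):
    """Student that sees only the active party's features at inference time."""
    def __init__(self, in_dim: int, rep_dim: int = 64):
        super().__init__()
        self.encoder = PartyEncoder(in_dim, rep_dim)
        self.head = nn.Linear(rep_dim, 1)         # CTR-style binary logit
        self.mimic = nn.Linear(rep_dim, rep_dim)  # predicts the teacher's fused representation

    def forward(self, x_active: torch.Tensor):
        z = self.encoder(x_active)
        return self.head(z).squeeze(-1), self.mimic(z)

def distillation_step(student: SinglePartyStudent,
                      x_active: torch.Tensor,    # active-party features, full sample space
                      y: torch.Tensor,           # click labels, full sample space
                      teacher_rep: torch.Tensor, # frozen fused representation from the federated
                                                 # teacher; rows for non-overlapped samples are
                                                 # placeholders and ignored via the mask below
                      overlapped: torch.Tensor,  # bool mask: True where both parties hold the sample
                      alpha: float = 0.5) -> torch.Tensor:
    """Supervised loss on all samples plus representation distillation on overlapped ones."""
    logits, z_hat = student(x_active)
    task_loss = F.binary_cross_entropy_with_logits(logits, y.float())
    if overlapped.any():
        rep_loss = F.mse_loss(z_hat[overlapped], teacher_rep[overlapped])
    else:
        rep_loss = torch.zeros((), device=x_active.device)
    return task_loss + alpha * rep_loss
```

At serving time only the student's encoder and prediction head are needed, so no passive-party features or federated round trips are required, which is the local-serving convenience the abstract emphasizes.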
Related papers
- Towards Active Participant-Centric Vertical Federated Learning: Some Representations May Be All You Need [0.0]
We introduce a novel, simplified approach to Vertical Federated Learning (VFL).
Active Participant-Centric VFL allows the active participant to perform inference in a non-collaborative fashion.
This method integrates unsupervised representation learning with knowledge distillation to achieve comparable accuracy to traditional VFL methods.
arXiv Detail & Related papers (2024-10-23T08:07:00Z)
- De-VertiFL: A Solution for Decentralized Vertical Federated Learning [7.877130417748362]
This work introduces De-VertiFL, a novel solution for training models in a decentralized VFL setting.
De-VertiFL contributes by introducing a new network architecture distribution, an innovative knowledge exchange scheme, and a distributed federated training process.
The results demonstrate that De-VertiFL generally surpasses state-of-the-art methods in F1-score performance, while maintaining a decentralized and privacy-preserving framework.
arXiv Detail & Related papers (2024-10-08T15:31:10Z)
- A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs [57.35402286842029]
We propose a novel Aligned Federated Primal Dual (A-FedPD) method, which constructs virtual dual updates to align the global consensus with local dual variables.
We provide a comprehensive analysis of the A-FedPD method's efficiency for clients that remain unparticipated over protracted periods.
arXiv Detail & Related papers (2024-09-27T17:00:32Z)
- Vertical Federated Learning Hybrid Local Pre-training [4.31644387824845]
We propose a novel VFL Hybrid Local Pre-training (VFLHLP) approach for Vertical Federated Learning (VFL).
VFLHLP first pre-trains local networks on the local data of participating parties.
Then it utilizes these pre-trained networks to adjust the sub-model for the labeled party or enhance representation learning for other parties during downstream federated learning on aligned data.
arXiv Detail & Related papers (2024-05-20T08:57:39Z)
- Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
arXiv Detail & Related papers (2023-10-27T17:22:09Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, providing closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Towards Fairer and More Efficient Federated Learning via Multidimensional Personalized Edge Models [36.84027517814128]
Federated learning (FL) trains models on massive, geographically distributed edge data while maintaining privacy.
We propose a Customized Federated Learning (CFL) system to eliminate FL heterogeneity from multiple dimensions.
CFL tailors personalized models from the specially designed global model for each client jointly guided by an online trained model-search helper and a novel aggregation algorithm.
arXiv Detail & Related papers (2023-02-09T06:55:19Z)
- VFed-SSD: Towards Practical Vertical Federated Advertising [53.08038962443853]
We propose a semi-supervised split distillation framework VFed-SSD to alleviate the two limitations.
Specifically, we develop a self-supervised task, Matched Pair Detection (MPD), to exploit the vertically partitioned unlabeled data.
Our framework provides an efficient federation-enhanced solution for real-time display advertising with minimal deploying cost and significant performance lift.
arXiv Detail & Related papers (2022-05-31T17:45:30Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- FedSemi: An Adaptive Federated Semi-Supervised Learning Framework [23.90642104477983]
Federated learning (FL) has emerged as an effective technique for co-training machine learning models without sharing data or leaking privacy.
Most existing FL methods focus on the supervised setting and ignore the utilization of unlabeled data.
We propose FedSemi, a novel, adaptive, and general framework, which first introduces consistency regularization into FL using a teacher-student model.
arXiv Detail & Related papers (2020-12-06T15:46:04Z)