VFedMH: Vertical Federated Learning for Training Multiple Heterogeneous Models
- URL: http://arxiv.org/abs/2310.13367v2
- Date: Thu, 8 Feb 2024 08:24:53 GMT
- Title: VFedMH: Vertical Federated Learning for Training Multiple Heterogeneous Models
- Authors: Shuo Wang and Keke Gai and Jing Yu and Liehuang Zhu and Kim-Kwang Raymond Choo and Bin Xiao
- Abstract summary: This paper proposes a novel approach called Vertical federated learning for training multiple Heterogeneous models (VFedMH).
To protect the participants' local embedding values, we propose an embedding protection method based on lightweight blinding factors.
Experiments are conducted to demonstrate that VFedMH can simultaneously train multiple heterogeneous models with heterogeneous optimization and outperform some recent methods in model performance.
- Score: 53.30484242706966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical federated learning (VFL) has garnered significant
attention as it allows clients to train machine learning models
collaboratively without sharing local data, which protects clients' private
data. However, existing VFL methods face challenges when dealing with
heterogeneous local models among participants, which affects optimization
convergence and generalization. To address this challenge, this paper
proposes a novel approach called Vertical federated learning for training
multiple Heterogeneous models (VFedMH). VFedMH focuses on aggregating the
local embeddings of each participant's knowledge during forward propagation.
To protect the participants' local embedding values, we propose an embedding
protection method based on lightweight blinding factors. In particular,
participants obtain local embeddings using their local heterogeneous models.
Each passive party, which owns only the features of the samples, injects a
blinding factor into its local embedding and sends it to the active party.
The active party aggregates the local embeddings to obtain the global
knowledge embedding and sends it to the passive parties. The passive parties
then use the global embedding to propagate forward through their local
heterogeneous networks. However, a passive party does not own the sample
labels, so it cannot compute its local model gradients on its own. To
overcome this limitation, the active party assists each passive party in
computing its local heterogeneous model gradients. Each participant then
trains its local model using these gradients, with the objective of
minimizing the loss value of its respective local heterogeneous model.
Extensive experiments demonstrate that VFedMH can simultaneously train
multiple heterogeneous models with heterogeneous optimization and outperforms
several recent methods in model performance.
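The protocol described in the abstract lends itself to a short sketch. The following NumPy code is a minimal illustration under stated assumptions: the heterogeneous local models are stood in for by linear maps, the lightweight blinding factors are modeled as pairwise additive masks that cancel in the active party's sum, and the active party's loss is a simple MSE on the aggregated embedding. The paper's actual constructions may differ, and names such as `PassiveParty` and `pairwise_masks` are illustrative only.

```python
# Illustrative sketch of VFedMH-style forward aggregation with blinding and
# active-party-assisted gradients. Assumptions (not from the paper): local
# models are linear maps, blinding uses pairwise-cancelling additive masks,
# and the active party's loss is a simple MSE on the aggregated embedding.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # embedding dimension (illustrative)


class PassiveParty:
    """Owns only features; its 'heterogeneous model' here is a linear map."""

    def __init__(self, pid: int, n_features: int):
        self.pid = pid
        self.W = rng.normal(size=(n_features, DIM))

    def local_embedding(self, x: np.ndarray) -> np.ndarray:
        return x @ self.W


def pairwise_masks(pids, seed=42):
    """For each pair (i, j) with i < j, a shared mask is added by i and
    subtracted by j, so all masks cancel when the active party sums the
    blinded embeddings."""
    masks = {pid: np.zeros(DIM) for pid in pids}
    r = np.random.default_rng(seed)
    for a in range(len(pids)):
        for b in range(a + 1, len(pids)):
            m = r.normal(size=DIM)
            masks[pids[a]] += m
            masks[pids[b]] -= m
    return masks


parties = [PassiveParty(pid, n_features=8) for pid in range(3)]
features = [rng.normal(size=8) for _ in parties]
masks = pairwise_masks([p.pid for p in parties])

# Forward: each passive party blinds its embedding before sending it.
blinded = [p.local_embedding(x) + masks[p.pid] for p, x in zip(parties, features)]
global_embedding = np.sum(blinded, axis=0)  # masks cancel in the aggregate

plain_sum = np.sum([p.local_embedding(x) for p, x in zip(parties, features)], axis=0)
assert np.allclose(global_embedding, plain_sum)

# Backward (conceptual): the active party holds the labels, computes the loss
# gradient w.r.t. the global embedding, and returns it so each passive party
# can update its own model without ever seeing the labels.
target = rng.normal(size=DIM)                    # stand-in label signal
grad_global = 2.0 * (global_embedding - target)  # d(MSE)/d(global_embedding)
for p, x in zip(parties, features):
    # global = sum of local embeddings, so dL/d(local embedding) = grad_global;
    # chain rule through the linear map gives dL/dW = outer(x, grad_global).
    p.W -= 0.1 * np.outer(x, grad_global)
```

In the full method, the passive parties propagate the returned global embedding forward through deeper heterogeneous networks; the linear maps above only stand in for them.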
Related papers
- Vertical Federated Learning Hybrid Local Pre-training [4.31644387824845]
We propose a novel Hybrid Local Pre-training (VFLHLP) approach for Vertical Federated Learning (VFL).
VFLHLP first pre-trains local networks on the local data of participating parties.
Then it utilizes these pre-trained networks to adjust the sub-model for the labeled party or enhance representation learning for other parties during downstream federated learning on aligned data.
arXiv Detail & Related papers (2024-05-20T08:57:39Z)
- FedDistill: Global Model Distillation for Local Model De-Biasing in Non-IID Federated Learning [10.641875933652647]
Federated Learning (FL) enables collaborative machine learning without centralizing clients' data.
FL faces challenges due to non-uniformly distributed (non-iid) data across clients.
This paper introduces FedDistill, a framework enhancing knowledge transfer from the global model to local models; a toy distillation sketch follows this entry.
arXiv Detail & Related papers (2024-04-14T10:23:30Z)
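FedDistill's summary points to distillation from the global model into each local model. Below is a hedged, self-contained sketch of such a global-to-local distillation objective: a task loss plus a KL term toward the global model's softened predictions. The temperature, the 0.5 weighting, and the random logits are illustrative stand-ins, not FedDistill's actual design.

```python
# Hedged sketch of global-to-local distillation: a local objective combining a
# task loss with a KL term toward the global model's softened predictions.
# All names and constants are illustrative, not FedDistill's actual design.
import numpy as np

def softmax(z, t=1.0):
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / t)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

rng = np.random.default_rng(0)
logits_local = rng.normal(size=(4, 10))   # local model outputs on a batch
logits_global = rng.normal(size=(4, 10))  # global (teacher) outputs, fixed
labels = rng.integers(0, 10, size=4)

# Cross-entropy task loss plus distillation toward the global teacher.
task_loss = -np.mean(np.log(softmax(logits_local)[np.arange(4), labels] + 1e-12))
distill_loss = np.mean(kl(softmax(logits_global, t=2.0), softmax(logits_local, t=2.0)))
total_loss = task_loss + 0.5 * distill_loss  # 0.5: illustrative weighting
```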
- pFedMoE: Data-Level Personalization with Mixture of Experts for Model-Heterogeneous Personalized Federated Learning [35.72303739409116]
We propose pFedMoE, a model-heterogeneous personalized federated learning method based on a Mixture of Experts.
It assigns a shared homogeneous small feature extractor and a local gating network to each client's local heterogeneous large model.
Overall, pFedMoE enhances local model personalization at a fine-grained data level; a minimal sketch follows this entry.
arXiv Detail & Related papers (2024-02-02T12:09:20Z)
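The pFedMoE entry describes a concrete architecture: a shared small feature extractor, a client-local heterogeneous extractor, and a local gate that mixes the two per input. A minimal sketch of that data-level mixing follows; the linear stand-ins, dimensions, and the two-way softmax gate are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of pFedMoE-style data-level mixing: a local gate produces
# per-example weights over a shared homogeneous extractor and the client's
# own heterogeneous extractor. Shapes and linear stand-ins are illustrative.
import numpy as np

rng = np.random.default_rng(1)
D_IN, D_REP = 8, 16

W_shared = rng.normal(size=(D_IN, D_REP))  # shared small extractor (aggregated)
W_local = rng.normal(size=(D_IN, D_REP))   # this client's heterogeneous extractor
W_gate = rng.normal(size=(D_IN, 2))        # local gating network (kept private)

def personalized_representation(x: np.ndarray) -> np.ndarray:
    gate_logits = x @ W_gate
    w = np.exp(gate_logits - gate_logits.max())
    w = w / w.sum()                          # softmax over the two experts
    return w[0] * (x @ W_shared) + w[1] * (x @ W_local)

x = rng.normal(size=D_IN)
rep = personalized_representation(x)         # fed into the client's predictor
```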
- Federated Learning via Input-Output Collaborative Distillation [40.38454921071808]
Federated learning (FL) is a machine learning paradigm in which distributed local nodes collaboratively train a central model without sharing individually held private data.
We propose a data-free FL framework based on local-to-central collaborative distillation with direct input and output space exploitation.
arXiv Detail & Related papers (2023-12-22T07:05:13Z)
- pFedES: Model Heterogeneous Personalized Federated Learning with Feature Extractor Sharing [19.403843478569303]
We propose a model-heterogeneous personalized federated learning approach based on feature extractor sharing.
It incorporates a small homogeneous feature extractor into each client's heterogeneous local model.
It achieves 1.61% higher test accuracy while reducing communication and computation costs by 99.6% and 82.9%, respectively; a sketch of the sharing pattern follows this entry.
arXiv Detail & Related papers (2023-11-12T15:43:39Z)
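pFedES's summary suggests that only the small homogeneous feature extractor is shared, which would keep communication low while the heterogeneous bulk of each model stays local. A plausible reading, sketched below, is that the server averages just the shared extractor's parameters, FedAvg-style; this is an assumption about the mechanism, not the paper's verbatim algorithm.

```python
# Hedged sketch: federated averaging restricted to a small shared feature
# extractor, while each client's heterogeneous model parameters never leave
# the client. The dict layout and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
clients = [
    {
        "shared_extractor": rng.normal(size=(8, 4)),  # tiny, communicated
        "hetero_model": rng.normal(size=(4, int(rng.integers(16, 64)))),  # stays local
    }
    for _ in range(3)
]

# Server step: average only the homogeneous extractor across clients.
avg_extractor = np.mean([c["shared_extractor"] for c in clients], axis=0)
for c in clients:
    c["shared_extractor"] = avg_extractor.copy()  # broadcast back to clients
```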
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Improving Heterogeneous Model Reuse by Density Estimation [105.97036205113258]
This paper studies multiparty learning, aiming to learn a model using the private data of different participants.
Model reuse is a promising solution for multiparty learning, assuming that a local model has been trained for each party.
arXiv Detail & Related papers (2023-05-23T09:46:54Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to model architectures and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner; a toy sketch follows this entry.
arXiv Detail & Related papers (2021-01-27T10:10:18Z)
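The FedH2L entry describes exchanging only posteriors on a shared seed set, with mutual distillation between peers. A toy sketch of that exchange is below; the KL direction, seed-set size, and scheduling are illustrative choices, not the paper's exact formulation.

```python
# Toy sketch of FedH2L-style mutual distillation: two participants exchange
# class posteriors on a shared seed set and each adds a KL loss toward its
# peer. Posteriors are random stand-ins; no parameters or gradients move.
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def kl(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)

# Each participant's predictions on the same 4-example, 3-class seed set.
post_a = softmax(rng.normal(size=(4, 3)))
post_b = softmax(rng.normal(size=(4, 3)))

# Mutual distillation terms: each side treats the peer's posteriors as targets.
loss_a = float(np.mean(kl(post_b, post_a)))  # added to A's local objective
loss_b = float(np.mean(kl(post_a, post_b)))  # added to B's local objective
```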