Continual Horizontal Federated Learning for Heterogeneous Data
- URL: http://arxiv.org/abs/2203.02108v1
- Date: Fri, 4 Mar 2022 02:52:25 GMT
- Title: Continual Horizontal Federated Learning for Heterogeneous Data
- Authors: Junki Mori, Isamu Teranishi, Ryo Furukawa
- Abstract summary: Federated learning is a promising machine learning technique that enables multiple clients to collaboratively build a model without revealing the raw data to each other.
In this paper, we propose an HFL method using neural networks, named continual horizontal federated learning (CHFL), that improves the performance of HFL by taking advantage of the unique features of each client.
CHFL greatly outperforms vanilla HFL, which uses only common features, and local learning, which uses all the features each client has.
- Score: 1.493231854066654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a promising machine learning technique that enables
multiple clients to collaboratively build a model without revealing the raw
data to each other. Among various types of federated learning methods,
horizontal federated learning (HFL) is the best-studied category and handles
homogeneous feature spaces. However, in the case of heterogeneous feature
spaces, HFL uses only common features and leaves client-specific features
unutilized. In this paper, we propose an HFL method using neural networks named
continual horizontal federated learning (CHFL), a continual learning approach
to improve the performance of HFL by taking advantage of unique features of
each client. CHFL splits the network into two columns corresponding to common
features and unique features, respectively. It jointly trains the first column
by using common features through vanilla HFL and locally trains the second
column by using unique features and leveraging the knowledge of the first one
via lateral connections, without interfering with its federated training.
We conduct experiments on various real-world datasets and show that CHFL
greatly outperforms both vanilla HFL, which uses only common features, and local
learning, which uses all the features each client has.
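As a concrete illustration of the two-column architecture in the abstract, here is a minimal PyTorch sketch. The layer sizes, the single lateral connection, and the use of detach() to isolate the federated column are assumptions of this sketch, not details taken from the paper.

```python
# Minimal sketch of CHFL's two-column network, assuming a simple MLP.
# Layer sizes and the exact lateral wiring are illustrative assumptions.
import torch
import torch.nn as nn

class CHFLNet(nn.Module):
    def __init__(self, n_common, n_unique, hidden=64, n_classes=2):
        super().__init__()
        # Column 1: trained jointly on common features via vanilla HFL;
        # only these parameters would be aggregated by the server.
        self.common1 = nn.Linear(n_common, hidden)
        self.common2 = nn.Linear(hidden, hidden)
        # Column 2: trained locally on each client's unique features.
        self.unique1 = nn.Linear(n_unique, hidden)
        self.unique2 = nn.Linear(hidden, hidden)
        # Lateral connection feeding column-1 knowledge into column 2.
        self.lateral = nn.Linear(hidden, hidden)
        # Local head that combines both columns for prediction.
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_common, x_unique):
        h1 = torch.relu(self.common1(x_common))
        h2 = torch.relu(self.common2(h1))
        # detach() lets the local column consume column-1 features without
        # back-propagating into them, so local training does not interfere
        # with the federated training of the first column.
        u1 = torch.relu(self.unique1(x_unique) + self.lateral(h1.detach()))
        u2 = torch.relu(self.unique2(u1))
        return self.head(torch.cat([h2.detach(), u2], dim=1))
```

In a training loop, a FedAvg-style server would aggregate only the common1/common2 parameters each round, while the unique, lateral, and head parameters stay on the client.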
Related papers
- ICAFS: Inter-Client-Aware Feature Selection for Vertical Federated Learning [10.133952242666346]
Feature selection plays a crucial role in Vertical Federated Learning (VFL).
We introduce ICAFS, a novel multi-stage ensemble approach for effective feature selection in VFL that considers inter-client interactions.
Experiments on multiple real-world datasets demonstrate that ICAFS surpasses current state-of-the-art methods in prediction accuracy.
arXiv Detail & Related papers (2025-04-15T04:19:04Z)
- De-VertiFL: A Solution for Decentralized Vertical Federated Learning [7.877130417748362]
This work introduces De-VertiFL, a novel solution for training models in a decentralized VFL setting.
De-VertiFL contributes by introducing a new network architecture distribution, an innovative knowledge exchange scheme, and a distributed federated training process.
The results demonstrate that De-VertiFL generally surpasses state-of-the-art methods in F1-score performance, while maintaining a decentralized and privacy-preserving framework.
arXiv Detail & Related papers (2024-10-08T15:31:10Z)
- An Element-Wise Weights Aggregation Method for Federated Learning [11.9232569348563]
This paper introduces an innovative Element-Wise Weights Aggregation Method for Federated Learning (EWWA-FL).
EWWA-FL aggregates local weights to the global model at the level of individual elements, allowing each participating client to make element-wise contributions to the learning process.
By taking into account the unique dataset characteristics of each client, EWWA-FL enhances the robustness of the global model to different datasets.
arXiv Detail & Related papers (2024-04-24T15:16:06Z)
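A hedged sketch of what the element-wise aggregation in the entry above could look like, in contrast to FedAvg's single scalar weight per client. How EWWA-FL actually derives the per-element weights is not described in this summary, so the weight tensors here are assumed inputs.

```python
# Sketch of element-wise aggregation: each client supplies a per-element
# weight tensor instead of one scalar coefficient (as in FedAvg).
# How the weights are computed is EWWA-FL's contribution and is assumed here.
import torch

def elementwise_aggregate(client_params, client_weights):
    """client_params / client_weights: one dict (name -> tensor) per client,
    with each weight tensor shaped like the parameter it weights."""
    agg = {k: torch.zeros_like(v) for k, v in client_params[0].items()}
    norm = {k: torch.zeros_like(v) for k, v in client_params[0].items()}
    for params, weights in zip(client_params, client_weights):
        for name in params:
            agg[name] += weights[name] * params[name]
            norm[name] += weights[name]
    # Element-wise normalization; clamp avoids division by zero.
    return {name: agg[name] / norm[name].clamp_min(1e-12) for name in agg}
```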
- Fed-CO2: Cooperation of Online and Offline Models for Severe Data Heterogeneity in Federated Learning [14.914477928398133]
Federated Learning (FL) has emerged as a promising distributed learning paradigm.
The effectiveness of FL is highly dependent on the quality of the data that is being used for training.
We propose Fed-CO$_2$, a universal FL framework that handles both label distribution skew and feature skew.
arXiv Detail & Related papers (2023-12-21T15:12:12Z)
- FediOS: Decoupling Orthogonal Subspaces for Personalization in Feature-skew Federated Learning [6.076894295435773]
In personalized federated learning (pFL), clients may have heterogeneous (also known as non-IID) data.
In FediOS, we reformulate the decoupling into two feature extractors (generic and personalized) and one shared prediction head.
The shared prediction head is trained to balance the importance of generic and personalized features during inference.
arXiv Detail & Related papers (2023-11-30T13:50:38Z)
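The FediOS entry above maps naturally onto a small model with two extractors and one shared head; the extractor architectures and the concatenation-based fusion below are assumptions of this sketch, not the paper's reference design.

```python
# Illustrative sketch of FediOS-style decoupling: a generic extractor
# (aggregated across clients), a personalized extractor (kept local), and
# a shared prediction head that balances the two feature sets.
import torch
import torch.nn as nn

class DecoupledModel(nn.Module):
    def __init__(self, in_dim, feat_dim, n_classes):
        super().__init__()
        # Generic extractor: its parameters would be averaged by the server.
        self.generic = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Personalized extractor: never leaves the client.
        self.personal = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Shared head learns how to weigh generic vs. personalized features.
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, x):
        z = torch.cat([self.generic(x), self.personal(x)], dim=1)
        return self.head(z)
```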
- Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
arXiv Detail & Related papers (2023-10-27T17:22:09Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Collaborating Heterogeneous Natural Language Processing Tasks via Federated Learning [55.99444047920231]
We conduct extensive experiments on six widely-used datasets covering both Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks.
The proposed ATC framework achieves significant improvements compared with various baseline methods.
arXiv Detail & Related papers (2022-12-12T09:27:50Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
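The representation-sharing entry above centers on online knowledge distillation with a contrastive loss. One plausible reading, sketched below, is an InfoNCE-style alignment between a client's and a peer's representations of the same batch; the paper's actual loss may differ.

```python
# Hedged sketch: align a client's representations with a peer's (or the
# server's) representations of the same samples via an InfoNCE-style
# contrastive loss; the exact formulation is an assumption here.
import torch
import torch.nn.functional as F

def contrastive_distill_loss(z_local, z_peer, temperature=0.1):
    """z_local, z_peer: (batch, dim) representations of the same samples.
    Row i of z_peer is the positive for row i of z_local; all other
    rows act as negatives."""
    z_local = F.normalize(z_local, dim=1)
    z_peer = F.normalize(z_peer, dim=1)
    logits = z_local @ z_peer.t() / temperature
    targets = torch.arange(z_local.size(0), device=z_local.device)
    return F.cross_entropy(logits, targets)
```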
- Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed to make VFL practical for industrial applications.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
arXiv Detail & Related papers (2022-09-30T17:59:27Z)
- Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization.
It provides only 1%-2% better accuracy than Multi-head Split Learning on the MNIST test set.
arXiv Detail & Related papers (2021-09-19T22:57:23Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
arXiv Detail & Related papers (2021-08-19T12:20:31Z)