Multi-Level Branched Regularization for Federated Learning
- URL: http://arxiv.org/abs/2207.06936v1
- Date: Thu, 14 Jul 2022 13:59:26 GMT
- Title: Multi-Level Branched Regularization for Federated Learning
- Authors: Jinkyu Kim, Geeho Kim and Bohyung Han
- Abstract summary: We propose a novel architectural regularization technique that constructs multiple auxiliary branches in each local model by grafting local and global subnetworks at several different levels.
We demonstrate remarkable performance gains in terms of accuracy and efficiency compared to existing methods.
- Score: 46.771459325434535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A critical challenge of federated learning is data heterogeneity and
imbalance across clients, which leads to inconsistency between local networks
and unstable convergence of global models. To alleviate these limitations, we
propose a novel architectural regularization technique that constructs multiple
auxiliary branches in each local model by grafting local and global subnetworks
at several different levels and that learns the representations of the main
pathway in the local model congruent to the auxiliary hybrid pathways via
online knowledge distillation. The proposed technique effectively robustifies
the global model even in the non-iid setting and can be conveniently applied to
various federated learning frameworks without incurring extra communication
costs. We perform comprehensive empirical studies and demonstrate
remarkable performance gains in terms of accuracy and efficiency compared to
existing methods. The source code is available at our project page.
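A minimal sketch of the grafting idea, assuming the local and global models decompose into the same sequence of blocks; the names (`hybrid_forward`, `branched_regularization_loss`, `local_blocks`, `global_blocks`) and the distillation weighting are illustrative assumptions, not the authors' released code:

```python
# Hedged sketch: hybrid pathways graft frozen global blocks onto local blocks
# at several depths, and the main (all-local) pathway is distilled toward them.
import torch
import torch.nn.functional as F

def hybrid_forward(x, local_blocks, global_blocks, graft_level):
    """One auxiliary branch: local blocks up to graft_level, then the
    corresponding (frozen) global blocks for the rest of the network."""
    h = x
    for i in range(len(local_blocks)):
        h = local_blocks[i](h) if i < graft_level else global_blocks[i](h)
    return h

def branched_regularization_loss(x, y, local_blocks, global_blocks, T=3.0):
    # Main pathway: purely local blocks, trained with the task loss.
    h = x
    for block in local_blocks:
        h = block(h)
    main_logits = h
    loss = F.cross_entropy(main_logits, y)
    # Online knowledge distillation toward each hybrid pathway.
    for level in range(1, len(local_blocks)):
        with torch.no_grad():
            hybrid_logits = hybrid_forward(x, local_blocks, global_blocks, level)
        loss = loss + T ** 2 * F.kl_div(
            F.log_softmax(main_logits / T, dim=1),
            F.softmax(hybrid_logits / T, dim=1),
            reduction="batchmean",
        )
    return loss
```

Since the global blocks are already on every client after each round, the auxiliary branches add nothing beyond standard federated averaging, consistent with the abstract's no-extra-communication claim.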
Related papers
- Proximity-based Self-Federated Learning [1.0066310107046081]
This paper introduces a novel, fully-distributed federated learning strategy called proximity-based self-federated learning.
Unlike traditional algorithms, our approach encourages clients to share and adjust their models with neighbouring nodes based on geographic proximity and model accuracy.
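A hedged sketch of what such proximity-and-accuracy-based aggregation could look like; the client records and the `radius_km` threshold are illustrative assumptions, not the paper's protocol:

```python
# Toy sketch: each client averages the weights of geographically nearby
# clients, weighting each neighbour by its reported model accuracy.
import math

def select_neighbours(me, clients, radius_km):
    return [c for c in clients
            if c is not me and math.dist(c["pos"], me["pos"]) <= radius_km]

def proximity_aggregate(me, clients, radius_km=5.0):
    peers = select_neighbours(me, clients, radius_km) + [me]
    total_acc = sum(p["accuracy"] for p in peers)
    agg = {k: 0.0 for k in me["weights"]}
    for p in peers:
        w = p["accuracy"] / total_acc
        for k, v in p["weights"].items():
            agg[k] += w * v
    return agg
```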
arXiv Detail & Related papers (2024-07-17T08:44:45Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
To handle the heterogeneity issue, we leverage irrelevant unlabeled public data for communication.
To mitigate catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non Target Distillation.
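A minimal sketch of a non-target distillation loss in the spirit of the description above; the exact masking and weighting used by FCCL+ may differ:

```python
# Hedged sketch: distil only over NON-target classes, so the ground-truth
# logit is excluded and cannot dominate the transferred knowledge.
import torch
import torch.nn.functional as F

def non_target_distillation(student_logits, teacher_logits, targets, T=2.0):
    B, C = student_logits.shape
    keep = ~F.one_hot(targets, C).bool()      # drop each row's target class
    s = student_logits[keep].view(B, C - 1)
    t = teacher_logits[keep].view(B, C - 1)
    return T ** 2 * F.kl_div(
        F.log_softmax(s / T, dim=1),
        F.softmax(t / T, dim=1),
        reduction="batchmean",
    )
```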
arXiv Detail & Related papers (2023-09-28T09:32:27Z)
- UNIDEAL: Curriculum Knowledge Distillation Federated Learning [17.817181326740698]
Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients.
In this paper, we present UNIDEAL, a novel FL algorithm specifically designed to tackle the challenges of cross-domain scenarios.
Our results demonstrate that UNIDEAL achieves superior performance in terms of both model accuracy and communication efficiency.
arXiv Detail & Related papers (2023-09-16T11:30:29Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class-prototype similarity distillation in a federated framework to align the local and global models.
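An illustrative way to quantify the logit drift described above, together with a simple alignment penalty; this is a generic stand-in, not FedCSD's class-prototype formulation:

```python
# Hedged sketch: track how far local logits drift from the frozen global
# model's logits, and penalise that drift during local training.
import torch
import torch.nn.functional as F

def logit_drift(local_logits, global_logits):
    """Mean absolute gap between local and global logits on a batch."""
    return (local_logits - global_logits).abs().mean().item()

def drift_penalty(x, local_model, global_model):
    with torch.no_grad():
        global_logits = global_model(x)
    return F.mse_loss(local_model(x), global_logits)
```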
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
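A toy simulation of the over-the-air principle (simultaneously transmitted signals superpose on the channel, so the server receives their sum directly); the Gaussian noise model and rescaling are assumptions for illustration:

```python
# Toy sketch: clients transmit parameter vectors simultaneously; the server
# receives their analog superposition plus channel noise and rescales it.
import numpy as np

def over_the_air_aggregate(client_params, noise_std=0.01, seed=0):
    rng = np.random.default_rng(seed)
    superposed = np.sum(np.stack(client_params), axis=0)  # channel adds signals
    received = superposed + rng.normal(0.0, noise_std, superposed.shape)
    return received / len(client_params)  # noisy estimate of the average
```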
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- CDKT-FL: Cross-Device Knowledge Transfer using Proxy Dataset in Federated Learning [27.84845136697669]
We develop a novel knowledge distillation-based approach to study the extent of knowledge transfer between the global model and local models.
We show that the proposed method achieves significant speedups and strong personalized performance for local models.
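A hedged sketch of proxy-dataset knowledge transfer: models exchange soft predictions on a shared proxy set rather than raw client data; the function names and averaging-free formulation are illustrative assumptions:

```python
# Toy sketch: knowledge moves between global and local models through soft
# labels computed on a shared proxy dataset, not through the clients' data.
import torch
import torch.nn.functional as F

@torch.no_grad()
def proxy_soft_labels(model, proxy_batches, T=2.0):
    model.eval()
    return torch.cat([F.softmax(model(x) / T, dim=1) for x in proxy_batches])

def distill_from_proxy(model, proxy_batches, soft_labels, T=2.0):
    """KD loss pulling `model` toward soft labels gathered on the proxy set."""
    logits = torch.cat([model(x) for x in proxy_batches])
    return T ** 2 * F.kl_div(
        F.log_softmax(logits / T, dim=1), soft_labels, reduction="batchmean"
    )
```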
arXiv Detail & Related papers (2022-04-04T14:49:19Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
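An illustrative two-level (region, then cloud) aggregation sketch consistent with the hierarchical structure mentioned above; the grouping and plain averaging are assumptions:

```python
# Toy sketch: clients are first averaged within their region at an edge
# server, then the regional models are averaged into the global model.
import numpy as np

def average(models):
    return np.mean(np.stack(models), axis=0)

def hierarchical_aggregate(regions):
    """`regions` maps a region name to that region's list of client models."""
    regional = {name: average(models) for name, models in regions.items()}
    return average(list(regional.values())), regional
```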
arXiv Detail & Related papers (2020-12-01T11:46:03Z)