UNIDEAL: Curriculum Knowledge Distillation Federated Learning
- URL: http://arxiv.org/abs/2309.08961v1
- Date: Sat, 16 Sep 2023 11:30:29 GMT
- Title: UNIDEAL: Curriculum Knowledge Distillation Federated Learning
- Authors: Yuwen Yang, Chang Liu, Xun Cai, Suizhi Huang, Hongtao Lu, Yue Ding
- Abstract summary: Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients.
In this paper, we present UNIDEAL, a novel FL algorithm specifically designed to tackle the challenges of cross-domain scenarios.
Our results demonstrate that UNI achieves superior performance in terms of both model accuracy and communication efficiency.
- Score: 17.817181326740698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has emerged as a promising approach to enable
collaborative learning among multiple clients while preserving data privacy.
However, cross-domain FL tasks, where clients possess data from different
domains or distributions, remain a challenging problem due to the inherent
heterogeneity. In this paper, we present UNIDEAL, a novel FL algorithm
specifically designed to tackle the challenges of cross-domain scenarios and
heterogeneous model architectures. The proposed method introduces Adjustable
Teacher-Student Mutual Evaluation Curriculum Learning, which significantly
enhances the effectiveness of knowledge distillation in FL settings. We conduct
extensive experiments on various datasets, comparing UNIDEAL with
state-of-the-art baselines. Our results demonstrate that UNIDEAL achieves
superior performance in terms of both model accuracy and communication
efficiency. Additionally, we provide a convergence analysis of the algorithm,
showing a convergence rate of O(1/T) under non-convex conditions.
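No code accompanies this listing, so the following is a minimal sketch of how one teacher-student mutual-evaluation curriculum distillation step might look on a shared public batch. The agreement-based teacher weighting, the divergence-based difficulty score, and the linear pacing schedule are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def curriculum_distill(student_logits, teacher_logits_list, round_t, total_rounds, temp=2.0):
    """One distillation step on a shared public batch.

    student_logits:       (N, C) logits from the local (student) model.
    teacher_logits_list:  list of (N, C) logits from peer (teacher) models.
    The mutual-evaluation rule and pacing schedule below are assumptions.
    """
    p_student = softmax(student_logits, temp)
    teacher_probs = [softmax(t, temp) for t in teacher_logits_list]

    # Mutual evaluation (assumed): weight each teacher by how well it
    # agrees with the student, so unreliable teachers count less.
    agreements = np.array([np.exp(-kl(p_student, q).mean()) for q in teacher_probs])
    weights = agreements / agreements.sum()
    p_teacher = sum(w * q for w, q in zip(weights, teacher_probs))

    # Curriculum (assumed): per-sample difficulty = teacher/student divergence;
    # keep only the easiest fraction, growing linearly over rounds.
    difficulty = kl(p_teacher, p_student)
    frac = min(1.0, 0.3 + 0.7 * round_t / total_rounds)
    keep = np.argsort(difficulty)[: max(1, int(frac * len(difficulty)))]

    # Distillation loss on the selected (currently "easy") samples only.
    return kl(p_teacher[keep], p_student[keep]).mean()

# Toy usage: 3 teacher clients, 8 public samples, 5 classes.
rng = np.random.default_rng(0)
loss = curriculum_distill(rng.normal(size=(8, 5)),
                          [rng.normal(size=(8, 5)) for _ in range(3)],
                          round_t=2, total_rounds=10)
print(f"distillation loss: {loss:.4f}")
```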
Related papers
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It achieves substantial and consistent improvements over state-of-the-art FL baselines under extensive sources of heterogeneity.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- Federated Learning Can Find Friends That Are Advantageous
In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.
We introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective.
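As a rough illustration of adaptive aggregation weights, here is a sketch in which the server scores each client update by its cosine alignment with a reference gradient computed on a small server-held validation batch; both the reference gradient and the softmax weighting are assumptions, not the paper's exact rule.

```python
import numpy as np

def adaptive_aggregate(client_updates, reference_grad, temp=5.0):
    """Weight client updates by alignment with a reference direction.

    client_updates:  list of flat update vectors, one per client.
    reference_grad:  gradient of the target objective (e.g., from a small
                     server-held validation batch) -- an assumed proxy for
                     "conducive to the learning objective".
    """
    ref = reference_grad / (np.linalg.norm(reference_grad) + 1e-12)
    sims = np.array([u @ ref / (np.linalg.norm(u) + 1e-12) for u in client_updates])
    # Softmax over similarities: well-aligned clients get larger weights.
    w = np.exp(temp * sims)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, client_updates)), w

rng = np.random.default_rng(1)
ref = rng.normal(size=100)
updates = [ref + rng.normal(scale=s, size=100) for s in (0.1, 1.0, 5.0)]
agg, weights = adaptive_aggregate(updates, ref)
print("aggregation weights:", np.round(weights, 3))  # noisier clients weigh less
```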
arXiv Detail & Related papers (2024-02-07T17:46:37Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification
Federated learning (FL) has been recognized as a rapidly growing research area, where a model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and the accompanying analyses are also presented.
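The sparsification ingredient can be pictured with a generic top-k sparsifier applied to client updates before upload; whether this paper uses top-k specifically is an assumption.

```python
import numpy as np

def topk_sparsify(update, k_frac=0.1):
    """Keep only the largest-magnitude k fraction of update entries.

    A generic compressor: the paper pairs sparsification with a
    privacy-preserving primal-dual method; top-k is one common choice,
    assumed here purely for illustration.
    """
    k = max(1, int(k_frac * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse

rng = np.random.default_rng(2)
u = rng.normal(size=1000)
s = topk_sparsify(u, k_frac=0.05)
print(f"nonzeros: {np.count_nonzero(s)} / {u.size}")
```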
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
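A minimal single-client sketch of an AMSGrad step with a locally scheduled learning rate is below; the 1/sqrt(t) decay stands in for FedLALR's actual auto-tuning rule, which is more involved.

```python
import numpy as np

class LocalAMSGrad:
    """Per-client AMSGrad with a client-specific, locally scheduled step size.

    Each client keeps its own moment estimates and scales its base learning
    rate by a local schedule (a simple 1/sqrt(t) decay here, assumed for
    illustration -- FedLALR's actual rule differs).
    """
    def __init__(self, dim, base_lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.v_hat = np.zeros(dim)      # running max of v: the AMSGrad fix
        self.base_lr, self.b1, self.b2, self.eps, self.t = base_lr, b1, b2, eps, 0

    def step(self, params, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad**2
        self.v_hat = np.maximum(self.v_hat, self.v)
        lr = self.base_lr / np.sqrt(self.t)          # client-local schedule
        return params - lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Toy local objective on one client: minimize ||x - 3||^2.
opt, x = LocalAMSGrad(dim=1, base_lr=0.5), np.array([0.0])
for _ in range(200):
    x = opt.step(x, 2 * (x - 3.0))
print("x after local steps:", x)
```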
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
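A minimal sketch of the loss-based curriculum such studies typically build on: order local samples easy-to-hard by a difficulty score and reveal more of them each round. The loss-based scoring and the linear pacing function are assumptions.

```python
import numpy as np

def curriculum_order(losses, round_t, total_rounds):
    """Order local samples easy-to-hard and reveal more each round.

    losses: per-sample losses from a pretrained scoring model (one common
    way to define difficulty; assumed here).
    Returns indices of the samples to train on this round.
    """
    order = np.argsort(losses)                            # easy first
    frac = min(1.0, 0.2 + 0.8 * round_t / total_rounds)   # linear pacing (assumed)
    return order[: max(1, int(frac * len(losses)))]

rng = np.random.default_rng(3)
losses = rng.exponential(size=20)
print("round 1 subset:", curriculum_order(losses, 1, 10))
print("round 10 subset size:", len(curriculum_order(losses, 10, 10)))
```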
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in the cross-silo FL setting.
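The estimator underlying this line of work can be illustrated with a single-client STORM-style recursion, which reuses the same minibatch at two consecutive iterates so stale noise cancels; the federated and adaptive components of FAFED are omitted from this reduction.

```python
import numpy as np

# Momentum-based variance reduction (STORM-style):
#   d_t = g(x_t) + (1 - a) * (d_{t-1} - g(x_{t-1})),
# with BOTH gradients evaluated on the same minibatch so noise cancels.
rng = np.random.default_rng(4)

def noisy_grad(x, noise):
    # True objective 0.5 * x^2, so the exact gradient is x; the shared
    # `noise` term stands in for minibatch sampling noise.
    return x + noise

a, lr = 0.1, 0.05
x_prev = x = 5.0
d = 0.0
for t in range(1, 201):
    noise = rng.normal(scale=0.5)
    if t == 1:
        d = noisy_grad(x, noise)
    else:
        # Same-sample gradients: the noise cancels in the correction term.
        d = noisy_grad(x, noise) + (1 - a) * (d - noisy_grad(x_prev, noise))
    x_prev, x = x, x - lr * d
print("x after variance-reduced steps:", round(x, 3))
```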
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Federated Ensemble Model-based Reinforcement Learning in Edge Computing [21.840086997141498]
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm.
We propose a novel federated reinforcement learning (FRL) algorithm that effectively incorporates model-based RL and ensemble knowledge distillation into FL for the first time.
Specifically, we utilise FL and knowledge distillation to create an ensemble of dynamics models for clients, and then train the policy by solely using the ensemble model without interacting with the environment.
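A toy sketch of the ensemble idea: average next-state predictions from per-client dynamics models and roll the policy out in the ensemble instead of the real environment. Linear dynamics models and mean aggregation are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

class EnsembleDynamics:
    """Ensemble of per-client dynamics models s' = f_i(s, a).

    The paper distils client models into an ensemble and trains the policy
    purely on ensemble rollouts; simple linear models (s' = A s + B a) and
    mean aggregation here are illustrative assumptions.
    """
    def __init__(self, client_models):
        self.models = client_models          # list of (A_i, B_i) pairs

    def predict(self, s, a):
        preds = [A @ s + B @ a for A, B in self.models]
        return np.mean(preds, axis=0)        # ensemble prediction

    def rollout(self, policy, s0, horizon=5):
        s, traj = s0, []
        for _ in range(horizon):             # no real-environment interaction
            a = policy(s)
            s = self.predict(s, a)
            traj.append((s, a))
        return traj

rng = np.random.default_rng(5)
models = [(np.eye(2) * 0.9 + rng.normal(scale=0.01, size=(2, 2)),
           rng.normal(scale=0.1, size=(2, 1))) for _ in range(3)]
ens = EnsembleDynamics(models)
traj = ens.rollout(lambda s: -0.5 * s[:1], s0=np.ones(2), horizon=3)
print("final state:", np.round(traj[-1][0], 3))
```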
arXiv Detail & Related papers (2021-09-12T16:19:10Z)
- Improving Federated Relational Data Modeling via Basis Alignment and Weight Penalty [18.096788806121754]
Federated learning (FL) has attracted increasing attention in recent years.
We present a modified version of the graph neural network algorithm that performs federated modeling over Knowledge Graphs (KGs).
We propose a novel optimization algorithm, named FedAlign, with 1) optimal transportation (OT) for on-client personalization and 2) a weight constraint to speed up convergence.
Empirical results show that our proposed method outperforms the state-of-the-art FL methods, such as FedAVG and FedProx, with better convergence.
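The weight-constraint component can be pictured as a proximal penalty pulling local weights toward the global model during client training; this FedProx-style stand-in is an assumption, and the OT-based personalization is not reproduced here.

```python
import numpy as np

def penalized_local_grad(grad_task, w_local, w_global, mu=0.1):
    """Local gradient with a weight-constraint penalty.

    Adds mu * (w_local - w_global), i.e. the gradient of
    (mu/2) * ||w_local - w_global||^2, discouraging client drift from the
    global model. A FedProx-style proximal term, used as a stand-in for
    FedAlign's weight constraint (an assumption).
    """
    return grad_task + mu * (w_local - w_global)

w_g = np.zeros(4)
w_l = np.array([1.0, -2.0, 0.5, 0.0])
g = penalized_local_grad(np.ones(4), w_l, w_g, mu=0.5)
print("penalized gradient:", g)
```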
arXiv Detail & Related papers (2020-11-23T12:52:18Z)
- A Theoretical Perspective on Differentially Private Federated Multi-task Learning [12.935153199667987]
Collaborative learning models need to be developed with respect to both privacy and utility concerns.
We propose a new federated multi-task learning method for effective parameter transfer, with differential privacy applied to protect gradients at the client level.
We are the first to provide both privacy and utility guarantees for such a proposed algorithm.
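Client-level protection of this kind is usually built from the Gaussian mechanism: clip each client's update, then add calibrated noise before it leaves the device. The sketch below shows that standard construction; the paper's exact mechanism for multi-task parameter transfer may differ.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Client-level Gaussian mechanism: clip, then add calibrated noise.

    Clipping bounds each client's contribution (sensitivity <= clip_norm);
    Gaussian noise with std = noise_mult * clip_norm then yields an
    (eps, delta)-DP guarantee for a suitable noise_mult. This is the
    standard construction, assumed here as the client-level protection.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_mult * clip_norm, size=update.shape)

rng = np.random.default_rng(6)
u = rng.normal(size=10) * 3.0
print("sanitized update:", np.round(dp_sanitize(u, rng=rng), 2))
```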
arXiv Detail & Related papers (2020-11-14T00:53:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.