FedHeN: Federated Learning in Heterogeneous Networks
- URL: http://arxiv.org/abs/2207.03031v1
- Date: Thu, 7 Jul 2022 01:08:35 GMT
- Title: FedHeN: Federated Learning in Heterogeneous Networks
- Authors: Durmus Alp Emre Acar, Venkatesh Saligrama
- Abstract summary: We propose a novel training recipe for federated learning with heterogeneous networks.
We introduce a side objective for higher-complexity devices to jointly train different architectures in a federated setting.
- Score: 52.29110497518558
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel training recipe for federated learning with heterogeneous
networks, where each device can have a different architecture. We introduce a
side objective for higher-complexity devices so that different architectures can
be trained jointly in a federated setting. We empirically show that our approach
improves the performance of the different architectures and yields large
communication savings compared to state-of-the-art methods.
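The abstract only outlines the recipe, so the following is a minimal, hedged sketch of what joint federated training of heterogeneous architectures with a side objective could look like. The nested SmallNet/LargeNet classes, the side_weight coefficient, and the FedAvg-style aggregate helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a FedHeN-style recipe as described in the abstract. The nested
# SmallNet/LargeNet design, the side_weight coefficient, and the FedAvg-style
# aggregation are illustrative assumptions, not the authors' implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallNet(nn.Module):
    """Hypothetical low-complexity architecture for weaker devices."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(32, 10)

    def forward(self, x):
        return self.fc(x)


class LargeNet(nn.Module):
    """Hypothetical high-complexity architecture embedding SmallNet as a sub-network."""
    def __init__(self):
        super().__init__()
        self.extra = nn.Linear(32, 32)  # layers only the large model has
        self.small = SmallNet()         # sub-network shared with weaker devices

    def forward(self, x):
        return self.small(self.extra(x))

    def side_forward(self, x):
        # Side objective path: run the embedded small architecture directly.
        return self.small(x)


def local_update(model, loader, lr=0.01, side_weight=1.0):
    """One client's local pass; higher-complexity clients add the side objective."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in loader:
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        if isinstance(model, LargeNet):
            loss = loss + side_weight * F.cross_entropy(model.side_forward(x), y)
        loss.backward()
        opt.step()
    return model.state_dict()


def aggregate(states):
    """FedAvg-style averaging over clients that share the same architecture."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg
```

Whether aggregation happens per architecture or only over shared sub-network parameters is not specified in the abstract, so that choice is left open in this sketch.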
Related papers
- Towards Mitigating Architecture Overfitting in Dataset Distillation [2.7610336610850292]
We propose a series of approaches spanning both architecture design and training schemes to boost generalization performance.
We conduct extensive experiments to demonstrate the effectiveness and generality of our methods.
arXiv Detail & Related papers (2023-09-08T08:12:29Z) - Heterogeneous Continual Learning [88.53038822561197]
We propose a novel framework to tackle the continual learning (CL) problem with changing network architectures.
We build on top of the distillation family of techniques and modify it to a new setting where a weaker model takes the role of a teacher.
We also propose Quick Deep Inversion (QDI) to recover prior task visual features to support knowledge transfer.
arXiv Detail & Related papers (2023-06-14T15:54:42Z) - Multi-Level Branched Regularization for Federated Learning [46.771459325434535]
We propose a novel architectural regularization technique that constructs multiple auxiliary branches in each local model by grafting local and global subnetworks at several different levels.
We demonstrate remarkable performance gains in terms of accuracy and efficiency compared to existing methods.
arXiv Detail & Related papers (2022-07-14T13:59:26Z) - Supernet Training for Federated Image Classification under System Heterogeneity [15.2292571922932]
In this work, we propose a novel framework to address both scenarios, namely Federation of Supernet Training (FedSup).
It is inspired by how averaging parameters in the model aggregation stage of Federated Learning (FL) is similar to weight-sharing in supernet training.
Under our framework, we present an efficient algorithm (E-FedSup) that sends sub-models to clients in the broadcast stage to reduce communication costs and training overhead (see the sketch after this list).
arXiv Detail & Related papers (2022-06-03T02:21:01Z) - FedHe: Heterogeneous Models and Communication-Efficient Federated
Learning [0.0]
Federated learning (FL) is able to manage edge devices so that they cooperatively train a model while keeping the training data local and private.
We propose a novel FL method, called FedHe, inspired by knowledge distillation, which can train heterogeneous models and support asynchronous training processes.
arXiv Detail & Related papers (2021-10-19T12:18:37Z) - Rethinking Architecture Design for Tackling Data Heterogeneity in
Federated Learning [53.73083199055093]
We show that attention-based architectures (e.g., Transformers) are fairly robust to distribution shifts.
Our experiments show that replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices.
arXiv Detail & Related papers (2021-06-10T21:04:18Z) - Joint Search of Data Augmentation Policies and Network Architectures [4.887917220146243]
The proposed method combines differentiable methods for augmentation policy search and network architecture search to jointly optimize them in an end-to-end manner.
Experimental results show that our method achieves performance competitive with or superior to independently searched results.
arXiv Detail & Related papers (2020-12-17T06:09:44Z) - From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z) - Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
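The FedSup/E-FedSup entry above rests on the analogy between parameter averaging in FL aggregation and weight-sharing in supernet training. As a rough illustration of that idea (not the paper's algorithm), the sketch below slices a narrower sub-model out of a shared supernet for broadcast and then averages client updates back into the overlapping slice; the slicing rule, helper names, and single shared width ratio are all assumptions.

```python
# Rough illustration of a supernet-style broadcast/aggregation loop, loosely
# inspired by the E-FedSup description above. The leading-slice rule and the
# single shared width_ratio are assumptions made purely for illustration.
import torch


def extract_submodel(supernet_state, width_ratio):
    """Slice the leading rows/columns of each weight tensor to form a sub-model."""
    sub = {}
    for name, w in supernet_state.items():
        if w.dim() >= 2:
            out_dim = max(1, int(w.shape[0] * width_ratio))
            in_dim = max(1, int(w.shape[1] * width_ratio))
            sub[name] = w[:out_dim, :in_dim].clone()
        elif w.dim() == 1:
            out_dim = max(1, int(w.shape[0] * width_ratio))
            sub[name] = w[:out_dim].clone()
        else:
            sub[name] = w.clone()
    return sub


def fold_back(supernet_state, client_states):
    """Average client updates into the overlapping slice of the shared supernet."""
    new_state = {k: v.clone() for k, v in supernet_state.items()}
    for name in new_state:
        updates = [cs[name] for cs in client_states if name in cs]
        if not updates:
            continue
        # All clients are assumed to use the same width, so their slices align.
        stacked = torch.stack([u.float() for u in updates]).mean(dim=0)
        region = tuple(slice(0, s) for s in stacked.shape)
        new_state[name][region] = stacked.to(new_state[name].dtype)
    return new_state
```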