Efficient Fully Distributed Federated Learning with Adaptive Local Links
- URL: http://arxiv.org/abs/2203.12281v1
- Date: Wed, 23 Mar 2022 09:03:54 GMT
- Title: Efficient Fully Distributed Federated Learning with Adaptive Local Links
- Authors: Evangelos Georgatos, Christos Mavrokefalidis, Kostas Berberidis
- Abstract summary: We propose a fully distributed, diffusion-based learning algorithm that does not require a central server.
Using a classification task on the MNIST dataset, the efficacy of the proposed algorithm is demonstrated.
- Score: 1.8416014644193066
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Nowadays, data-driven, machine and deep learning approaches have provided
unprecedented performance in various complex tasks, including image
classification and object detection, and in a variety of application areas,
like autonomous vehicles, medical imaging and wireless communications.
Traditionally, such approaches have been deployed, along with the involved
datasets, on standalone devices. Recently, a shift has been observed towards
the so-called Edge Machine Learning, in which centralized architectures are
adopted that allow multiple devices with local computational and storage
resources to collaborate with the assistance of a centralized server. The
well-known federated learning approach is able to utilize such architectures by
allowing the exchange of only parameters with the server, while keeping the
datasets private to each contributing device. In this work, we propose a fully
distributed, diffusion-based learning algorithm that does not require a central
server and propose an adaptive combination rule for the cooperation of the
devices. Using a classification task on the MNIST dataset, the efficacy of
the proposed algorithm over corresponding counterparts is demonstrated via
the reduction in the number of collaboration rounds required to achieve an
acceptable accuracy level in non-IID dataset scenarios.
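As a rough illustration of the diffusion-based scheme summarized above, the sketch below implements a generic adapt-then-combine round in which each node weights its neighbours' intermediate models by how well they fit its own local data. The least-squares loss, the softmax weighting, and all names are illustrative assumptions; the paper's adaptive combination rule may differ.

```python
import numpy as np

def local_loss(w, data):
    """Least-squares loss on a node's local data (stand-in for any task)."""
    X, y = data
    r = X @ w - y
    return float(r @ r) / (2 * len(y))

def local_gradient(w, data):
    """Gradient of the local least-squares loss at w."""
    X, y = data
    return X.T @ (X @ w - y) / len(y)

def diffusion_round(weights, neighbors, datasets, mu=0.05):
    """One adapt-then-combine (ATC) diffusion round over a graph.

    weights:   dict node -> parameter vector
    neighbors: dict node -> list of neighbouring nodes (including itself)
    datasets:  dict node -> local (X, y) pair
    """
    # Adapt: each node takes a gradient step on its own data only.
    psi = {k: w - mu * local_gradient(w, datasets[k]) for k, w in weights.items()}
    new_weights = {}
    for k in weights:
        # Combine: weight each neighbour's intermediate model by how well it
        # fits node k's data (softmax over negative local losses -- one
        # plausible adaptive rule, not necessarily the paper's).
        nbrs = neighbors[k]
        a = np.exp(-np.array([local_loss(psi[j], datasets[k]) for j in nbrs]))
        a /= a.sum()
        new_weights[k] = sum(aj * psi[j] for aj, j in zip(a, nbrs))
    return new_weights
```

Weighting neighbours by local fit lets a node discount peers whose data distribution diverges from its own, which is the failure mode that matters in non-IID scenarios.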
Related papers
- Coordination-free Decentralised Federated Learning on Complex Networks: Overcoming Heterogeneity [2.6849848612544]
Federated Learning (FL) is a framework for performing a learning task in an edge computing scenario.
We propose a communication-efficient Decentralised Federated Learning (DFL) algorithm able to cope with such heterogeneity.
Our solution allows devices communicating only with their direct neighbours to train an accurate model.
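For context, a standard coordination-free way for a device to weight its direct neighbours is the Metropolis rule, which needs only local degree information. The sketch below is a generic DFL building block, not necessarily the combination rule used in this paper.

```python
def metropolis_weights(neighbors):
    """Compute Metropolis mixing weights from local degrees only.

    neighbors: dict node -> set of neighbouring nodes (excluding itself)
    Returns:   dict (i, j) -> weight; each row sums to 1.
    """
    deg = {i: len(n) for i, n in neighbors.items()}
    W = {}
    for i, nbrs in neighbors.items():
        off = {j: 1.0 / (1 + max(deg[i], deg[j])) for j in nbrs}
        for j, w in off.items():
            W[(i, j)] = w
        W[(i, i)] = 1.0 - sum(off.values())  # self-weight keeps the row-sum at 1
    return W
```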
arXiv Detail & Related papers (2023-12-07T18:24:19Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
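To make the contrastive-loss idea concrete, here is a minimal NumPy sketch of an InfoNCE-style objective in which a client's embedding of each sample is pulled toward the peers' embedding of the same sample and pushed away from those of other samples. The temperature, normalization, and batching are illustrative assumptions; the paper's loss may differ in detail.

```python
import numpy as np

def contrastive_distillation_loss(z_client, z_peers, tau=0.1):
    """InfoNCE-style contrastive distillation between client and peers.

    z_client: (n, d) client embeddings for a batch of n samples
    z_peers:  (n, d) aggregated peer embeddings of the same samples
    """
    # Cosine-normalize both sets of embeddings.
    a = z_client / np.linalg.norm(z_client, axis=1, keepdims=True)
    b = z_peers / np.linalg.norm(z_peers, axis=1, keepdims=True)
    logits = (a @ b.T) / tau                         # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives on the diagonal
```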
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
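A hedged reconstruction of the GTV objective behind this formulation (the symbols L_i, w_i, A_ij and the choice of norm are assumptions, not copied from the paper): each node i fits its local loss while a weighted penalty couples parameters across the edges of the network, so well-connected clusters are pulled toward a common model.

```latex
\min_{\{w_i\}} \; \sum_{i \in \mathcal{V}} L_i(w_i)
  \;+\; \lambda \sum_{(i,j) \in \mathcal{E}} A_{ij}\, \lVert w_i - w_j \rVert
```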
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
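A linear instantiation may help make the alternating scheme concrete: the shared representation is a matrix B, each client keeps a small head h, and many cheap head steps are taken per representation step. The squared loss, step counts, and names are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def client_update(B, h, X, y, head_steps=10, lr=0.01):
    """One client round: shared representation B (d x k), local head h (k,).

    Many inexpensive head updates per single update of B; only B would be
    shared for aggregation, while h stays on the device.
    """
    n = len(y)
    for _ in range(head_steps):                # head is low-dimensional: cheap
        r = X @ B @ h - y
        h = h - lr * (B.T @ (X.T @ r)) / n
    r = X @ B @ h - y
    B = B - lr * np.outer(X.T @ r, h) / n      # single representation step
    return B, h
```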
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Adaptive Scheduling for Machine Learning Tasks over Networks [1.4271989597349055]
This paper examines algorithms for efficiently allocating resources to linear regression tasks by exploiting the informativeness of the data.
The algorithms developed enable adaptive scheduling of learning tasks with reliable performance guarantees.
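One standard way to quantify the informativeness of data for a linear regression task is A-optimality: prefer the candidate batch that most shrinks trace((X^T X)^-1), a proxy for estimation error. The greedy scheduler below is a sketch under that assumption; the paper's criterion and guarantees may differ.

```python
import numpy as np

def pick_most_informative(gram, candidates):
    """Pick the candidate feature batch that most reduces trace((X^T X)^-1).

    gram:       current d x d Gram matrix X^T X (assumed invertible)
    candidates: list of (n_i, d) feature batches offered by workers
    """
    best, best_score = None, np.inf
    for i, Xi in enumerate(candidates):
        score = np.trace(np.linalg.inv(gram + Xi.T @ Xi))
        if score < best_score:
            best, best_score = i, score
    return best
```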
arXiv Detail & Related papers (2021-01-25T10:59:00Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
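Schematically, each agent alternates a local meta-learning step with a combination step over its neighbourhood. The sketch below uses a first-order MAML approximation and uniform neighbour averaging for brevity; Dif-MAML's exact meta-gradient and combination weights may differ.

```python
def dif_maml_round(weights, neighbors, tasks, alpha=0.01, mu=0.1):
    """One decentralized meta-learning round (adapt, then combine).

    weights:   dict agent -> parameter vector
    neighbors: dict agent -> list of neighbours (including itself)
    tasks:     dict agent -> (grad_support, grad_query) gradient callables
    """
    psi = {}
    for k, w in weights.items():
        grad_support, grad_query = tasks[k]
        w_adapted = w - alpha * grad_support(w)   # inner-loop adaptation
        psi[k] = w - mu * grad_query(w_adapted)   # first-order meta-step
    # Combine: uniform averaging over each agent's closed neighbourhood.
    return {k: sum(psi[j] for j in neighbors[k]) / len(neighbors[k])
            for k in weights}
```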
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- IBM Federated Learning: an Enterprise Framework White Paper V0.1 [28.21579297214125]
Federated Learning (FL) is an approach to conduct machine learning without centralizing training data in a single place.
The framework applies to both Deep Neural Networks and "traditional" approaches for the most common machine learning libraries.
arXiv Detail & Related papers (2020-07-22T05:32:00Z)
- Federated Learning with Cooperating Devices: A Consensus Approach for Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
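The server-less idea can be summarized by a generic consensus-plus-gradient recursion (a standard form, not necessarily the paper's exact update), in which each device i mixes with its neighbours N_i while descending its local loss L_i, with epsilon a consensus step size and mu a learning rate:

```latex
w_i^{(t+1)} \;=\; w_i^{(t)}
  \;+\; \epsilon \sum_{j \in \mathcal{N}_i} \bigl( w_j^{(t)} - w_i^{(t)} \bigr)
  \;-\; \mu \, \nabla L_i\bigl( w_i^{(t)} \bigr)
```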
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.