IPLS : A Framework for Decentralized Federated Learning
- URL: http://arxiv.org/abs/2101.01901v1
- Date: Wed, 6 Jan 2021 07:44:51 GMT
- Title: IPLS : A Framework for Decentralized Federated Learning
- Authors: Christodoulos Pappas, Dimitris Chatzopoulos, Spyros Lalis, Manolis
Vavalis
- Abstract summary: We introduce IPLS, a fully decentralized federated learning framework that is partially based on the InterPlanetary File System (IPFS).
IPLS scales with the number of participants, is robust against intermittent connectivity and dynamic participant departures/arrivals, requires minimal resources, and guarantees that the accuracy of the trained model quickly converges to that of a centralized FL framework with an accuracy drop of less than one per thousand.
- Score: 6.6271520914941435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of resourceful mobile devices that store rich,
multidimensional, and privacy-sensitive user data motivates the design of
federated learning (FL), a machine-learning (ML) paradigm that enables mobile
devices to produce an ML model without sharing their data. However, the
majority of the existing FL frameworks rely on centralized entities. In this
work, we introduce IPLS, a fully decentralized federated learning framework
that is partially based on the InterPlanetary File System (IPFS). By using IPLS
and connecting to the corresponding private IPFS network, any party can
initiate the training process of an ML model or join an ongoing training
process that has already been started by another party. IPLS scales with the
number of participants, is robust against intermittent connectivity and dynamic
participant departures/arrivals, requires minimal resources, and guarantees
that the accuracy of the trained model quickly converges to that of a
centralized FL framework with an accuracy drop of less than one per thousand.
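The core idea behind IPLS, as described in the abstract, is that peers train locally and aggregate the model among themselves, with no central server. A minimal sketch of such decentralized partition-based averaging is shown below (plain Python; the peer layout and partition scheme are illustrative assumptions, and the real framework exchanges updates through a private IPFS network rather than in-process lists):

```python
# Minimal sketch of decentralized model averaging: each peer holds a local
# model (a flat parameter vector) and is responsible for aggregating one
# partition of the global model. No central server is involved.

def partition(vector, n_parts):
    """Split a parameter vector into n_parts contiguous partitions."""
    size = (len(vector) + n_parts - 1) // n_parts
    return [vector[i * size:(i + 1) * size] for i in range(n_parts)]

def average(chunks):
    """Element-wise average of equally shaped parameter chunks."""
    return [sum(vals) / len(vals) for vals in zip(*chunks)]

def decentralized_round(local_models):
    """One synchronization round: peer i averages partition i collected
    from every peer, then the global model is reassembled from all
    aggregated partitions."""
    n = len(local_models)
    parts = [partition(m, n) for m in local_models]          # per-peer splits
    aggregated = [average([parts[p][i] for p in range(n)])   # peer i's duty
                  for i in range(n)]
    return [x for chunk in aggregated for x in chunk]        # reassembled

# Three peers with divergent local updates converge to their average.
peers = [[1.0, 2.0, 3.0, 4.0], [3.0, 2.0, 1.0, 0.0], [2.0, 2.0, 2.0, 2.0]]
print(decentralized_round(peers))  # → [2.0, 2.0, 2.0, 2.0]
```

Splitting aggregation duties across peers is what lets such a scheme scale with the number of participants: no single node ever aggregates the whole model.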
Related papers
- Fedstellar: A Platform for Decentralized Federated Learning [10.014744081331672]
In 2016, Google proposed Federated Learning (FL) as a novel paradigm to train Machine Learning (ML) models across the participants of a federation.
This paper presents Fedstellar, a platform designed to train FL models in a decentralized, semi-decentralized, and centralized fashion across diverse federations.
arXiv Detail & Related papers (2023-06-16T10:34:49Z)
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems have to face large communication burdens if the central FL server fails.
It personalizes the "right" components of deep models by alternately updating the shared and personal parameters.
To further improve the aggregation of the shared parameters, we propose DFed, which integrates local Sharpness Minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
- Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Enhanced Decentralized Federated Learning based on Consensus in Connected Vehicles [14.80476265018825]
Federated learning (FL) is emerging as a new paradigm to train machine learning (ML) models in distributed systems.
We introduce C-DFL (Consensus based Decentralized Federated Learning) to tackle federated learning on connected vehicles.
arXiv Detail & Related papers (2022-09-22T01:21:23Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and analyze its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Advancements of federated learning towards privacy preservation: from federated learning to split learning [1.3700362496838854]
In the distributed collaborative machine learning (DCML) paradigm, federated learning (FL) has recently attracted much attention due to its applications in health, finance, and the latest innovations such as Industry 4.0 and smart vehicles.
In practical scenarios, not all clients have sufficient computing resources (e.g., Internet of Things devices), the machine learning model has millions of parameters, and privacy between the server and the clients is a prime concern.
Recently, a hybrid of FL and SL, called splitfed learning, has been introduced to combine the benefits of both FL (faster training/testing time) and SL (model split and
arXiv Detail & Related papers (2020-11-25T05:01:33Z)
- Federated Learning with Cooperating Devices: A Consensus Approach for Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.