Edge-assisted Democratized Learning Towards Federated Analytics
- URL: http://arxiv.org/abs/2012.00425v2
- Date: Wed, 3 Mar 2021 03:08:31 GMT
- Title: Edge-assisted Democratized Learning Towards Federated Analytics
- Authors: Shashi Raj Pandey, Minh N.H. Nguyen, Tri Nguyen Dang, Nguyen H. Tran,
Kyi Thar, Zhu Han, Choong Seon Hong
- Abstract summary: We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A recent take towards Federated Analytics (FA), which allows analytical
insights of distributed datasets, reuses the Federated Learning (FL)
infrastructure to evaluate the summary of model performances across the
training devices. However, the current realization of FL adopts a
single-server, multiple-client architecture with limited scope for FA, which
often results in learning models with poor generalization, i.e., a limited
ability to handle new/unseen data, in real-world applications. Moreover, a
hierarchical FL structure with distributed computing platforms demonstrates
incoherent model performances at different aggregation levels. Therefore, we
need to design a learning mechanism more robust than FL that (i) unleashes a
viable infrastructure for FA and (ii) trains learning models with better
generalization capability. In this work, we adopt the novel democratized
learning (Dem-AI) principles and designs to meet these objectives. Firstly, we
show the hierarchical learning structure of the proposed edge-assisted
democratized learning mechanism, namely Edge-DemLearn, as a practical framework
to empower generalization capability in support of FA. Secondly, we validate
Edge-DemLearn as a flexible model training mechanism to build a distributed
control and aggregation methodology in regions by leveraging the distributed
computing infrastructure. The distributed edge computing servers construct
regional models, minimize communication loads, and ensure the scalability of
distributed data analytics applications. To that end, we adopt a near-optimal
two-sided many-to-one matching approach to handle the combinatorial constraints
in Edge-DemLearn, solving it for fast knowledge acquisition by jointly
optimizing resource allocation and the associations between multiple servers and devices.
Extensive simulation results on real datasets demonstrate the effectiveness of
the proposed methods.
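The two core mechanisms in the abstract, two-level (device, edge, cloud) aggregation and many-to-one matching of devices to edge servers, can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes FedAvg-style weighted averaging at both levels and uses a toy greedy proposal rule as a stand-in for the near-optimal two-sided matching; all function and variable names are illustrative.

```python
def weighted_average(models, weights):
    """FedAvg-style aggregation: weight each model by its data share."""
    total = sum(weights)
    return sum(w * m for m, w in zip(models, weights)) / total

def assign_devices(device_prefs, server_capacity):
    """Toy many-to-one matching: each device proposes to servers in its
    preference order; a server accepts proposals until its capacity is
    full. (A stand-in for the paper's near-optimal two-sided matching.)"""
    load = {s: [] for s in server_capacity}
    for dev, prefs in device_prefs.items():
        for s in prefs:
            if len(load[s]) < server_capacity[s]:
                load[s].append(dev)
                break
    return load

def edge_demlearn_round(device_models, device_sizes, assignment):
    """Two-level aggregation: devices -> regional edge models -> global."""
    # Each edge server aggregates only the devices matched to it.
    regional = {}
    for server, devs in assignment.items():
        if devs:
            regional[server] = weighted_average(
                [device_models[d] for d in devs],
                [device_sizes[d] for d in devs])
    # The cloud aggregates regional models, weighted by regional data volume.
    region_sizes = {s: sum(device_sizes[d] for d in devs)
                    for s, devs in assignment.items() if devs}
    global_model = weighted_average(list(regional.values()),
                                    [region_sizes[s] for s in regional])
    return regional, global_model
```

Because devices only communicate with their matched edge server, the uplink to the cloud carries one model per region rather than one per device, which is how the regional construction reduces communication load.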
Related papers
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning [22.310090483499035]
Federated learning (FL) enables edge-devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET in which small models are trained on clients, and used to train a larger model at the server.
arXiv Detail & Related papers (2022-04-27T05:18:32Z)
- Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)