Auction Based Clustered Federated Learning in Mobile Edge Computing
System
- URL: http://arxiv.org/abs/2103.07150v1
- Date: Fri, 12 Mar 2021 08:54:27 GMT
- Title: Auction Based Clustered Federated Learning in Mobile Edge Computing
System
- Authors: Renhao Lu, Weizhe Zhang, Qiong Li, Xiaoxiong Zhong and Athanasios V.
Vasilakos
- Abstract summary: Federated learning is a distributed machine learning solution that uses local computing and local data to train the Artificial Intelligence (AI) model.
We propose a cluster-based client selection method that can generate a federated virtual dataset satisfying the global distribution.
We show that our proposed selection methods and auction-based federated learning achieve better performance with a Convolutional Neural Network (CNN) model under different data distributions.
- Score: 13.710325615076687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, mobile clients' computing ability and storage capacity have
greatly improved, allowing some applications to be handled efficiently on-device.
Federated learning is a promising distributed machine learning solution that uses local
computing and local data to train the Artificial Intelligence (AI) model. Combining
local computing and federated learning can train a powerful AI model that makes full
use of mobile clients' resources while preserving local data privacy. However, the
heterogeneity of local data, namely non-independent and identically distributed
(non-IID) data and imbalanced local data sizes, can create a bottleneck that hinders
the application of federated learning in mobile edge computing (MEC) systems. Motivated
by this, we propose a cluster-based client selection method that generates a federated
virtual dataset satisfying the global distribution to offset the impact of data
heterogeneity, and we prove that the proposed scheme converges to an approximately
optimal solution. Building on the clustering method, we propose an auction-based client
selection scheme within each cluster that fully accounts for the system's energy
heterogeneity and derives the Nash equilibrium of the proposed scheme, balancing energy
consumption and improving the convergence rate. Simulation results show that our
proposed selection methods and auction-based federated learning achieve better
performance with a Convolutional Neural Network (CNN) model under different data
distributions.
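To make the abstract's two ideas concrete, below is a minimal NumPy sketch of (i) clustering clients by their local label distributions so that the selected clients form a "federated virtual dataset" close to the global distribution, and (ii) a simple reverse auction inside each cluster that favors low-energy, data-rich clients. The clustering routine, the bid rule, and all names are illustrative assumptions, not the authors' exact algorithm or its Nash-equilibrium analysis.

```python
# Sketch only: cluster-then-auction client selection under assumed, toy rules.
import numpy as np

rng = np.random.default_rng(0)


def label_histograms(client_labels, num_classes):
    """Normalized label histogram (empirical label distribution) per client."""
    hists = np.zeros((len(client_labels), num_classes))
    for i, labels in enumerate(client_labels):
        counts = np.bincount(labels, minlength=num_classes)
        hists[i] = counts / counts.sum()
    return hists


def kmeans(points, k, iters=50):
    """Plain k-means on the label histograms (stand-in for the clustering step)."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    assign = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        assign = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            members = points[assign == c]
            if len(members) > 0:
                centers[c] = members.mean(axis=0)
    return assign


def reverse_auction(cluster_ids, energy_cost, data_size, per_cluster=1):
    """Within each cluster, select the clients with the cheapest toy 'bids'.

    The bid is energy cost discounted by local data size: prefer clients that
    are cheap to run but hold many samples (illustrative rule only).
    """
    selected = []
    for c in np.unique(cluster_ids):
        members = np.where(cluster_ids == c)[0]
        bids = energy_cost[members] / np.maximum(data_size[members], 1)
        winners = members[np.argsort(bids)[:per_cluster]]
        selected.extend(winners.tolist())
    return selected


# Toy non-IID setup: 20 clients, 10 classes, skewed local label mixes.
num_clients, num_classes = 20, 10
client_labels = [
    rng.choice(num_classes, size=rng.integers(50, 500),
               p=rng.dirichlet(0.3 * np.ones(num_classes)))
    for _ in range(num_clients)
]
hists = label_histograms(client_labels, num_classes)
clusters = kmeans(hists, k=4)

energy_cost = rng.uniform(1.0, 5.0, size=num_clients)  # assumed per-round energy bids
data_size = np.array([len(x) for x in client_labels])   # local dataset sizes

chosen = reverse_auction(clusters, energy_cost, data_size, per_cluster=2)

# The union of the chosen clients' data plays the role of the "federated
# virtual dataset"; its (unweighted) average label mix should sit closer to
# the global distribution than an arbitrary subset of the same size would.
virtual_dist = hists[chosen].mean(axis=0)
global_dist = label_histograms([np.concatenate(client_labels)], num_classes)[0]
print("selected clients:", chosen)
print("L1 gap to global distribution:", np.abs(virtual_dist - global_dist).sum())
```

In a real MEC deployment the bids would reflect measured per-round energy consumption and the equilibrium analysis described in the paper, rather than the toy cost-to-size ratio used in this sketch.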
Related papers
- Fed-QSSL: A Framework for Personalized Federated Learning under Bitwidth
and Data Heterogeneity [14.313847382199059]
A federated quantization-based self-supervised learning scheme (Fed-QSSL) is designed to address heterogeneity in FL systems.
Fed-QSSL deploys de-quantization, weighted aggregation and re-quantization, ultimately creating models personalized to both data distribution and specific infrastructure of each client's device.
arXiv Detail & Related papers (2023-12-20T19:11:19Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Federated cINN Clustering for Accurate Clustered Federated Learning [33.72494731516968]
Federated Learning (FL) presents an innovative approach to privacy-preserving distributed machine learning.
We propose the Federated cINN Clustering Algorithm (FCCA) to robustly cluster clients into different groups.
arXiv Detail & Related papers (2023-09-04T10:47:52Z) - Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z) - Reinforcement Federated Learning Method Based on Adaptive OPTICS
Clustering [19.73560248813166]
This paper proposes an adaptive OPTICS clustering algorithm for federated learning.
By perceiving the clustering environment as a Markov decision process, the goal is to find the best parameters of the OPTICS cluster.
The reliability and practicability of this method have been verified on experimental data, and its effectiveness and superiority have been demonstrated.
arXiv Detail & Related papers (2023-06-22T13:11:19Z) - SphereFed: Hyperspherical Federated Learning [22.81101040608304]
A key challenge is the handling of non-i.i.d. data across multiple clients.
We introduce the Hyperspherical Federated Learning (SphereFed) framework to address the non-i.i.d. issue.
We show that the calibration solution can be computed efficiently and in a distributed manner without direct access to local data.
arXiv Detail & Related papers (2022-07-19T17:13:06Z) - Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z) - Communication-Efficient Hierarchical Federated Learning for IoT
Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z) - Clustered Federated Learning via Generalized Total Variation
Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z) - Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.