Multi-Tier Client Selection for Mobile Federated Learning Networks
- URL: http://arxiv.org/abs/2305.06865v1
- Date: Thu, 11 May 2023 15:06:08 GMT
- Title: Multi-Tier Client Selection for Mobile Federated Learning Networks
- Authors: Yulan Gao, Yansong Zhao, and Han Yu
- Abstract summary: We propose a first-of-its-kind Socially-aware Federated Client Selection (SocFedCS) approach to minimize costs and train high-quality FL models.
SocFedCS enriches the candidate FL client pool by enabling data owners to propagate FL task information through their local networks of trust.
- Score: 13.809694368802827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL), which addresses data privacy issues by training
models on resource-constrained mobile devices in a distributed manner, has
attracted significant research attention. However, the problem of optimizing FL
client selection in mobile federated learning networks (MFLNs), where devices
move in and out of each other's coverage and no FL server knows all the data
owners, remains open. To bridge this gap, we propose a first-of-its-kind
Socially-aware Federated Client Selection (SocFedCS) approach to minimize costs and train
high-quality FL models. SocFedCS enriches the candidate FL client pool by
enabling data owners to propagate FL task information through their local
networks of trust, even as devices are moving into and out of each other's
coverage. Based on Lyapunov optimization, we first transform this time-coupled
problem into a step-by-step optimization problem. Then, we design a method
based on alternating minimization and self-adaptive global best harmony search
to solve this mixed-integer optimization problem. Extensive experiments
comparing SocFedCS against five state-of-the-art approaches based on four
real-world multimedia datasets demonstrate that it achieves 2.06% higher test
accuracy and 12.24% lower cost on average than the best-performing baseline.
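The Lyapunov step mentioned above follows the standard drift-plus-penalty pattern. In generic notation (these symbols are illustrative, not the paper's own), a time-averaged cost with long-term constraints tracked by virtual queues $Q_i(t)$ is reduced to a per-slot problem of the form

\[
\min_{\text{decision at } t} \; V\,C(t) + \sum_i Q_i(t)\,\big(a_i(t) - b_i(t)\big),
\]

where $C(t)$ is the instantaneous cost, $a_i(t)$ and $b_i(t)$ are the slot-$t$ arrivals and services of queue $i$, and $V > 0$ trades off cost minimization against constraint satisfaction.

For the search heuristic named in the abstract, the sketch below implements a simplified global-best harmony search in Python. It is a minimal illustration with fixed HMCR/PAR values rather than the self-adaptive schedule, and the function and parameter names are hypothetical; SocFedCS alternates such a search with a continuous minimization step, which is not reproduced here.

```python
import random

def ghs_minimize(cost, lower, upper, hms=10, hmcr=0.95, par=0.3,
                 iters=2000, seed=0):
    """Simplified global-best harmony search (hedged sketch).

    cost:  objective to minimize over the box [lower, upper]^dim.
    hms:   harmony memory size; hmcr/par are kept fixed here, whereas
           the self-adaptive variant re-estimates them periodically.
    """
    rng = random.Random(seed)
    dim = len(lower)
    # Initialize the harmony memory with random solutions.
    memory = [[rng.uniform(lower[d], upper[d]) for d in range(dim)]
              for _ in range(hms)]
    scores = [cost(h) for h in memory]
    for _ in range(iters):
        best = memory[min(range(hms), key=scores.__getitem__)]
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment toward
                    x = best[d]                # the global-best harmony
            else:                              # random exploration
                x = rng.uniform(lower[d], upper[d])
            new.append(min(max(x, lower[d]), upper[d]))
        s = cost(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                  # replace the worst harmony
            memory[worst], scores[worst] = new, s
    i = min(range(hms), key=scores.__getitem__)
    return memory[i], scores[i]

# Example: minimize a 4-dimensional quadratic.
sol, val = ghs_minimize(lambda x: sum(v * v for v in x), [-5.0] * 4, [5.0] * 4)
```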
Related papers
- Smart Sampling: Helping from Friendly Neighbors for Decentralized Federated Learning [10.917048408073846]
We introduce AFIND+, a simple yet efficient algorithm for sampling and aggregating neighbors in Decentralized FL (DFL).
AFIND+ identifies helpful neighbors, adaptively adjusts the number of selected neighbors, and strategically aggregates the sampled neighbors' models.
Numerical results on real-world datasets demonstrate that AFIND+ outperforms other sampling algorithms in DFL.
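The following Python sketch illustrates the general shape of score-based neighbor sampling and aggregation in DFL. The helpfulness scores, the top-k rule, and the equal blend with the local model are all assumptions for illustration; AFIND+'s actual scoring and adaptive neighbor-count rules are not reproduced here.

```python
import numpy as np

def aggregate_neighbors(own, neighbor_models, scores, top_k):
    """Hedged DFL sketch: keep the top_k highest-scoring neighbors and
    average their (flattened) model parameters, weighted by score."""
    order = np.argsort(scores)[::-1][:top_k]    # most helpful neighbors first
    w = np.array([scores[i] for i in order], dtype=float)
    w /= w.sum()                                # normalize scores to weights
    mixed = sum(wi * neighbor_models[i] for wi, i in zip(w, order))
    return 0.5 * own + 0.5 * mixed              # illustrative 50/50 blend
```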
arXiv Detail & Related papers (2024-07-05T12:10:54Z) - Learner Referral for Cost-Effective Federated Learning Over Hierarchical
IoT Networks [21.76836812021954]
This paper proposes learner-referral-aided federated client selection (LRef-FedCS), together with communications resource scheduling and local model accuracy optimization (LMAO) methods.
The proposed LRef-FedCS approach strikes a good balance between high global accuracy and low cost.
arXiv Detail & Related papers (2023-07-19T13:33:43Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
However, the impact of limited on-device storage on FL performance remains unexplored.
In this work, we take the first step toward online data selection for FL with limited on-device storage.
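As a point of reference for what online selection under a storage budget looks like, here is a minimal reservoir-sampling buffer in Python. This is a uniform baseline, not the valuation-driven policy the cited paper proposes; the class and method names are hypothetical.

```python
import random

class SampleBuffer:
    """Fixed-capacity on-device buffer via reservoir sampling: every sample
    seen so far remains stored with equal probability capacity/seen."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def offer(self, sample):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(sample)        # buffer not yet full: keep it
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:            # keep with prob capacity/seen
                self.items[j] = sample
```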
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
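FedReg's own mechanism (which generates pseudo-data to protect previously acquired knowledge) is not reproduced here; the sketch below shows the simpler, generic idea of curbing forgetting by penalizing local drift from the global model, in the spirit of a proximal term. All names are illustrative.

```python
import numpy as np

def local_step(w, grad_fn, w_global, lr=0.1, mu=0.01):
    """One regularized local update: the task gradient plus a proximal
    pull toward the global model w_global discourages the local model
    from drifting (and thus 'forgetting') during local training."""
    g = grad_fn(w) + mu * (w - w_global)
    return w - lr * g
```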
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
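Under the mixture assumption, each client weights the shared component models by how well they explain its local data, in an EM fashion. Below is a hedged sketch of such an E-step (a softmax over negative per-component losses); the cited paper's exact updates differ in detail.

```python
import numpy as np

def responsibilities(component_losses):
    """E-step sketch: turn each component model's loss on a client's data
    into a responsibility weight via a numerically stable softmax."""
    s = -np.asarray(component_losses, dtype=float)
    s -= s.max()                     # stabilize before exponentiating
    z = np.exp(s)
    return z / z.sum()               # weights sum to 1 across components
```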
arXiv Detail & Related papers (2021-08-23T15:47:53Z) - Communication-Efficient Hierarchical Federated Learning for IoT
Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z) - FedMix: Approximation of Mixup under Mean Augmented Federated Learning [60.503258658382]
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup.
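For context, here is vanilla Mixup in Python: training on convex combinations of example pairs and their labels. FedMix approximates this across clients using only averaged batches (so raw data never leaves a device); that approximation is not shown here.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Vanilla Mixup: blend a batch with a random permutation of itself.
    x: (n, ...) float array; y: (n, num_classes) one-hot labels."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)             # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix
```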
arXiv Detail & Related papers (2021-07-01T06:14:51Z) - Low-Latency Federated Learning over Wireless Channels with Differential
Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
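A per-client DP requirement is typically met with the standard Gaussian mechanism: clip each update to bound its sensitivity, then add calibrated noise. The sketch below shows only this generic step; the cited paper's wireless-delay-aware optimization of the surrounding parameters is not modeled, and the names are illustrative.

```python
import numpy as np

def dp_sanitize(update, clip=1.0, sigma=1.0, rng=None):
    """Gaussian mechanism on a flattened client update: clip its L2 norm
    to `clip`, then add N(0, (sigma * clip)^2) noise per coordinate."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)
```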
arXiv Detail & Related papers (2021-06-20T13:51:18Z) - FedProf: Optimizing Federated Learning with Dynamic Data Profiling [9.74942069718191]
Federated Learning (FL) has shown great potential as a privacy-preserving solution to learning from decentralized data.
In practice, a large proportion of clients may possess only low-quality data that are biased, noisy, or even irrelevant.
We propose a novel approach to optimizing FL under such circumstances without breaching data privacy.
arXiv Detail & Related papers (2021-02-02T20:10:14Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
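The aggregation step of the round described above reduces to averaging the models recorded in the winning block, as in the hedged sketch below; the block-generation competition and the paper's lazy-client analysis are omitted.

```python
import numpy as np

def aggregate_block(block_models):
    """BLADE-FL-style aggregation sketch: average the (flattened) local
    models recorded in the winning block before the next local round."""
    return np.stack(block_models).mean(axis=0)
```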
arXiv Detail & Related papers (2021-01-18T07:19:08Z)