Federated Learning over a Wireless Network: Distributed User Selection
through Random Access
- URL: http://arxiv.org/abs/2307.03758v1
- Date: Fri, 7 Jul 2023 02:14:46 GMT
- Title: Federated Learning over a Wireless Network: Distributed User Selection
through Random Access
- Authors: Chen Sun, Shiyao Ma, Ce Zheng, Songtao Wu, Tao Cui, Lingjuan Lyu
- Abstract summary: This study proposes a network-intrinsic approach to distributed user selection.
We manipulate the contention window (CW) size to prioritize certain users for obtaining radio resources in each round of training.
Prioritization is based on the distance between the newly trained local model and the global model of the previous round.
- Score: 23.544290667425532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User selection has become crucial for decreasing the communication costs of
federated learning (FL) over wireless networks. However, centralized user
selection causes additional system complexity. This study proposes a
network-intrinsic approach to distributed user selection that leverages the
radio resource competition mechanism in random access. Taking the
carrier-sense multiple access (CSMA) mechanism as an example of random
access, we manipulate the contention window (CW) size to prioritize certain
users for obtaining radio
resources in each round of training. Training data bias is used as a target
scenario for FL with user selection. Prioritization is based on the distance
between the newly trained local model and the global model of the previous
round. To avoid excessive contribution by certain users, a counting mechanism
is used to ensure fairness. Simulations with various datasets demonstrate that
this method can rapidly achieve convergence similar to that of the centralized
user selection approach.
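As a concrete reading of the mechanism, the sketch below maps a larger local-to-global model distance to a smaller contention window (so the most divergent users tend to win the channel first) and caps repeat winners with a counter. This is a hedged illustration, not the paper's implementation: the linear distance-to-CW mapping, the 802.11-style CW bounds, and the MAX_WINS threshold are all assumptions.

```python
import numpy as np

CW_MIN, CW_MAX = 16, 1024   # 802.11-style bounds (assumed, not from the paper)
MAX_WINS = 3                # hypothetical fairness cap on wins per user

def contention_window(distance, max_distance, win_count):
    """Map a larger local-to-global model distance to a smaller CW, so
    users whose updates diverge most from the global model tend to win
    the channel; users at the fairness cap are de-prioritized."""
    if win_count >= MAX_WINS:
        return CW_MAX                           # counting mechanism: back off
    ratio = min(distance / max_distance, 1.0)
    return int(CW_MAX - ratio * (CW_MAX - CW_MIN))

def backoff_slots(cw, rng=np.random.default_rng()):
    """As in CSMA, each user draws a random backoff from its own CW;
    the smallest draw uploads its model update first."""
    return int(rng.integers(0, cw))

# Example: the user with the largest model distance gets a CW near CW_MIN.
dists = np.array([0.2, 1.5, 0.7])
print([contention_window(d, dists.max(), win_count=0) for d in dists])
```

Because the prioritization lives entirely in each user's own backoff draw, no central scheduler is involved, which is what makes the selection "network intrinsic."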
Related papers
- SFedCA: Credit Assignment-Based Active Client Selection Strategy for Spiking Federated Learning [15.256986486372407]
Spiking federated learning allows resource-constrained devices to train collaboratively at low power consumption without exchanging local data.
Existing spiking federated learning methods employ a random selection approach for client aggregation, assuming unbiased client participation.
We propose a credit assignment-based active client selection strategy, the SFedCA, to judiciously aggregate clients that contribute to the global sample distribution balance.
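The summary leaves the credit rule unspecified; as a rough illustration of active (non-uniform) client selection, this sketch scores each client by how far its label histogram sits from an estimated global distribution and picks the top-k. The credit definition and the histogram proxy are assumptions, and the spiking-specific machinery is omitted.

```python
import numpy as np

def select_clients(credits, k):
    """Active selection: take the k clients with the highest credit
    instead of sampling uniformly (the baseline the summary criticizes)."""
    return np.argsort(credits)[-k:]

# Hypothetical credit: divergence of a client's label histogram from the
# (estimated) global label distribution -- clients that would rebalance
# the aggregate sample distribution score higher.
rng = np.random.default_rng(0)
global_dist = np.full(5, 0.2)                      # 5 classes, balanced
client_dists = rng.dirichlet(np.ones(5), size=10)  # 10 skewed clients
credits = np.abs(client_dists - global_dist).sum(axis=1)
print(select_clients(credits, k=3))
```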
arXiv Detail & Related papers (2024-06-18T01:56:22Z)
- Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning [51.560590617691005]
We investigate whether it is possible to squeeze more "juice" out of each cohort than is possible in a single communication round.
Our approach leads to up to a 74% reduction in the total communication cost needed to train an FL model in the cross-device setting.
arXiv Detail & Related papers (2024-06-03T08:48:49Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data scattered over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicle (FLCAV) has been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- Jointly Learning from Decentralized (Federated) and Centralized Data to Mitigate Distribution Shift [2.9965560298318468]
Federated Learning (FL) is an increasingly used paradigm where learning takes place collectively on edge devices.
Yet a distribution shift may still exist; the on-device training examples may lack some data inputs expected to be encountered at inference time.
This paper proposes a way to mitigate this shift: selective usage of datacenter data, mixed in with FL.
arXiv Detail & Related papers (2021-11-23T20:51:24Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
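The claim admits a one-line check: with an isotropic Gaussian prior on the client weights, the hard-EM M-step for the server parameter is the plain client average, i.e. the FedAvg aggregation rule (standard FedAvg's data-size weighting would follow from weighting the likelihood terms). A minimal derivation, assuming equal client weights:

```latex
% Hierarchical model: client weights w_k drawn from a server prior
%   p(w_k | \theta) = N(w_k; \theta, \sigma^2 I).
% Hard-EM M-step for the server parameter \theta:
\theta^{(t+1)}
  = \arg\max_{\theta} \sum_{k=1}^{K} \log \mathcal{N}\!\left(w_k;\, \theta,\, \sigma^2 I\right)
  = \arg\min_{\theta} \sum_{k=1}^{K} \lVert w_k - \theta \rVert^2
  = \frac{1}{K} \sum_{k=1}^{K} w_k .
```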
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Throughput and Latency in the Distributed Q-Learning Random Access mMTC Networks [0.0]
In mMTC mode, with thousands of devices trying to access network resources sporadically, the problem of random access (RA) is crucial.
In this work, we propose a distributed packet-based learning method that varies the reward from the central node to favor devices with a larger number of remaining packets to transmit.
Our numerical results indicated that the proposed distributed packet-based Q-learning method attains a much better throughput-latency trade-off than the alternative independent and collaborative techniques.
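A compact way to read the mechanism: treat each device's slot choice as a multi-armed bandit, with the central node's feedback scaled by that device's remaining packet count. A minimal sketch under assumed values for the learning rate, exploration rate, reward scaling, and collision penalty:

```python
import numpy as np

rng = np.random.default_rng(0)
N_DEV, N_SLOTS = 20, 8
ALPHA, EPS = 0.1, 0.1                     # learning rate / exploration (assumed)
Q = np.zeros((N_DEV, N_SLOTS))            # per-device preference for each RA slot
remaining = rng.integers(1, 50, N_DEV)    # packets left to transmit per device

def ra_round():
    """One random-access round: epsilon-greedy slot choice per device."""
    explore = rng.random(N_DEV) < EPS
    choice = np.where(explore, rng.integers(0, N_SLOTS, N_DEV), Q.argmax(axis=1))
    for d in range(N_DEV):
        s = choice[d]
        collided = (choice == s).sum() > 1
        # Central node's reward favors devices with more remaining packets
        # (the packet-based variation the summary describes); the exact
        # scaling and the -1 collision penalty are assumptions.
        r = -1.0 if collided else remaining[d] / max(remaining.max(), 1)
        Q[d, s] += ALPHA * (r - Q[d, s])  # stateless (bandit-style) update
        if not collided and remaining[d] > 0:
            remaining[d] -= 1

for _ in range(100):
    ra_round()
```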
arXiv Detail & Related papers (2021-10-30T17:57:06Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
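The "multiple global models plus user-center matching" step has a natural k-means-flavored reading; a hedged sketch follows (the paper's actual objective and optimizer may differ):

```python
import numpy as np

def multi_center_round(user_models, centers):
    """Match each user's model to its nearest center, then refit each
    center as the mean of its assigned users' models (a per-center
    FedAvg step). user_models: (n_users, dim); centers: (n_centers, dim)."""
    d = np.linalg.norm(user_models[:, None, :] - centers[None, :, :], axis=2)
    assign = d.argmin(axis=1)                 # user -> center matching
    for c in range(len(centers)):
        members = user_models[assign == c]
        if len(members):
            centers[c] = members.mean(axis=0)
    return centers, assign

# Example: 6 non-IID users drawn around 2 latent model clusters.
rng = np.random.default_rng(0)
true = np.array([[0.0, 0.0], [5.0, 5.0]])
users = np.concatenate([true[0] + rng.normal(0, 0.5, (3, 2)),
                        true[1] + rng.normal(0, 0.5, (3, 2))])
centers, assign = multi_center_round(users, rng.normal(size=(2, 2)))
print(assign)
```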
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)