A Contribution-based Device Selection Scheme in Federated Learning
- URL: http://arxiv.org/abs/2203.05369v1
- Date: Thu, 10 Mar 2022 13:40:48 GMT
- Title: A Contribution-based Device Selection Scheme in Federated Learning
- Authors: Shashi Raj Pandey, Lam D. Nguyen, and Petar Popovski
- Abstract summary: In a Federated Learning (FL) setup, a number of devices contribute to the training of a common model.
We present a method for selecting the devices that provide updates in order to achieve improved generalization, fast convergence, and better device-level performance.
- Score: 31.77382335761709
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a Federated Learning (FL) setup, a number of devices contribute to the
training of a common model. We present a method for selecting the devices that
provide updates in order to achieve improved generalization, fast convergence,
and better device-level performance. We formulate a min-max optimization
problem and decompose it into a primal-dual setup, where the duality gap is
used to quantify the device-level performance. Our strategy combines
exploration of data freshness through a random device selection with
exploitation through simplified estimates of device contributions. This
improves the performance of the trained model both in terms of generalization
and personalization. A modified Truncated Monte-Carlo (TMC) method is applied
during the exploitation phase to estimate the device's contribution and lower
the communication overhead. The experimental results show that, compared with
the baseline schemes, the proposed approach achieves competitive accuracy and
personalization performance at a lower communication overhead.
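To make the selection pipeline concrete, below is a minimal Python sketch of the exploration/exploitation loop with a truncated Monte-Carlo (TMC) contribution estimate. The `utility` function, the synthetic `device_quality` values, and the exploration rate `eps` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-device "quality" stands in for real data contributions.
num_devices = 10
device_quality = rng.uniform(0.0, 0.3, size=num_devices)

def utility(subset):
    # Placeholder utility: the validation score of a model aggregated
    # from `subset`. A real system would aggregate updates and evaluate.
    return np.tanh(sum(device_quality[i] for i in subset))

def tmc_contributions(num_iters=200, tol=1e-3):
    """Truncated Monte-Carlo estimate of each device's marginal contribution."""
    phi = np.zeros(num_devices)
    full = utility(range(num_devices))
    for t in range(1, num_iters + 1):
        perm = rng.permutation(num_devices)
        prev_u, subset = 0.0, []
        for d in perm:
            # Truncation: once the subset's utility is close to the full
            # utility, remaining marginals are negligible -- skip them.
            if full - prev_u < tol:
                marginal = 0.0
            else:
                subset.append(d)
                u = utility(subset)
                marginal = u - prev_u
                prev_u = u
            phi[d] += (marginal - phi[d]) / t  # running average
    return phi

def select_devices(phi, k=3, eps=0.2):
    """Exploration-exploitation selection: with probability eps pick devices
    uniformly at random (data freshness); otherwise pick the top-k
    estimated contributors."""
    if rng.random() < eps:
        return rng.choice(num_devices, size=k, replace=False)
    return np.argsort(phi)[-k:]

phi = tmc_contributions()
print("selected devices:", select_devices(phi))
```

The truncation step is what keeps the number of utility evaluations (and hence communication rounds spent on estimation) low, in line with the overhead reduction the abstract claims.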
Related papers
- Age-Based Device Selection and Transmit Power Optimization in Over-the-Air Federated Learning [44.04728314657621]
Over-the-air federated learning (FL) has attracted significant attention for its ability to enhance communication efficiency.
In particular, neglecting straggler devices in FL can lead to a decline in the fairness of model updates and amplify the global model's bias toward certain devices' data.
We propose a joint device selection and transmit power optimization framework that ensures the appropriate participation of straggler devices, maintains efficient training performance, and guarantees timely updates.
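A toy sketch of the age-based idea above, assuming a simple priority score in which staleness raises a device's chance of being scheduled; the score and the random channel proxy are illustrative guesses, not the paper's joint selection/power optimization.

```python
import numpy as np

rng = np.random.default_rng(1)
num_devices = 8
age = np.zeros(num_devices)  # rounds since each device last participated

def select(k=3, age_weight=0.5):
    # Toy priority: favor stale devices so stragglers are not starved,
    # tempered by a proxy for channel quality (random here).
    channel = rng.uniform(size=num_devices)
    priority = age_weight * age + (1 - age_weight) * channel
    chosen = np.argsort(priority)[-k:]
    age[:] += 1      # every device ages by one round...
    age[chosen] = 0  # ...except the ones that just participated
    return chosen

for rnd in range(5):
    print(f"round {rnd}: selected {select()}")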
arXiv Detail & Related papers (2025-01-03T14:27:13Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
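A minimal sketch of age-aware aggregation weighting, assuming a geometric discount on stale updates; the decay form is an illustrative choice, not the paper's design.

```python
import numpy as np

def age_aware_aggregate(updates, ages, decay=0.5):
    """Weight each device update by decay**age so that stale
    asynchronous updates contribute less to the global model."""
    weights = np.array([decay ** a for a in ages], dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Three updates of a 4-parameter model; the last one is 3 rounds stale.
updates = [np.ones(4), 2 * np.ones(4), 10 * np.ones(4)]
print(age_aware_aggregate(updates, ages=[0, 1, 3]))
```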
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- On Second-order Optimization Methods for Federated Learning [59.787198516188425]
We evaluate the performance of several second-order distributed methods with local steps in the federated learning setting.
We propose a novel variant that uses second-order local information for updates and a global line search to counteract the resulting local specificity.
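One common way a server-side "global line search" can work is backtracking (Armijo) search on the aggregated update direction; the sketch below is a generic implementation under that assumption, not the paper's specific variant.

```python
import numpy as np

def global_line_search(loss, w, direction, alpha0=1.0, beta=0.5,
                       c=1e-4, grad=None):
    """Backtracking (Armijo) line search on the aggregated update
    direction -- one way a server can damp overly aggressive local
    (second-order) steps."""
    alpha, f0 = alpha0, loss(w)
    slope = grad @ direction if grad is not None else -np.dot(direction, direction)
    while loss(w + alpha * direction) > f0 + c * alpha * slope:
        alpha *= beta
        if alpha < 1e-8:
            break
    return alpha

# Quadratic toy problem: loss(w) = ||w||^2, direction = -gradient.
loss = lambda w: float(w @ w)
w = np.array([3.0, -2.0])
d = -2 * w
print("accepted step size:", global_line_search(loss, w, d, grad=2 * w))
```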
arXiv Detail & Related papers (2021-09-06T12:04:08Z)
- Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning [58.2091760793799]
We propose a novel contrastive prototype learning with augmented embeddings (CPLAE) model.
With a class prototype as an anchor, CPL aims to pull the query samples of the same class closer and those of different classes further away.
Extensive experiments on several benchmarks demonstrate that our proposed CPLAE achieves new state-of-the-art.
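A hedged sketch of a prototype-anchored contrastive loss in the spirit described (pull same-class queries toward the class prototype, push others away); the softmax form and temperature `tau` are assumptions, not the paper's exact objective.

```python
import numpy as np

def prototype_contrastive_loss(prototype, queries, labels, cls, tau=0.1):
    """Softmax-style contrastive loss with a class prototype as anchor:
    same-class queries are pulled toward the prototype, other-class
    queries are pushed away."""
    sims = queries @ prototype / tau              # similarity scores
    pos = sims[labels == cls]                     # same-class queries
    log_z = np.log(np.exp(sims).sum())            # partition over all queries
    return float(-(pos - log_z).mean())

rng = np.random.default_rng(2)
proto = rng.normal(size=8); proto /= np.linalg.norm(proto)
queries = rng.normal(size=(6, 8))
queries /= np.linalg.norm(queries, axis=1, keepdims=True)
labels = np.array([0, 0, 1, 1, 2, 2])
print("loss:", prototype_contrastive_loss(proto, queries, labels, cls=0))
```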
arXiv Detail & Related papers (2021-01-23T13:22:44Z)
- Federated Learning via Intelligent Reflecting Surface [30.935389187215474]
Over-the-air computation (AirComp) based federated learning (FL) is capable of achieving fast model aggregation by exploiting the waveform superposition property of multiple access channels.
In this paper, we propose a two-step optimization framework to achieve fast yet reliable model aggregation for AirComp-based FL.
Simulation results demonstrate that our proposed framework and the deployment of an IRS can achieve a lower training loss and higher FL prediction accuracy than the baseline algorithms.
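The superposition idea can be illustrated with a toy simulation in which simultaneously transmitted updates add up on the channel; the noise model and normalization below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def aircomp_aggregate(updates, noise_std=0.01):
    """Toy over-the-air aggregation: simultaneously transmitted analog
    signals superpose on the multiple-access channel, so the receiver
    observes (approximately) the sum of all updates in one shot."""
    received = np.sum(updates, axis=0) + rng.normal(0, noise_std, updates[0].shape)
    return received / len(updates)   # estimate of the average update

updates = np.stack([rng.normal(size=5) for _ in range(4)])
print("true mean: ", updates.mean(axis=0))
print("AirComp est:", aircomp_aggregate(updates))
```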
arXiv Detail & Related papers (2020-11-10T11:29:57Z)
- Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data [18.678289386084113]
Federated learning allows devices to collaboratively learn a model without data sharing.
We propose Federated AGMs, which employ both the first-order and second-order momenta.
We compare calibration schemes for federated learning, including standard Adam calibrated by epsilon.
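A minimal sketch of the adaptive-server-optimizer family this entry refers to: a FedAdam-style step that treats the averaged client delta as a pseudo-gradient, with `eps` as the calibration term. Hyperparameters and the exact update are assumptions, not the paper's algorithm.

```python
import numpy as np

class ServerAdam:
    """FedAdam-style server step: apply Adam with first/second momenta
    to a pseudo-gradient; `eps` is the calibration term."""
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.99, eps=1e-3):
        self.m, self.v = np.zeros(dim), np.zeros(dim)
        self.lr, self.b1, self.b2, self.eps, self.t = lr, b1, b2, eps, 0

    def step(self, w, pseudo_grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * pseudo_grad
        self.v = self.b2 * self.v + (1 - self.b2) * pseudo_grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)   # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

opt = ServerAdam(dim=3)
w = np.zeros(3)
client_deltas = [np.array([0.1, -0.2, 0.05]), np.array([0.12, -0.18, 0.07])]
# Negative average client delta serves as the server's pseudo-gradient.
w = opt.step(w, -np.mean(client_deltas, axis=0))
print(w)
```

A smaller `eps` makes the step more aggressively adaptive per-coordinate, which is why its calibration matters under non-IID client data.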
arXiv Detail & Related papers (2020-09-14T16:37:44Z)
- Federated Transfer Learning with Dynamic Gradient Aggregation [27.42998421786922]
This paper introduces a Federated Learning (FL) simulation platform for Acoustic Model training.
The proposed FL platform can support different tasks based on the adopted modular design.
It is shown to outperform the golden standard of distributed training in both convergence speed and overall model performance.
arXiv Detail & Related papers (2020-08-06T04:29:01Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
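One plausible reading of "intelligent sampling" is to favor devices whose local gradients align with the global gradient; the sketch below implements that heuristic, which is an assumption rather than FOLB's exact rule.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_by_alignment(local_grads, global_grad, k=2):
    """Sample devices with probability proportional to the (positive)
    inner product between their local gradient and the current global
    gradient -- a heuristic stand-in for intelligent device sampling."""
    scores = np.maximum([g @ global_grad for g in local_grads], 1e-8)
    probs = scores / scores.sum()
    return rng.choice(len(local_grads), size=k, replace=False, p=probs)

local_grads = [rng.normal(size=4) for _ in range(6)]
global_grad = np.mean(local_grads, axis=0)
print("sampled devices:", sample_by_alignment(local_grads, global_grad))
```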
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.