Learning to Transmit with Provable Guarantees in Wireless Federated
Learning
- URL: http://arxiv.org/abs/2304.09329v2
- Date: Tue, 12 Dec 2023 01:22:00 GMT
- Authors: Boning Li, Jake Perazzone, Ananthram Swami, Santiago Segarra
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel data-driven approach to allocate transmit power for
federated learning (FL) over interference-limited wireless networks. The
proposed method is useful in challenging scenarios where the wireless channel
is changing during the FL training process and when the training data are not
independent and identically distributed (non-i.i.d.) on the local devices.
Intuitively, the power policy is designed to optimize the information received
at the server end during the FL process under communication constraints.
Ultimately, our goal is to improve the accuracy and efficiency of the global FL
model being trained. The proposed power allocation policy is parameterized
using graph convolutional networks (GCNs), and the associated constrained
optimization problem is solved through a primal-dual (PD) algorithm.
Theoretically, we show that the formulated problem has a zero duality gap and,
once the power policy is parameterized, optimality depends on how expressive
this parameterization is. Numerically, we demonstrate that the proposed method
outperforms existing baselines under different wireless channel settings and
varying degrees of data heterogeneity.
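The abstract's primal-dual training of a parameterized power policy can be illustrated with a toy sketch. Below, a per-device softplus policy stands in for the paper's GCN, the utility is a sum-log-rate surrogate, and the channel gains, power budget, and step sizes are all hypothetical choices, not values from the paper.

```python
import numpy as np

# Toy primal-dual loop: learn a power policy p = pi_theta(h) that maximizes
# a sum-log utility subject to a total transmit-power budget. A softplus
# policy per device stands in for the GCN parameterization (assumption);
# all constants are illustrative.

rng = np.random.default_rng(0)
n = 5
h = rng.uniform(0.5, 1.5, n)   # toy channel gains
P_max = 2.0                    # total transmit-power budget

theta = np.zeros(n)            # primal (policy) parameters
lam = 0.0                      # dual variable for the budget constraint
lr_p, lr_d = 0.05, 0.05

for _ in range(3000):
    z = theta * h
    sig = 1.0 / (1.0 + np.exp(-z))     # d softplus(z) / dz
    p = np.log1p(np.exp(z))            # softplus keeps powers nonnegative
    grad_u = h / (1.0 + h * p)         # gradient of sum(log(1 + h*p)) in p
    theta += lr_p * (grad_u - lam) * sig * h        # primal ascent on Lagrangian
    lam = max(0.0, lam + lr_d * (p.sum() - P_max))  # dual ascent on violation
```

After training, the learned powers approximately meet the budget and the dual variable settles at a positive value, reflecting the zero-duality-gap structure the paper analyzes for the full problem.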
Related papers
- Analysis and Optimization of Wireless Federated Learning with Data
Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate a loss-function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Gradient Sparsification for Efficient Wireless Federated Learning with
Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, training latency increases due to limited transmission bandwidth, and model performance degrades under differential privacy (DP) protection.
We propose a sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
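A minimal sketch of one client's step under such a scheme might combine top-k gradient sparsification with the Gaussian mechanism. The values of k, the clipping norm, and the noise scale below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Hypothetical client-side update: clip the gradient to bound sensitivity,
# keep only the top-k entries by magnitude, then add Gaussian noise to the
# surviving entries for differential privacy. k, clip, and sigma are
# illustrative, not the paper's values.

rng = np.random.default_rng(1)

def sparsify_and_privatize(grad, k, clip=1.0, sigma=0.1):
    norm = np.linalg.norm(grad)
    g = grad * min(1.0, clip / norm)        # clip to norm <= clip
    idx = np.argsort(np.abs(g))[-k:]        # indices of the k largest entries
    sparse = np.zeros_like(g)
    sparse[idx] = g[idx]                    # top-k sparsification
    sparse[idx] += rng.normal(0.0, sigma * clip, size=k)  # Gaussian mechanism
    return sparse

g = rng.normal(size=100)
out = sparsify_and_privatize(g, k=10)       # at most 10 nonzero entries sent
```

Only the k noisy entries (plus their indices) need to be transmitted, which is where the bandwidth saving comes from.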
arXiv Detail & Related papers (2023-04-09T05:21:15Z)
- Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
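The quantized transmission step in such a bitwidth scheme can be sketched with a uniform b-bit quantizer; the grid construction below is a generic illustration, not the paper's exact quantizer.

```python
import numpy as np

# Illustrative b-bit uniform quantization of model parameters: a device
# transmits integer grid indices plus (offset, scale); the server
# dequantizes before aggregation. A generic sketch, not the paper's scheme.

def quantize(w, bits):
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.int64)  # indices to transmit
    return q, lo, scale

def dequantize(q, lo, scale):
    return q * scale + lo

w = np.linspace(-1.0, 1.0, 11)       # toy local model parameters
q, lo, scale = quantize(w, bits=4)   # 4-bit indices in [0, 15]
w_hat = dequantize(q, lo, scale)     # server-side reconstruction
```

The per-entry reconstruction error is bounded by half the grid step, which is the quantity a bitwidth-selection policy trades off against communication cost.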
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model
Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) could result in task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
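The over-the-air aggregation idea behind this line of work can be sketched in a few lines: clients pre-scale their updates to invert the channel, the transmissions superpose in the air, and the server reads a noisy sum in one shot. Channel-inversion precoding and the noise level below are illustrative assumptions.

```python
import numpy as np

# Toy analog (over-the-air) aggregation: superposition of pre-scaled client
# transmissions yields the sum of updates at the receiver in a single use of
# the channel. Real channel gains and channel inversion are assumptions.

rng = np.random.default_rng(2)
d, n_clients = 8, 4
updates = rng.normal(size=(n_clients, d))     # local model updates
h = rng.uniform(0.5, 2.0, n_clients)          # toy real channel gains

tx = updates / h[:, None]                     # channel-inversion precoding
rx = (h[:, None] * tx).sum(axis=0)            # superposition over the air
rx += rng.normal(0.0, 0.01, d)                # receiver noise
avg = rx / n_clients                          # server's estimate of the mean update
```

One channel use replaces n_clients separate uplink transmissions, which is the latency saving over-the-air schemes target.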
arXiv Detail & Related papers (2022-03-29T12:39:23Z)
- Power Allocation for Wireless Federated Learning using Graph Neural
Networks [28.735019205296776]
We propose a data-driven approach for power allocation in the context of federated learning (FL) over interference-limited wireless networks.
The power policy is designed to maximize the transmitted information during the FL process under communication constraints.
arXiv Detail & Related papers (2021-11-15T00:54:52Z)
- Offline Contextual Bandits for Wireless Network Optimization [107.24086150482843]
In this paper, we investigate how to learn policies that can automatically adjust the configuration parameters of every cell in the network in response to the changes in the user demand.
Our solution combines existent methods for offline learning and adapts them in a principled way to overcome crucial challenges arising in this context.
arXiv Detail & Related papers (2021-11-11T11:31:20Z)
- Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Joint Optimization of Communications and Federated Learning Over the Air [32.14738452396869]
Federated learning (FL) is an attractive paradigm for making use of rich distributed data while protecting data privacy.
In this paper, we study joint optimization of communications and FL based on analog aggregation transmission in realistic wireless networks.
arXiv Detail & Related papers (2021-04-08T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.