To Talk or to Work: Energy Efficient Federated Learning over Mobile
Devices via the Weight Quantization and 5G Transmission Co-Design
- URL: http://arxiv.org/abs/2012.11070v1
- Date: Mon, 21 Dec 2020 01:13:44 GMT
- Authors: Rui Chen, Liang Li, Kaiping Xue, Chi Zhang, Lingjia Liu, Miao Pan
- Abstract summary: Federated learning (FL) is a new paradigm for large-scale learning tasks across mobile devices.
It is not clear how to establish an effective wireless network architecture to support FL over mobile devices.
We develop a wireless transmission and weight quantization co-design for energy efficient FL over heterogeneous 5G mobile devices.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a new paradigm for large-scale learning tasks
across mobile devices. However, practical FL deployment over resource-constrained
mobile devices faces multiple challenges. For example, it is not clear how to
establish an effective wireless network architecture to support FL over mobile
devices. Moreover, as modern machine learning models grow more and more complex,
the local on-device training and intermediate model updates in FL are becoming
too power hungry and radio-resource intensive for mobile devices to afford. To
address these challenges, in this paper we bridge FL with another rapidly
emerging technology, 5G, and develop a wireless transmission and weight
quantization co-design for energy-efficient FL over heterogeneous 5G mobile
devices. Briefly, the high data rates featured by 5G help relieve the severe
communication burden, and multi-access edge computing (MEC) in 5G provides a
natural network architecture to support FL. Under the MEC architecture, we
develop flexible weight quantization schemes to facilitate on-device local
training over heterogeneous 5G mobile devices. Observing that the energy
consumption of local computing is comparable to that of model updates via 5G
transmissions, we formulate the energy-efficient FL problem as a mixed-integer
programming problem that jointly determines the quantization strategies and
allocates the wireless bandwidth for heterogeneous 5G mobile devices. The goal
is to minimize the overall FL energy consumption (computing + 5G transmissions)
over 5G mobile devices while guaranteeing learning performance and training
latency. Generalized Benders' Decomposition is applied to develop feasible
solutions, and extensive simulations are conducted to verify the effectiveness
of the proposed scheme.
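As a rough illustration of the computing-vs-transmission trade-off the abstract describes, the toy sketch below brute-forces per-device quantization bit-widths under a shared latency budget and picks the lowest-energy combination. All constants, the energy models (CMOS dynamic power for local training, Shannon-rate uplink for 5G transmission), the equal bandwidth split, and the exhaustive search are illustrative assumptions standing in for the paper's actual mixed-integer program and Generalized Benders' Decomposition, not its formulation.

```python
import math
from itertools import product

# Illustrative constants (assumptions, not taken from the paper).
MODEL_PARAMS = 1_000_000       # weights in the shared model
CYCLES_PER_SAMPLE = 1e6        # CPU cycles to train on one local sample
SAMPLES = 500                  # local dataset size per device
TOTAL_BANDWIDTH_HZ = 20e6      # shared 5G uplink bandwidth
LATENCY_BUDGET_S = 5.0         # per-round training + upload deadline
BIT_CHOICES = (8, 16, 32)      # candidate quantization bit-widths

def computing_energy_and_time(bits, freq_hz, kappa=1e-27):
    # Assume low-precision arithmetic scales compute cost ~linearly in bits.
    cycles = CYCLES_PER_SAMPLE * SAMPLES * (bits / 32)
    energy = kappa * cycles * freq_hz ** 2   # standard CMOS dynamic-power model
    return energy, cycles / freq_hz

def transmission_energy_and_time(bits, bw_hz, tx_power_w=0.2, snr=100.0):
    # Shannon-capacity uplink rate for this device's bandwidth share.
    rate_bps = bw_hz * math.log2(1 + snr)
    t = MODEL_PARAMS * bits / rate_bps       # time to upload the quantized update
    return tx_power_w * t, t

def best_schedule(cpu_freqs_hz):
    """Exhaustively search per-device bit-widths under an equal bandwidth
    split; a toy stand-in for the paper's mixed-integer program."""
    bw_share = TOTAL_BANDWIDTH_HZ / len(cpu_freqs_hz)
    best = None
    for bits in product(BIT_CHOICES, repeat=len(cpu_freqs_hz)):
        total_energy, feasible = 0.0, True
        for b, f in zip(bits, cpu_freqs_hz):
            e_cmp, t_cmp = computing_energy_and_time(b, f)
            e_tx, t_tx = transmission_energy_and_time(b, bw_share)
            if t_cmp + t_tx > LATENCY_BUDGET_S:   # latency guarantee
                feasible = False
                break
            total_energy += e_cmp + e_tx
        if feasible and (best is None or total_energy < best[0]):
            best = (total_energy, bits)
    return best

# Two heterogeneous devices: a 2 GHz and a 1 GHz CPU.
energy_joules, bits_per_device = best_schedule([2e9, 1e9])
print(energy_joules, bits_per_device)
```

In this simplified model both energy terms shrink with fewer bits, so the search always picks the minimum bit-width; the paper's formulation additionally couples bit-width to learning performance, which is what makes the real trade-off non-trivial.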
Related papers
- WHALE-FL: Wireless and Heterogeneity Aware Latency Efficient Federated Learning over Mobile Devices via Adaptive Subnetwork Scheduling [17.029433544096257]
We develop a wireless and heterogeneity aware latency efficient FL (WHALE-FL) approach to accelerate FL training through adaptive subnetwork scheduling.
Our evaluation shows that, compared with peer designs, WHALE-FL effectively accelerates FL training without sacrificing learning accuracy.
arXiv Detail & Related papers (2024-05-01T22:01:40Z)
- Federated Learning for 6G: Paradigms, Taxonomy, Recent Advances and Insights [52.024964564408]
This paper examines the added-value of implementing Federated Learning throughout all levels of the protocol stack.
It presents important FL applications, addresses hot topics, and provides valuable insights and explicit guidance for future research and developments.
Our concluding remarks aim to leverage the synergy between FL and future 6G, while highlighting FL's potential to revolutionize wireless industry.
arXiv Detail & Related papers (2023-12-07T20:39:57Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Energy and Spectrum Efficient Federated Learning via High-Precision Over-the-Air Computation [26.499025986273832]
Federated learning (FL) enables mobile devices to collaboratively learn a shared prediction model while keeping data locally.
There are two major research challenges to practically deploy FL over mobile devices.
We propose a novel multi-bit over-the-air computation (M-AirComp) approach for spectrum-efficient aggregation of local model updates in FL.
arXiv Detail & Related papers (2022-08-15T14:47:21Z)
- A Practical Cross-Device Federated Learning Framework over 5G Networks [47.72735882790756]
The concept of federated learning (FL) was first proposed by Google in 2016.
We propose a novel cross-device federated learning framework using anonymous communication technology and ring signature.
In addition, our scheme implements a contribution-based incentive mechanism to encourage mobile users to participate in FL.
arXiv Detail & Related papers (2022-04-18T02:31:06Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Towards Energy Efficient Federated Learning over 5G+ Mobile Devices [26.970421001190896]
Federated learning (FL) over 5G+ mobile devices pushes AI functions to mobile devices and initiates a new era of on-device AI applications.
Huge energy consumption is one of the most significant obstacles restricting the development of FL over battery-constrained 5G+ mobile devices.
We make a trade-off between energy consumption for "working" (i.e., local computing) and that for "talking" (i.e., wireless communications) in order to boost the overall energy efficiency.
arXiv Detail & Related papers (2021-01-13T04:13:54Z)
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodical global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
- Lightwave Power Transfer for Federated Learning-based Wireless Networks [34.434349833489954]
Federated Learning (FL) has been recently presented as a new technique for training shared machine learning models in a distributed manner.
However, implementing FL in wireless networks may significantly reduce the lifetime of energy-constrained mobile devices.
We propose a novel approach at the physical layer based on the application of lightwave power transfer in the FL-based wireless network.
arXiv Detail & Related papers (2020-04-11T16:27:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.