To Talk or to Work: Flexible Communication Compression for Energy
Efficient Federated Learning over Heterogeneous Mobile Edge Devices
- URL: http://arxiv.org/abs/2012.11804v1
- Date: Tue, 22 Dec 2020 02:54:18 GMT
- Authors: Liang Li, Dian Shi, Ronghui Hou, Hui Li, Miao Pan, Zhu Han
- Abstract summary: Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodical global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
- Score: 78.38046945665538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in machine learning, wireless communication, and mobile
hardware technologies promisingly enable federated learning (FL) over massive
mobile edge devices, which opens new horizons for numerous intelligent mobile
applications. Despite the potential benefits, FL imposes huge communication and
computation burdens on participating devices due to periodic global
synchronization and continuous local training, raising great challenges for
battery-constrained mobile devices. In this work, we aim to improve the
energy efficiency of FL over mobile edge networks to accommodate heterogeneous
participating devices without sacrificing the learning performance. To this
end, we develop a convergence-guaranteed FL algorithm enabling flexible
communication compression. Guided by the derived convergence bound, we design a
compression control scheme to balance the energy consumption of local computing
(i.e., "working") and wireless communication (i.e., "talking") from the
long-term learning perspective. In particular, the compression parameters are
carefully chosen for each FL participant according to its computing and
communication environment. Extensive simulations are conducted using various
datasets to validate our theoretical analysis, and the results also demonstrate
the efficacy of the proposed scheme in energy saving.
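The "talking vs. working" energy tradeoff described in the abstract can be illustrated with a minimal sketch: given a device's per-round computing energy and full-update transmission energy, pick the compression ratio that minimizes modeled total energy. The cost model, function names, and the linear convergence-penalty assumption below are illustrative inventions, not the paper's actual formulation.

```python
# Illustrative sketch (not the paper's model): choose a per-device
# compression ratio balancing "working" (local computing) energy
# against "talking" (wireless communication) energy per FL round.

def round_energy(ratio, e_compute, e_comm_full, extra_rounds_factor=2.0):
    """Modeled energy for one effective round at a given compression ratio.

    ratio: fraction of the model update actually transmitted (0 < ratio <= 1).
    e_compute: energy (J) for one round of local training.
    e_comm_full: energy (J) to transmit the uncompressed update.
    extra_rounds_factor: assumed penalty -- stronger compression slows
        convergence, so more rounds (hence more computing) are needed.
    """
    comm = e_comm_full * ratio
    # Crude convergence penalty: more compression => more local work.
    compute = e_compute * (1.0 + extra_rounds_factor * (1.0 - ratio))
    return compute + comm

def best_ratio(e_compute, e_comm_full, candidates=None):
    """Pick the candidate compression ratio with the lowest modeled energy."""
    if candidates is None:
        candidates = [r / 10 for r in range(1, 11)]  # 0.1 .. 1.0
    return min(candidates, key=lambda r: round_energy(r, e_compute, e_comm_full))

# A device with cheap computing but an expensive radio should compress more
# (smaller ratio) than a device with the opposite profile.
print(best_ratio(e_compute=0.5, e_comm_full=5.0))  # radio-limited -> 0.1
print(best_ratio(e_compute=5.0, e_comm_full=0.5))  # compute-limited -> 1.0
```

Under this toy model, heterogeneity shows up exactly as in the abstract: each participant ends up with its own compression parameter depending on its computing and communication environment.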
Related papers
- Federated Learning With Energy Harvesting Devices: An MDP Framework [5.852486435612777]
Federated learning (FL) requires edge devices to perform local training and exchange information with a parameter server.
A critical challenge in practical FL systems is the rapid energy depletion of battery-limited edge devices.
We apply energy harvesting techniques in FL systems to extract ambient energy for continuously powering edge devices.
arXiv Detail & Related papers (2024-05-17T03:41:40Z)
- WHALE-FL: Wireless and Heterogeneity Aware Latency Efficient Federated Learning over Mobile Devices via Adaptive Subnetwork Scheduling [17.029433544096257]
We develop a wireless and heterogeneity aware latency efficient FL (WHALE-FL) approach to accelerate FL training through adaptive subnetwork scheduling.
Our evaluation shows that, compared with peer designs, WHALE-FL effectively accelerates FL training without sacrificing learning accuracy.
arXiv Detail & Related papers (2024-05-01T22:01:40Z)
- Federated Learning for 6G: Paradigms, Taxonomy, Recent Advances and Insights [52.024964564408]
This paper examines the added-value of implementing Federated Learning throughout all levels of the protocol stack.
It presents important FL applications, addresses hot topics, and provides valuable insights and explicit guidance for future research and development.
Our concluding remarks aim to leverage the synergy between FL and future 6G, while highlighting FL's potential to revolutionize wireless industry.
arXiv Detail & Related papers (2023-12-07T20:39:57Z)
- Energy and Spectrum Efficient Federated Learning via High-Precision Over-the-Air Computation [26.499025986273832]
Federated learning (FL) enables mobile devices to collaboratively learn a shared prediction model while keeping data locally.
There are two major research challenges to practically deploy FL over mobile devices.
We propose a novel multi-bit over-the-air computation (M-AirComp) approach for spectrum-efficient aggregation of local model updates in FL.
arXiv Detail & Related papers (2022-08-15T14:47:21Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer [42.30741737568212]
We propose powering devices using wireless power transfer (WPT).
This work aims at the derivation of guidelines on deploying the resultant wirelessly powered FEEL (WP-FEEL) system.
The results provide useful guidelines on WPT provisioning to guarantee learning performance.
arXiv Detail & Related papers (2021-02-24T15:47:34Z)
- To Talk or to Work: Energy Efficient Federated Learning over Mobile Devices via the Weight Quantization and 5G Transmission Co-Design [49.95746344960136]
Federated learning (FL) is a new paradigm for large-scale learning tasks across mobile devices.
It is not clear how to establish an effective wireless network architecture to support FL over mobile devices.
We develop a wireless transmission and weight quantization co-design for energy efficient FL over heterogeneous 5G mobile devices.
arXiv Detail & Related papers (2020-12-21T01:13:44Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)
- Communication Efficient Federated Learning with Energy Awareness over Wireless Networks [51.645564534597625]
In federated learning (FL), the parameter server and the mobile devices share the training parameters over wireless links.
We adopt the idea of SignSGD in which only the signs of the gradients are exchanged.
Two optimization problems are formulated and solved to optimize the learning performance.
Considering that data may be distributed across mobile devices in a highly uneven fashion in FL, a sign-based algorithm is proposed.
arXiv Detail & Related papers (2020-04-15T21:25:13Z)
- Lightwave Power Transfer for Federated Learning-based Wireless Networks [34.434349833489954]
Federated Learning (FL) has been recently presented as a new technique for training shared machine learning models in a distributed manner.
Implementing FL in wireless networks, however, may significantly reduce the lifetime of energy-constrained mobile devices.
We propose a novel approach at the physical layer based on the application of lightwave power transfer in the FL-based wireless network.
arXiv Detail & Related papers (2020-04-11T16:27:17Z)
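One related paper above adopts the SignSGD idea, in which only the signs of the gradients are exchanged. A minimal illustrative sketch of that one-bit compression scheme follows; the function names and majority-vote aggregation are a common textbook variant, not any listed paper's exact algorithm.

```python
# Illustrative sketch of sign-based gradient compression (SignSGD-style):
# each device sends only the sign of each gradient component (1 bit instead
# of 32), and the server aggregates signs by per-coordinate majority vote.

def compress(gradient):
    """One-bit compression: keep only the sign of each component."""
    return [1 if g >= 0 else -1 for g in gradient]

def majority_vote(sign_vectors):
    """Server-side aggregation: per-coordinate majority of device signs."""
    aggregated = []
    for coords in zip(*sign_vectors):
        total = sum(coords)
        aggregated.append(1 if total >= 0 else -1)
    return aggregated

# Three devices report the signs of their local gradients; the server
# applies the majority-voted sign as the update direction.
grads = [[0.2, -1.3, 0.7], [-0.1, -0.4, 0.9], [0.5, 0.8, -0.2]]
signs = [compress(g) for g in grads]
step = majority_vote(signs)
print(step)  # -> [1, -1, 1]
```

The communication saving is what makes this relevant to the energy-aware FL papers above: each round, a device transmits one bit per parameter rather than a full-precision value.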
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.