Efficient Adaptive Federated Optimization of Federated Learning for IoT
- URL: http://arxiv.org/abs/2206.11448v1
- Date: Thu, 23 Jun 2022 01:49:12 GMT
- Title: Efficient Adaptive Federated Optimization of Federated Learning for IoT
- Authors: Zunming Chen, Hongyan Cui, Ensen Wu, Yu Xi
- Abstract summary: This paper proposes a novel efficient adaptive federated optimization (EAFO) algorithm to improve the efficiency of Federated Learning (FL).
FL is a distributed privacy-preserving learning framework that enables IoT devices to train a global model by sharing model parameters.
Experiment results show that the proposed EAFO achieves higher accuracy faster.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of the Internet of Things (IoT) and widespread use of
devices with sensing, computing, and communication capabilities have motivated
intelligent applications empowered by artificial intelligence. The classical
artificial intelligence algorithms require centralized data collection and
processing which are challenging in realistic intelligent IoT applications due
to growing data privacy concerns and distributed datasets. Federated Learning
(FL) has emerged as a distributed privacy-preserving learning framework that
enables IoT devices to train a global model by sharing model parameters.
However, inefficiency due to frequent parameter transmissions significantly
reduces FL performance. Existing acceleration algorithms fall into two main
types: local update, which considers the trade-off between communication and
computation, and parameter compression, which considers the trade-off between
communication and precision. Jointly considering these two trade-offs and
adaptively balancing their impacts on convergence has remained unresolved. To
solve this problem, this paper proposes a novel efficient adaptive federated
optimization (EAFO) algorithm to improve the efficiency of FL, which minimizes
the learning error by jointly considering two variables, local update and
parameter compression, and enables FL to adaptively adjust the two variables
and balance the trade-offs among computation, communication, and precision. The
experiment results illustrate that, compared with state-of-the-art algorithms,
the proposed EAFO achieves higher accuracy faster.
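The abstract does not give EAFO's concrete update rule. As a hypothetical sketch of the kind of joint adaptation it describes, the snippet below pairs a lossy top-k compressor with a per-round search over (local update steps, compression level) that minimizes an error proxy under a cost budget. All names, the cost model, and the error proxy are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def top_k_compress(vec, k):
    """Keep the k largest-magnitude entries and zero the rest (lossy compression)."""
    out = np.zeros_like(vec)
    idx = np.argsort(np.abs(vec))[-k:]
    out[idx] = vec[idx]
    return out

def pick_round_config(grad_var, budget, dim, candidates):
    """Choose (local_steps, k) minimizing a toy error proxy under a cost budget.

    Error proxy: the variance term shrinks with more local steps, while the
    compression error grows as fewer coordinates are kept. Cost: computation
    scales with local_steps, communication with k.
    """
    best, best_err = None, float("inf")
    for local_steps, k in candidates:
        cost = local_steps * 1.0 + k * 0.01           # toy computation + communication cost
        if cost > budget:
            continue
        err = grad_var / local_steps + (1 - k / dim)  # toy variance + compression error
        if err < best_err:
            best, best_err = (local_steps, k), err
    return best
```

In this toy model, a larger budget lets the server pick both more local steps and a denser update; under a tight budget it trades one against the other, which is the balancing act the abstract attributes to EAFO.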
Related papers
- Adaptive Hybrid Model Pruning in Federated Learning through Loss Exploration [17.589308358508863]
We introduce AutoFLIP, an innovative approach that utilizes a federated loss exploration phase to drive adaptive hybrid pruning.
We show that AutoFLIP not only efficiently accelerates global convergence, but also achieves superior accuracy and robustness compared to traditional methods.
arXiv Detail & Related papers (2024-05-16T17:27:41Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on Federated Learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
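The summary above describes the split but not its mechanics. A minimal sketch under assumed names follows: `split_model`, `prune_global`, the key-based partition, and the magnitude-pruning rule are illustrative stand-ins, not the paper's method.

```python
import numpy as np

def split_model(params, personal_keys):
    """Split parameters into a shared global part and a device-specific part."""
    glob = {k: v for k, v in params.items() if k not in personal_keys}
    pers = {k: v for k, v in params.items() if k in personal_keys}
    return glob, pers

def prune_global(glob, keep_ratio):
    """Magnitude-prune the shared part before broadcasting it to all devices."""
    pruned = {}
    for name, w in glob.items():
        thresh = np.quantile(np.abs(w).ravel(), 1 - keep_ratio)
        pruned[name] = np.where(np.abs(w) >= thresh, w, 0.0)
    return pruned
```

Only the pruned global part would be exchanged with the server; the personalized part stays on the device and is fine-tuned locally, which is the communication saving the summary points to.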
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) could result in task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
arXiv Detail & Related papers (2022-03-29T12:39:23Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- 1-Bit Compressive Sensing for Efficient Federated Learning Over the Air [32.14738452396869]
This paper develops and analyzes a communication-efficient scheme for federated learning (FL) over the air, which incorporates 1-bit compressive sensing (CS) into analog aggregation transmissions.
For scalable computing, we develop an efficient implementation that is suitable for large-scale networks.
Simulation results show that our proposed 1-bit CS based FL over the air achieves comparable performance to the ideal case.
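The paper's scheme works over analog (over-the-air) aggregation, which is not reproduced here. As a digital analogue of 1-bit update transmission, the sketch below shows sign quantization on the client and majority-vote aggregation on the server; the function names and the voting rule are illustrative assumptions, not the paper's reconstruction algorithm.

```python
import numpy as np

def one_bit_quantize(update):
    """Quantize a client's model update to its signs (1 bit per coordinate)."""
    return np.sign(update)

def majority_vote_aggregate(sign_updates):
    """Server-side aggregation: elementwise majority vote over client signs."""
    return np.sign(np.sum(sign_updates, axis=0))
```

Each client thus transmits one bit per coordinate instead of a full-precision value, which is the communication saving that 1-bit schemes target.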
arXiv Detail & Related papers (2021-03-30T03:50:31Z)
- Optimization-driven Machine Learning for Intelligent Reflecting Surfaces Assisted Wireless Networks [82.33619654835348]
Intelligent reflecting surface (IRS) has been employed to reshape wireless channels by controlling the phase shifts of individual scattering elements.
Due to the large number of scattering elements, passive beamforming is typically challenged by high computational complexity.
In this article, we focus on machine learning (ML) approaches to performance optimization in IRS-assisted wireless networks.
arXiv Detail & Related papers (2020-08-29T08:39:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.