Computation and Communication Efficient Lightweighting Vertical Federated Learning for Smart Building IoT
- URL: http://arxiv.org/abs/2404.00466v2
- Date: Mon, 14 Apr 2025 12:22:21 GMT
- Title: Computation and Communication Efficient Lightweighting Vertical Federated Learning for Smart Building IoT
- Authors: Heqiang Wang, Xiang Liu, Yucheng Liu, Jia Zhou, Weihong Yang, Xiaoxiong Zhong
- Abstract summary: IoT devices are evolving beyond basic data collection and control to actively participate in deep learning tasks. We propose a Lightweight Vertical Federated Learning (LVFL) framework that jointly optimizes computational and communication efficiency. Experimental results on an image classification task demonstrate that LVFL effectively mitigates resource demands while maintaining competitive learning performance.
- Score: 6.690593323665202
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing number and enhanced capabilities of IoT devices in smart buildings, these devices are evolving beyond basic data collection and control to actively participate in deep learning tasks. Federated Learning (FL), as a decentralized learning paradigm, is well-suited for such scenarios. However, the limited computational and communication resources of IoT devices present significant challenges. While existing research has extensively explored efficiency improvements in Horizontal FL, these techniques cannot be directly applied to Vertical FL due to fundamental differences in data partitioning and model structure. To address this gap, we propose a Lightweight Vertical Federated Learning (LVFL) framework that jointly optimizes computational and communication efficiency. Our approach introduces two distinct lightweighting strategies: one for reducing the complexity of the feature model to improve local computation, and another for compressing feature embeddings to reduce communication overhead. Furthermore, we derive a convergence bound for the proposed LVFL algorithm that explicitly incorporates both computation and communication lightweighting ratios. Experimental results on an image classification task demonstrate that LVFL effectively mitigates resource demands while maintaining competitive learning performance.
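To make the two lightweighting strategies concrete, the sketch below shows a hypothetical VFL party that prunes its feature model by weight magnitude (computation lightweighting) and uniformly quantizes its feature embedding before upload (communication lightweighting). This is an illustration only, not the authors' implementation; the pruning ratio `rho` and the bit-width stand in for the paper's lightweighting ratios.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, rho: float) -> np.ndarray:
    """Zero out the rho fraction of smallest-magnitude weights
    (computation lightweighting of the local feature model)."""
    k = int(rho * weights.size)
    if k == 0:
        return weights
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_embedding(z: np.ndarray, bits: int = 8):
    """Uniformly quantize a feature embedding to `bits` bits per entry
    (communication lightweighting of the uploaded embedding)."""
    lo, hi = z.min(), z.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((z - lo) / scale).astype(np.uint8)
    return q, lo, scale  # the server dequantizes as lo + q * scale

# Toy usage: one VFL party's forward pass with both lightweighting steps.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))                  # local feature-model weights
x = rng.normal(size=16)                       # this party's feature slice
W = prune_by_magnitude(W, rho=0.5)            # computation lightweighting
z = np.tanh(x @ W)                            # local feature embedding
q, lo, scale = quantize_embedding(z, bits=4)  # communication lightweighting
z_hat = lo + q * scale                        # server-side reconstruction
print("embedding error:", float(np.abs(z - z_hat).max()))
```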
Related papers
- Deploying Large AI Models on Resource-Limited Devices with Split Federated Learning [39.73152182572741]
This paper proposes a novel framework, named Quantized Split Federated Fine-Tuning Large AI Model (SFLAM).
By partitioning the training load between edge devices and servers, SFLAM can facilitate the operation of large models on devices.
SFLAM incorporates quantization management, power control, and bandwidth allocation strategies to enhance training efficiency.
arXiv Detail & Related papers (2025-04-12T07:55:11Z)
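A rough sketch of SFLAM's split idea: the device computes the first layers and uploads quantized cut-layer activations, and the server finishes the forward pass. The layer shapes and the simple uniform quantizer below are illustrative assumptions; the paper's quantization management, power control, and bandwidth allocation are not modeled.

```python
import numpy as np

def device_forward(x, W_dev):
    """Device-side portion of the split model: the first layers only."""
    return np.maximum(x @ W_dev, 0.0)  # ReLU activations at the cut layer

def quantize(a, bits=8):
    """Uniform quantization of cut-layer activations before upload."""
    hi = float(a.max()) or 1.0
    levels = 2 ** bits - 1
    return np.round(a / hi * levels).astype(np.uint8), hi / levels

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 32))        # a device-side mini-batch
W_dev = rng.normal(size=(32, 16))   # layers that stay on the device
W_srv = rng.normal(size=(16, 10))   # layers offloaded to the server

a = device_forward(x, W_dev)
q, scale = quantize(a)              # only q and scale cross the network
logits = (q * scale) @ W_srv        # server-side continuation
print(logits.shape)                 # (4, 10)
```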
- Heterogeneity-Aware Resource Allocation and Topology Design for Hierarchical Federated Edge Learning [9.900317349372383]
Federated Learning (FL) provides a privacy-preserving framework for training machine learning models on mobile edge devices.
Traditional FL algorithms, e.g., FedAvg, impose a heavy communication workload on these devices.
We propose a two-tier HFEL system, where edge devices are connected to edge servers and edge servers are interconnected through peer-to-peer (P2P) edge backhauls.
Our goal is to enhance the training efficiency of the HFEL system through strategic resource allocation and topology design.
arXiv Detail & Related papers (2024-09-29T01:48:04Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
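The sparse-structure idea behind SpaFL can be pictured with a simple magnitude mask: computation and communication then only touch the parameters the mask keeps. This is a generic sketch under that assumption; SpaFL's actual mechanism for learning the sparse structure may differ.

```python
import numpy as np

def sparse_mask(W: np.ndarray, keep: float) -> np.ndarray:
    """Binary mask keeping the `keep` fraction of largest-magnitude weights."""
    k = max(1, int(keep * W.size))
    thresh = np.partition(np.abs(W).ravel(), -k)[-k]
    return (np.abs(W) >= thresh).astype(W.dtype)

rng = np.random.default_rng(2)
W = rng.normal(size=(64, 32))
mask = sparse_mask(W, keep=0.1)
W_sparse = W * mask
# Only the ~10% surviving parameters need to be computed on and communicated.
print("density:", mask.mean())
```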
- Exploring the Practicality of Federated Learning: A Survey Towards the Communication Perspective [1.088537320059347]
Federated Learning (FL) is a promising paradigm that offers significant advancements in privacy-preserving, decentralized machine learning.
However, the practical deployment of FL systems faces a significant bottleneck: the communication overhead.
This survey investigates various strategies and advancements made in communication-efficient FL.
arXiv Detail & Related papers (2024-05-30T19:21:33Z)
- Towards Scalable Wireless Federated Learning: Challenges and Solutions [40.68297639420033]
Federated Learning (FL) emerges as an effective distributed machine learning framework.
We discuss the challenges and solutions of achieving scalable wireless FL from the perspectives of both network design and resource orchestration.
arXiv Detail & Related papers (2023-10-08T08:55:03Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
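FedLALR's building block, a client-local variant of AMSGrad, is easy to sketch: each client keeps its own moment estimates, so its effective learning rate lr / (sqrt(v_hat) + eps) auto-tunes to that client's own gradients. The hyperparameters below are generic defaults, not values from the paper, and the cross-round scheduling FedLALR adds is omitted.

```python
import numpy as np

class AMSGradClient:
    """One client's AMSGrad state: the effective learning rate
    lr / (sqrt(v_hat) + eps) adapts to this client's own gradients."""
    def __init__(self, dim, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)       # first-moment estimate
        self.v = np.zeros(dim)       # second-moment estimate
        self.v_hat = np.zeros(dim)   # running max of v (the AMSGrad fix)

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Each client holds its own optimizer state, so the learning rate is
# client-specific and auto-tuned by that client's (possibly non-IID) data.
clients = [AMSGradClient(dim=4) for _ in range(3)]
```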
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
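The "age-aware" weighting above can be pictured as discounting stale client updates at aggregation time. The exponential discount below is one plausible choice for illustration, not necessarily the weighting the paper derives.

```python
import numpy as np

def age_aware_aggregate(updates, ages, decay=0.5):
    """Weighted average of client updates, down-weighting stale ones.
    `ages[i]` counts how many rounds ago update i was computed."""
    weights = np.array([decay ** a for a in ages], dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
ages = [0, 2, 5]   # the third update is the most stale
print(age_aware_aggregate(updates, ages))
```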
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Efficient Adaptive Federated Optimization of Federated Learning for IoT [0.0]
This paper proposes a novel efficient adaptive federated optimization (EAFO) algorithm to improve the efficiency of Federated Learning (FL).
FL is a distributed privacy-preserving learning framework that enables IoT devices to train a global model by sharing model parameters.
Experimental results show that the proposed EAFO can achieve higher accuracy faster.
arXiv Detail & Related papers (2022-06-23T01:49:12Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization problem for compression-aided FL that couples the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling the participating IoT devices, we can avoid training divergence in compression-aided FL while maintaining communication efficiency.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Communication-Efficient Consensus Mechanism for Federated Reinforcement Learning [20.891460617583302]
We show that FL can improve the policy performance of independent reinforcement learning (IRL) in terms of training efficiency and stability.
To reach a good balance between improving the model's convergence performance and reducing the required communication and computation overheads, this paper proposes a system utility function.
arXiv Detail & Related papers (2022-01-30T04:04:24Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization [159.75564904944707]
We propose an asynchronous quasi-Newton (AsySQN) framework for vertical federated learning (VFL).
The proposed algorithms make descent steps scaled by approximate Hessian information, without calculating the inverse Hessian matrix explicitly.
We show that the adopted asynchronous computation can make better use of the computation resource.
arXiv Detail & Related papers (2021-09-26T07:56:10Z)
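The quasi-Newton scaling in AsySQN can be sketched with a diagonal secant estimate of the Hessian: steps are scaled coordinate-wise by approximate curvature, and no matrix is ever inverted. This toy version ignores the paper's asynchrony and its actual quasi-Newton construction.

```python
import numpy as np

def scaled_step(w, grad, w_prev, grad_prev, eps=1e-8):
    """One descent step scaled by a diagonal secant (curvature) estimate:
    H_ii ~ |g_i - g_i_prev| / |w_i - w_i_prev|, so no matrix inversion."""
    h = np.abs(grad - grad_prev) / (np.abs(w - w_prev) + eps)
    h = np.clip(h, 0.1, 10.0)  # guard against degenerate curvature estimates
    return w - grad / h

# Toy quadratic f(w) = 0.5 * sum(D * w**2): curvature-scaled steps converge
# quickly even though the coordinates are badly scaled (D = [1, 10]).
D = np.array([1.0, 10.0])
w, w_prev = np.array([5.0, 5.0]), np.array([6.0, 6.0])
for _ in range(10):
    w, w_prev = scaled_step(w, D * w, w_prev, D * w_prev), w
print(w)  # close to the minimizer [0, 0]
```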
- 1-Bit Compressive Sensing for Efficient Federated Learning Over the Air [32.14738452396869]
This paper develops and analyzes a communication-efficient scheme for federated learning (FL) over the air, which incorporates 1-bit compressive sensing (CS) into analog aggregation transmissions.
For scalable computing, we develop an efficient implementation that is suitable for large-scale networks.
Simulation results show that our proposed 1-bit CS based FL over the air achieves comparable performance to the ideal case.
arXiv Detail & Related papers (2021-03-30T03:50:31Z)
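The 1-bit idea can be illustrated in its simplest digital form: each device uploads only the signs of its gradient, and the server takes a coordinate-wise majority vote. This signSGD-style sketch leaves out the paper's analog over-the-air aggregation and compressive-sensing reconstruction.

```python
import numpy as np

def one_bit_compress(grad):
    """Each device transmits one bit per coordinate: the gradient's sign."""
    return np.sign(grad)

def majority_vote_aggregate(sign_grads):
    """Server combines the 1-bit updates by coordinate-wise majority vote."""
    return np.sign(np.sum(sign_grads, axis=0))

rng = np.random.default_rng(3)
true_dir = rng.normal(size=8)
# Ten devices observe noisy versions of the same gradient direction.
grads = [true_dir + rng.normal(scale=0.5, size=8) for _ in range(10)]
agg = majority_vote_aggregate([one_bit_compress(g) for g in grads])
# With enough devices, the votes typically recover the true signs.
print(np.mean(agg == np.sign(true_dir)))
```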
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
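The bisection step above can be sketched generically: to find the smallest achievable delay under a monotone feasibility condition, halve the search interval until the optimum is bracketed. The feasibility predicate below is a stand-in, not the paper's actual constraint set.

```python
def bisection_min(feasible, lo, hi, tol=1e-6):
    """Smallest value t in [lo, hi] with feasible(t) True, assuming
    feasibility is monotone in t (infeasible below the optimum, feasible above)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid   # mid works; the optimum is at or below mid
        else:
            lo = mid   # mid too small; search the upper half
    return hi

# Stand-in feasibility check: a delay budget t is achievable iff t >= 0.37.
print(round(bisection_min(lambda t: t >= 0.37, 0.0, 1.0), 4))  # ~0.37
```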