Hierarchical Federated Learning in Multi-hop Cluster-Based VANETs
- URL: http://arxiv.org/abs/2401.10361v1
- Date: Thu, 18 Jan 2024 20:05:34 GMT
- Title: Hierarchical Federated Learning in Multi-hop Cluster-Based VANETs
- Authors: M. Saeid HaghighiFard and Sinem Coleri
- Abstract summary: This paper introduces a novel framework for hierarchical federated learning (HFL) over multi-hop clustering-based VANET.
The proposed method utilizes a weighted combination of the average relative speed and cosine similarity of FL model parameters as a clustering metric.
Extensive simulations demonstrate that the proposed hierarchical federated learning over clustered VANETs significantly improves accuracy and reduces convergence time.
- Score: 12.023861154677205
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of federated learning (FL) in Vehicular Ad hoc Networks
(VANETs) has garnered significant research interest because it reduces
transmission overhead and protects user privacy by communicating gradients
computed on local datasets instead of the raw data itself. However, implementing FL in VANETs faces
challenges, including limited communication resources, high vehicle mobility,
and the statistical diversity of data distributions. In order to tackle these
issues, this paper introduces a novel framework for hierarchical federated
learning (HFL) over multi-hop clustering-based VANET. The proposed method
utilizes a weighted combination of the average relative speed and cosine
similarity of FL model parameters as a clustering metric to consider both data
diversity and high vehicle mobility. This metric ensures convergence with
minimum changes in cluster heads while tackling the complexities associated
with non-independent and identically distributed (non-IID) data scenarios.
Additionally, the framework includes a novel mechanism to manage seamless
transitions of cluster heads (CHs), followed by the transfer of the most recent
FL model parameters to the newly designated CH. Furthermore, the proposed approach
considers the option of merging CHs, aiming to reduce their count and,
consequently, mitigate the associated overhead. Extensive simulations demonstrate
that the proposed hierarchical federated learning over clustered VANETs
significantly improves accuracy and reduces convergence time while maintaining
an acceptable level of packet overhead compared with previously proposed
clustering algorithms and a non-clustered VANET.
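
To make the clustering metric concrete, here is a minimal Python sketch of a weighted combination of average relative speed and cosine similarity of FL model parameters, as described in the abstract. The function names (`clustering_score`, `cosine_similarity`), the mixing weight `alpha`, and the min-max-style normalization of relative speed are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a CH-election score mixing mobility stability and model similarity.
# All names and the normalization scheme are assumptions for illustration.
import numpy as np

def cosine_similarity(w_i: np.ndarray, w_j: np.ndarray) -> float:
    """Cosine similarity between two flattened model-parameter vectors."""
    return float(np.dot(w_i, w_j) /
                 (np.linalg.norm(w_i) * np.linalg.norm(w_j) + 1e-12))

def clustering_score(speeds: np.ndarray, weights: np.ndarray,
                     i: int, alpha: float = 0.5) -> float:
    """Suitability of vehicle i as cluster head among its multi-hop neighbors.

    speeds:  (n,) vehicle speeds; weights: (n, d) flattened model parameters.
    Lower average relative speed (mobility stability) and higher average
    cosine similarity (data similarity under non-IID) both raise the score.
    """
    others = [j for j in range(len(speeds)) if j != i]
    avg_rel_speed = np.mean([abs(speeds[i] - speeds[j]) for j in others])
    avg_cos_sim = np.mean([cosine_similarity(weights[i], weights[j])
                           for j in others])
    # Normalize relative speed to [0, 1] before mixing (assumed normalization).
    max_rel = np.max(np.abs(speeds[:, None] - speeds[None, :])) + 1e-12
    return alpha * (1.0 - avg_rel_speed / max_rel) + (1.0 - alpha) * avg_cos_sim

# Elect the cluster head as the vehicle with the highest score.
speeds = np.array([22.0, 24.5, 23.1, 30.0])
weights = np.random.default_rng(0).normal(size=(4, 10))
ch = max(range(len(speeds)), key=lambda i: clustering_score(speeds, weights, i))
print(f"elected cluster head: vehicle {ch}")
```

Under this convention, a vehicle that moves at a speed close to its neighbors' and holds model parameters pointing in a similar direction scores highest, which is the mobility/data-similarity trade-off the metric is designed to capture.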
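The hierarchical part of the framework can likewise be sketched as two levels of aggregation: cluster heads average their members' models, and a top-level aggregator averages the cluster-head models. The plain size-weighted FedAvg rule and the variable names below are assumptions; the paper's exact aggregation and CH-handover protocol are not reproduced here.

```python
# Two-level (hierarchical) aggregation sketch: intra-cluster averaging at each
# cluster head, then inter-cluster averaging at an assumed top-level aggregator.
import numpy as np

def fedavg(models, sizes):
    """Weighted average of flattened model parameters by local dataset size."""
    total = sum(sizes)
    return sum(m * (s / total) for m, s in zip(models, sizes))

# Each cluster: (member model vectors, member dataset sizes) -- toy data.
clusters = [
    ([np.ones(3), 2 * np.ones(3)], [100, 300]),
    ([3 * np.ones(3)], [200]),
]
ch_models = [fedavg(ms, ss) for ms, ss in clusters]   # level 1: at each CH
ch_sizes = [sum(ss) for _, ss in clusters]
global_model = fedavg(ch_models, ch_sizes)            # level 2: across CHs
print(global_model)  # broadcast back down the hierarchy
```

In this picture, the CH-transition mechanism described in the abstract would amount to copying the current cluster-level model to the newly elected head before the next aggregation round, so no training progress is lost at handover.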
Related papers
- Interaction-Aware Gaussian Weighting for Clustered Federated Learning [58.92159838586751] (2025-02-05)
Federated Learning (FL) has emerged as a decentralized paradigm for training models while preserving privacy.
We propose a novel clustered FL method, FedGWC (Federated Gaussian Weighting Clustering), which groups clients based on their data distribution.
Our experiments on benchmark datasets show that FedGWC outperforms existing FL algorithms in cluster quality and classification accuracy.
- Decentralised Resource Sharing in TinyML: Wireless Bilayer Gossip Parallel SGD for Collaborative Learning [2.6913398550088483] (2025-01-08)
This paper proposes a novel framework, bilayer Gossip Decentralised Parallel Descent (GDD).
GDD addresses intermittent connectivity, limited communication range, and dynamic network topologies.
We evaluate the framework's performance against the Centralised Federated Learning (CFL) baseline.
- Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094] (2025-01-06)
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292] (2024-10-09)
We introduce Catalyst Acceleration and propose an accelerated decentralized federated learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-IID data distributions.
- Dual-Segment Clustering Strategy for Hierarchical Federated Learning in Heterogeneous Wireless Environments [22.35256018841889] (2024-05-15)
Non-independent and identically distributed (non-IID) data adversely affect federated learning (FL).
This paper proposes a novel dual-segment clustering (DSC) strategy that jointly addresses communication and data heterogeneity in FL.
The convergence analysis and experimental results show that the DSC strategy can improve the convergence rate of wireless FL.
- Rethinking Clustered Federated Learning in NOMA Enhanced Wireless Networks [60.09912912343705] (2024-03-05)
This study explores the benefits of integrating the novel clustered federated learning (CFL) approach with non-independent and identically distributed (non-IID) datasets.
A detailed theoretical analysis of the generalization gap that measures the degree of non-IID in the data distribution is presented.
Solutions to address the challenges posed by non-IID conditions are proposed along with an analysis of their properties.
- Clustered Data Sharing for Non-IID Federated Learning over Wireless Networks [39.80420645943706] (2023-02-17)
Federated Learning (FL) is a distributed machine learning approach that leverages data from the Internet of Things (IoT).
Current FL algorithms face the challenges of non-independent and identically distributed (non-IID) data, which causes high communication costs and declines in model accuracy.
We propose a clustered data sharing framework which spares the partial data from cluster heads to credible associates through device-to-device (D2D) communication.
- Clustered Federated Learning based on Nonconvex Pairwise Fusion [22.82565500426576] (2022-11-08)
We introduce a novel clustered FL setting called Fusion Clustering (FPFC).
FPFC can perform partial updates at each communication round and allows parallel computation with variable workload.
We also propose a new practical strategy for FPFC with general losses and robustness.
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215] (2021-11-28)
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002] (2021-07-14)
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057] (2021-05-26)
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.