Federated Learning Framework Coping with Hierarchical Heterogeneity in Cooperative ITS
- URL: http://arxiv.org/abs/2204.00215v1
- Date: Fri, 1 Apr 2022 05:33:54 GMT
- Title: Federated Learning Framework Coping with Hierarchical Heterogeneity in Cooperative ITS
- Authors: Rui Song, Liguo Zhou, Venkatnarayanan Lakshminarasimhan, Andreas Festag, Alois Knoll
- Abstract summary: We introduce a federated learning framework coping with Hierarchical Heterogeneity (H2-Fed).
The framework exploits data from connected public traffic agents in vehicular networks without affecting user data privacy.
- Score: 10.087704332539161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a federated learning framework coping with
Hierarchical Heterogeneity (H2-Fed), which can notably enhance the conventional
pre-trained deep learning model. The framework exploits data from connected
public traffic agents in vehicular networks without affecting user data
privacy. By coordinating existing traffic infrastructure, including roadside
units and road traffic clouds, the model parameters are efficiently
disseminated by vehicular communications and hierarchically aggregated.
Considering the individual heterogeneity in data distribution, computational and
communication capabilities across traffic agents and roadside units, we employ a
novel method that addresses the heterogeneity at the different aggregation
layers of the framework architecture, i.e., aggregation at the roadside-unit
and cloud layers. The experimental results indicate that our method balances
learning accuracy and stability well, given knowledge of the heterogeneity in
the current communication networks. Compared to other baseline approaches, the
evaluation on a Non-IID MNIST dataset shows that our framework is more general
and capable, especially in application scenarios with low communication
quality. Even when 80% of the agents are intermittently disconnected, the
pre-trained deep learning model still converges stably, and its accuracy is
enhanced from 68% to 93% after convergence.
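The abstract describes two aggregation layers: at the roadside units and at the cloud. Below is a minimal sketch of such two-level FedAvg-style aggregation; the function names, flat parameter vectors, and weighting by sample counts are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def fed_avg(params, weights):
    """Weighted average of a list of parameter vectors (FedAvg-style)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, params))

def hierarchical_aggregate(rsu_groups):
    """Two-level aggregation: vehicles -> roadside unit (RSU), RSUs -> cloud.

    rsu_groups: list of (vehicle_params, vehicle_sample_counts) per RSU.
    Returns the cloud-level global model.
    """
    rsu_models, rsu_weights = [], []
    for vehicle_params, counts in rsu_groups:
        rsu_models.append(fed_avg(vehicle_params, counts))  # RSU-layer aggregation
        rsu_weights.append(sum(counts))                     # RSU weight = total samples
    return fed_avg(rsu_models, rsu_weights)                 # cloud-layer aggregation

# Toy usage: 2 RSUs covering vehicles with heterogeneous data volumes.
rng = np.random.default_rng(0)
groups = [
    ([rng.normal(size=4) for _ in range(3)], [100, 50, 10]),
    ([rng.normal(size=4) for _ in range(2)], [200, 20]),
]
print(hierarchical_aggregate(groups))
```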
Related papers
- On the Federated Learning Framework for Cooperative Perception [28.720571541022245]
Federated learning offers a promising solution by enabling data privacy-preserving collaborative enhancements in perception, decision-making, and planning among connected and autonomous vehicles.
This study introduces a specialized federated learning framework for CP, termed the federated dynamic weighted aggregation (FedDWA) algorithm.
This framework employs dynamic client weighting to direct model convergence and integrates a novel loss function that utilizes Kullback-Leibler divergence (KLD) to counteract the detrimental effects of non-independent and identically distributed (Non-IID) and unbalanced data.
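As a rough illustration of KLD-driven client weighting, the sketch below down-weights clients whose label distribution diverges from the global one; the weighting rule is an assumption for illustration, not FedDWA's exact update.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def dynamic_client_weights(client_label_dists, global_label_dist):
    """Down-weight clients whose label distribution drifts from the global one."""
    divs = np.array([kl_divergence(d, global_label_dist) for d in client_label_dists])
    w = 1.0 / (1.0 + divs)  # illustrative weighting rule, not FedDWA's exact form
    return w / w.sum()

# Toy usage: three clients over a 3-class problem.
global_dist = [0.4, 0.4, 0.2]
clients = [[0.4, 0.4, 0.2], [0.9, 0.05, 0.05], [0.3, 0.5, 0.2]]
print(dynamic_client_weights(clients, global_dist))
```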
arXiv Detail & Related papers (2024-04-26T04:34:45Z)
- Generalizing Differentially Private Decentralized Deep Learning with Multi-Agent Consensus [11.414398732656839]
We propose a framework that embeds differential privacy into decentralized deep learning and secures each agent's local dataset during and after cooperative training.
We prove convergence guarantees for algorithms derived from this framework and demonstrate its practical utility when applied to subgradient and ADMM decentralized approaches.
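A minimal sketch of the generic pattern the summary describes: clip-and-noise on each agent's local update, followed by average-consensus mixing with neighbors. The clipping rule, noise scale, and mixing step are illustrative assumptions, not the paper's calibrated mechanism.

```python
import numpy as np

def dp_local_update(grad, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Clip a local gradient to bound its sensitivity, then add Gaussian
    noise before sharing it with neighbors (Gaussian-mechanism pattern;
    parameters here are illustrative, not tuned to a privacy budget)."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / max(np.linalg.norm(grad), 1e-12))
    return grad * scale + rng.normal(scale=noise_mult * clip_norm, size=grad.shape)

def consensus_step(my_params, neighbor_params, step=0.5):
    """Average-consensus mixing with neighbors' (noised) parameters."""
    return (1 - step) * my_params + step * np.mean(neighbor_params, axis=0)
```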
arXiv Detail & Related papers (2023-06-24T07:46:00Z)
- SARN: Structurally-Aware Recurrent Network for Spatio-Temporal Disaggregation [8.636014676778682]
Open data is frequently released in spatially aggregated form, usually to comply with privacy policies, but coarse, heterogeneous aggregations complicate coherent learning and integration for downstream AI/ML systems.
We propose an overarching model named Structurally-Aware Recurrent Network (SARN), which integrates structurally-aware spatial attention layers into the Gated Recurrent Unit (GRU) model.
For scenarios with limited historical training data, we show that a model pre-trained on one city variable can be fine-tuned for another city variable using only a few hundred samples.
arXiv Detail & Related papers (2023-06-09T21:01:29Z)
- FedPNN: One-shot Federated Classification via Evolving Clustering Method and Probabilistic Neural Network hybrid [4.241208172557663]
We propose a two-stage federated learning approach toward the objective of privacy protection.
In the first stage, a synthetic dataset is generated by employing two different distributions as noise.
In the second stage, the Federated Probabilistic Neural Network (FedPNN) is developed and employed for building a globally shared classification model.
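The summary does not spell out the PNN details; for background, a classical Probabilistic Neural Network classifies via class-conditional Parzen-window density estimates, as in this minimal sketch (the bandwidth and toy data are assumptions, not the paper's setup).

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Minimal Probabilistic Neural Network (Parzen-window) classifier:
    score each class by the mean Gaussian kernel between x and that
    class's training points, then pick the argmax."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy usage on two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(pnn_predict(X, y, np.array([3.5, 3.8])))  # expected: 1
```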
arXiv Detail & Related papers (2023-04-09T03:23:37Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
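A toy model of analog over-the-air computation, omitting fading and power control: simultaneous transmissions superpose on the multiple-access channel, so the server obtains a noisy sum, and hence average, of all client updates in a single slot. The noise level below is an assumption.

```python
import numpy as np

def ota_aggregate(client_updates, noise_std=0.01, rng=None):
    """Analog over-the-air aggregation: simultaneous analog transmissions
    superpose on the channel, so the server receives the sum of all
    updates plus receiver noise in one slot."""
    rng = rng or np.random.default_rng()
    superposed = np.sum(client_updates, axis=0)  # the channel adds the signals
    received = superposed + rng.normal(scale=noise_std, size=superposed.shape)
    return received / len(client_updates)        # noisy estimate of the average

updates = [np.ones(3) * k for k in (1.0, 2.0, 3.0)]
print(ota_aggregate(updates))  # approximately [2., 2., 2.]
```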
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating the locally-computed parameter updates with the training data from spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as medical healthcare, computer vision, and the Internet of Things (IoT).
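In the spirit of the weighted geometric mean the title mentions, here is a sketch of element-wise geometric-mean gradient aggregation that suppresses directions the silos disagree on; FedILC's actual update also involves the gradient covariance and Hessian terms, which this omits.

```python
import numpy as np

def geometric_mean_gradients(grads, eps=1e-12):
    """Element-wise geometric mean of per-silo gradients: the sign is kept
    only where all silos agree, and the magnitude is the geometric mean of
    |g|. Coordinates the silos disagree on are zeroed out."""
    G = np.stack(grads)
    sign = np.sign(np.sum(np.sign(G), axis=0))   # majority sign per coordinate
    agree = np.all(np.sign(G) == sign, axis=0)   # unanimous coordinates only
    log_mag = np.mean(np.log(np.abs(G) + eps), axis=0)
    return np.where(agree, sign * np.exp(log_mag), 0.0)

g = [np.array([0.2, -0.1, 0.3]), np.array([0.4, 0.2, 0.3])]
print(geometric_mean_gradients(g))  # middle coordinate zeroed: signs disagree
```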
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- Semi-asynchronous Hierarchical Federated Learning for Cooperative Intelligent Transportation Systems [10.257042901204528]
Cooperative Intelligent Transport System (C-ITS) is a promising network to provide safety, efficiency, sustainability, and comfortable services for automated vehicles and road infrastructures.
The components of C-ITS usually generate large amounts of data, which makes it difficult to apply data-science methods.
We propose a novel Semi-asynchronous Hierarchical Federated Learning (SHFL) framework for C-ITS that enables elastic edge-to-cloud model aggregation from data sensing.
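A minimal sketch of a semi-asynchronous round: only the edge updates that arrive before a deadline are aggregated, and the global model moves toward their average in proportion to the participation rate. The update rule is an illustrative assumption, not SHFL's exact mechanism.

```python
import numpy as np

def semi_async_round(prev_model, fresh_updates, n_total):
    """One semi-asynchronous round: average only the updates that arrived
    before the deadline; stragglers implicitly keep the previous model's
    weight (illustrative rule)."""
    if not fresh_updates:
        return prev_model                    # nobody reported; keep the old model
    alpha = len(fresh_updates) / n_total     # fraction of agents that reported
    return (1 - alpha) * prev_model + alpha * np.mean(fresh_updates, axis=0)
```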
arXiv Detail & Related papers (2021-10-18T07:44:34Z)
- Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning [53.73083199055093]
We show that attention-based architectures (e.g., Transformers) are fairly robust to distribution shifts.
Our experiments show that replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices.
arXiv Detail & Related papers (2021-06-10T21:04:18Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model-training mechanism for building a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
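A sketch of the random partial-participation round described above; the participation rate, learning rate, and plain gradient averaging are illustrative assumptions, separate from the paper's tracking analysis.

```python
import numpy as np

def dynamic_fl_round(global_model, agent_grads, participation=0.2, lr=0.1, rng=None):
    """One round with random partial participation: a random subset of the
    available agents contributes local gradients; the server averages only
    those and takes a gradient step."""
    rng = rng or np.random.default_rng()
    n = len(agent_grads)
    active = rng.choice(n, size=max(1, int(participation * n)), replace=False)
    avg_grad = np.mean([agent_grads[i] for i in active], axis=0)
    return global_model - lr * avg_grad
```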
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.