Federated Learning with Graph-Based Aggregation for Traffic Forecasting
- URL: http://arxiv.org/abs/2507.09805v1
- Date: Sun, 13 Jul 2025 21:41:42 GMT
- Title: Federated Learning with Graph-Based Aggregation for Traffic Forecasting
- Authors: Audri Banik, Glaucio Haroldo Silva de Carvalho, Renata Dividino
- Abstract summary: Federated Learning (FL) is a suitable approach for collaboratively training models without sharing raw data. Standard FL methods, such as Federated Averaging (FedAvg), assume that clients are independent. We propose a lightweight graph-aware FL approach that blends the simplicity of FedAvg with key ideas from graph learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In traffic prediction, the goal is to estimate traffic speed or flow in specific regions or road segments using historical data collected by devices deployed in each area. Each region or road segment can be viewed as an individual client that measures local traffic flow, making Federated Learning (FL) a suitable approach for collaboratively training models without sharing raw data. In centralized FL, a central server collects and aggregates model updates from multiple clients to build a shared model while preserving each client's data privacy. Standard FL methods, such as Federated Averaging (FedAvg), assume that clients are independent, which can limit performance in traffic prediction tasks where spatial relationships between clients are important. Federated Graph Learning methods can capture these dependencies during server-side aggregation, but they often introduce significant computational overhead. In this paper, we propose a lightweight graph-aware FL approach that blends the simplicity of FedAvg with key ideas from graph learning. Rather than training full models, our method applies basic neighbourhood aggregation principles to guide parameter updates, weighting client models based on graph connectivity. This approach captures spatial relationships effectively while remaining computationally efficient. We evaluate our method on two benchmark traffic datasets, METR-LA and PEMS-BAY, and show that it achieves competitive performance compared to standard baselines and recent graph-based federated learning techniques.
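To make the aggregation rule concrete, below is a minimal NumPy sketch of graph-aware server-side averaging in the spirit the abstract describes; the adjacency matrix, the mixing coefficient `alpha`, and the function name are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def graph_aware_aggregate(client_weights, adjacency, alpha=0.5):
    """Blend plain FedAvg with one step of neighbourhood aggregation.

    client_weights: list of flattened model parameter vectors, one per client.
    adjacency:      (n, n) nonnegative matrix; adjacency[i, j] > 0 when
                    clients i and j are connected road segments/regions.
    alpha:          illustrative mixing coefficient between the global
                    FedAvg model and each client's neighbourhood average.
    """
    W = np.stack(client_weights)          # (n_clients, n_params)
    global_avg = W.mean(axis=0)           # the plain FedAvg component

    # Row-normalize the adjacency so each client averages its neighbours'
    # models with weights proportional to graph connectivity.
    A = np.asarray(adjacency, dtype=float)
    row_sums = A.sum(axis=1, keepdims=True)
    P = np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)
    neighbour_avg = P @ W                 # (n_clients, n_params)

    # Each client receives a blend of the shared model and its
    # graph-weighted neighbourhood model.
    return alpha * global_avg + (1 - alpha) * neighbour_avg
```

The only overhead relative to plain FedAvg is the matrix product `P @ W`, which is consistent with the paper's emphasis on computational efficiency.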
Related papers
- Personalized Federated Learning via Learning Dynamic Graphs [4.569473641235369]
We propose a novel method, Personalized Federated Learning with Graph Attention Network (pFedGAT).
pFedGAT captures the latent graph structure between clients and dynamically determines the importance of other clients for each client, enabling fine-grained control over the aggregation process.
We evaluate pFedGAT across multiple data distribution scenarios, comparing it with twelve state-of-the-art methods on three datasets.
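As a rough illustration of attention-weighted aggregation, the toy sketch below scores clients by parameter similarity and mixes models with softmax weights; pFedGAT itself learns these scores with a graph attention network, so the cosine scoring here is a stand-in assumption.

```python
import numpy as np

def attention_aggregate(client_weights):
    """Toy attention-style aggregation: score every pair of clients by
    cosine similarity of their parameters, then mix models per client
    with row-wise softmax weights."""
    W = np.stack(client_weights)                        # (n_clients, d)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)   # unit-normalize
    scores = Wn @ Wn.T                                  # pairwise similarity
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = scores / scores.sum(axis=1, keepdims=True)   # softmax per row
    return attn @ W                     # one personalized model per client
```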
arXiv Detail & Related papers (2025-03-07T14:47:03Z)
- Federated Learning for Traffic Flow Prediction with Synthetic Data Augmentation [10.751702067716804]
This work introduces an FL framework, referred to as FedTPS, which augments each client's local dataset with synthetic data generated by a trajectory generation model trained through FL.
The proposed framework is evaluated on a large-scale real-world ride-sharing dataset using various FL methods and traffic flow prediction models.
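A hedged sketch of the client-side half of this two-stage idea, with `clients`, `generator`, and their methods as hypothetical placeholders (FedTPS first trains the generator itself through FL, which is omitted here):

```python
import numpy as np

def fedtps_style_round(clients, generator, global_model, n_synthetic=64):
    """One illustrative round: augment each client's data with generator
    samples, train locally, then average the results with plain FedAvg.
    `clients`, `generator`, and their methods are hypothetical placeholders;
    `client.train` is assumed to return a NumPy parameter vector."""
    local_models = []
    for client in clients:
        augmented = list(client.local_data) + [
            generator.sample() for _ in range(n_synthetic)
        ]
        local_models.append(client.train(global_model, augmented))
    return np.mean(np.stack(local_models), axis=0)   # FedAvg step
```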
arXiv Detail & Related papers (2024-12-11T15:25:38Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Efficient Cross-Domain Federated Learning by MixStyle Approximation [0.3277163122167433]
We introduce a privacy-preserving, resource-efficient Federated Learning concept for client adaptation in hardware-constrained environments.
Our approach includes server model pre-training on source data and subsequent fine-tuning on target data via low-end clients.
Preliminary results indicate that our method reduces computational and transmission costs while maintaining competitive performance on downstream tasks.
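The MixStyle operation referenced in the entry's title mixes channel-wise feature statistics between instances in a batch; below is a generic NumPy sketch of that underlying technique, not of the authors' approximation of it.

```python
import numpy as np

def mixstyle(features, alpha=0.1, eps=1e-6):
    """Mix channel-wise feature statistics between random instance pairs
    in a batch of CNN feature maps with shape (N, C, H, W)."""
    n = features.shape[0]
    mu = features.mean(axis=(2, 3), keepdims=True)        # (N, C, 1, 1)
    sig = features.std(axis=(2, 3), keepdims=True) + eps
    normalized = (features - mu) / sig                    # strip "style"
    lam = np.random.beta(alpha, alpha, size=(n, 1, 1, 1)) # mixing weights
    perm = np.random.permutation(n)                       # random partner
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return normalized * sig_mix + mu_mix                  # re-style content
```

Because only first- and second-order statistics are mixed, the operation is cheap, which is what makes an approximation of it attractive on low-end clients.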
arXiv Detail & Related papers (2023-12-12T08:33:34Z)
- One-Shot Heterogeneous Federated Learning with Local Model-Guided Diffusion Models [40.83058938096914]
FedLMG is a one-shot Federated Learning method with Local Model-Guided diffusion models.
Clients do not need access to any foundation models but only train and upload their local models.
arXiv Detail & Related papers (2023-11-15T11:11:25Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
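A minimal sketch of one such contrastive distillation objective, assuming clients exchange representations of a common batch; the diagonal pairing of positives and the exact loss form are assumptions, not the paper's precise formulation.

```python
import numpy as np

def contrastive_distillation_loss(local_reps, peer_reps, temperature=0.5):
    """InfoNCE-style loss: each local representation should match the peer
    representation of the same shared sample (diagonal positives) and
    repel the representations of all other samples."""
    a = local_reps / np.linalg.norm(local_reps, axis=1, keepdims=True)
    b = peer_reps / np.linalg.norm(peer_reps, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature                 # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```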
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered federated learning has been shown to produce promising results by grouping clients into clusters.
Existing FL algorithms essentially try to group together clients with similar data distributions.
Prior FL algorithms, however, estimate these similarities only indirectly during training.
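As a sketch of the subspace idea, each client can summarize its data by the span of its top right singular vectors, and the server can compare two clients via principal angles; the truncation rank `p` and function name below are illustrative choices.

```python
import numpy as np

def subspace_similarity(data_i, data_j, p=3):
    """Compare two clients' data subspaces via principal angles.

    Each data matrix has shape (n_samples, d). The cosines of the
    principal angles between the two spans are the singular values of
    Ui^T Uj; a larger sum means more similar data distributions."""
    Ui = np.linalg.svd(data_i, full_matrices=False)[2][:p].T   # (d, p) basis
    Uj = np.linalg.svd(data_j, full_matrices=False)[2][:p].T
    cosines = np.linalg.svd(Ui.T @ Uj, compute_uv=False)
    return cosines.sum()
```

In a clustered-FL setting, the server would then group clients whose pairwise similarity is high, without ever touching raw data.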
arXiv Detail & Related papers (2022-09-21T17:37:54Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
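Under a diagonal Laplace approximation each client posterior is roughly Gaussian, and a product-of-Gaussians fusion yields a precision-weighted server update; the sketch below illustrates that general Bayesian idea under those assumptions, not necessarily the paper's exact algorithm.

```python
import numpy as np

def aggregate_gaussian_posteriors(means, precisions):
    """Fuse per-client Gaussian posteriors N(mean_k, diag(1/precision_k))
    by multiplying them: the product is Gaussian with summed precisions
    and a precision-weighted mean."""
    P = np.stack(precisions)                # (K, d) diagonal precisions
    M = np.stack(means)                     # (K, d) posterior means
    global_precision = P.sum(axis=0)
    global_mean = (P * M).sum(axis=0) / global_precision
    return global_mean, global_precision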
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
- Communication-Efficient Federated Learning via Optimal Client Sampling [20.757477553095637]
Federated learning (FL) ameliorates privacy concerns in settings where a central server coordinates learning from data distributed across many clients.
We propose a novel, simple and efficient way of updating the central model in communication-constrained settings.
We test this policy on a synthetic dataset for logistic regression and two FL benchmarks, namely, a classification task on EMNIST and a realistic language modeling task.
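One plausible reading of such a policy is importance sampling with probabilities proportional to update norms, rescaled so the expectation matches the full FedAvg update; the sketch below illustrates that generic idea, and the paper's actual policy may differ.

```python
import numpy as np

def norm_weighted_client_sampling(updates, m, rng=None):
    """Sample m client updates with probability proportional to their
    norms, rescaling so the estimator's expectation equals the full
    FedAvg update (Horvitz-Thompson style)."""
    rng = rng or np.random.default_rng()
    n = len(updates)
    norms = np.array([np.linalg.norm(u) for u in updates])
    probs = norms / norms.sum()
    chosen = rng.choice(n, size=m, replace=True, p=probs)
    return sum(updates[i] / probs[i] for i in chosen) / (n * m)
```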
arXiv Detail & Related papers (2020-07-30T02:58:00Z)