Unlocking Dynamic Inter-Client Spatial Dependencies: A Federated Spatio-Temporal Graph Learning Method for Traffic Flow Forecasting
- URL: http://arxiv.org/abs/2511.10434v1
- Date: Fri, 14 Nov 2025 01:51:19 GMT
- Title: Unlocking Dynamic Inter-Client Spatial Dependencies: A Federated Spatio-Temporal Graph Learning Method for Traffic Flow Forecasting
- Authors: Feng Wang, Tianxiang Chen, Shuyue Wei, Qian Chu, Yi Zhang, Yifan Sun, Zhiming Zheng
- Abstract summary: Federated Spatio-Temporal Graph with Dynamic Inter-Client Dependencies (FedSTGD) is a framework designed to model and reconstruct dynamic inter-client spatial dependencies in federated learning. Experiments on four real-world datasets demonstrate that FedSTGD achieves superior performance over state-of-the-art baselines.
- Score: 13.035721364418471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatio-temporal graphs are powerful tools for modeling complex dependencies in traffic time series. However, the distributed nature of real-world traffic data across multiple stakeholders poses significant challenges in modeling and reconstructing inter-client spatial dependencies while adhering to data locality constraints. Existing methods primarily address static dependencies, overlooking their dynamic nature and resulting in suboptimal performance. In response, we propose Federated Spatio-Temporal Graph with Dynamic Inter-Client Dependencies (FedSTGD), a framework designed to model and reconstruct dynamic inter-client spatial dependencies in federated learning. FedSTGD incorporates a federated nonlinear computation decomposition module to approximate complex graph operations. This is complemented by a graph node embedding augmentation module, which alleviates performance degradation arising from the decomposition. These modules are coordinated through a client-server collective learning protocol, which decomposes dynamic inter-client spatial dependency learning tasks into lightweight, parallelizable subtasks. Extensive experiments on four real-world datasets demonstrate that FedSTGD achieves superior performance over state-of-the-art baselines in terms of RMSE, MAE, and MAPE, approaching that of centralized baselines. Ablation studies confirm the contribution of each module in addressing dynamic inter-client spatial dependencies, while sensitivity analysis highlights the robustness of FedSTGD to variations in hyperparameters.
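The abstract describes decomposing a nonlinear inter-client graph operation into lightweight, parallelizable client-side subtasks coordinated by a server. The sketch below illustrates that general idea only; the paper's actual decomposition and protocol are not specified here, so the `Client`, `affinity`, `aggregate`, and `server_normalize` names and the embedding-exchange scheme are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch (not the paper's algorithm): dynamic inter-client
# spatial dependencies are approximated by exchanging low-dimensional node
# embeddings instead of raw traffic series, splitting one nonlinear graph
# operation into linear client-side subtasks plus a small server-side step.
class Client:
    def __init__(self, n_nodes, d_embed, d_feat):
        self.feats = rng.normal(size=(n_nodes, d_feat))   # private node features
        self.embed = rng.normal(size=(n_nodes, d_embed))  # shareable embeddings

    def affinity(self, other_embed):
        # Linear, parallelizable subtask: raw affinity between this client's
        # nodes and another client's embedded nodes.
        return self.embed @ other_embed.T

    def aggregate(self, attn):
        # Cross-client message passing computed locally, so private features
        # never leave this client; only aggregated messages are returned.
        return attn @ self.feats

def server_normalize(scores):
    # The nonlinear step (row-wise softmax) acts on affinity scores only,
    # never on raw data -- one way to decompose a nonlinear graph operation.
    z = np.exp(scores - scores.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

a, b = Client(4, 8, 3), Client(5, 8, 3)
attn_ab = server_normalize(a.affinity(b.embed))  # dynamic 4x5 dependency matrix
messages_for_a = b.aggregate(attn_ab)            # cross-client messages, 4x3
```

The design point being illustrated: only embeddings, affinity scores, and aggregated messages cross client boundaries, while raw traffic features stay local, mirroring the data-locality constraint the abstract emphasizes.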
Related papers
- Generative Data Transformation: From Mixed to Unified Data [57.84692191369066]
Taesar is a data-centric framework for target regeneration. It encodes cross-domain context into target sequences, enabling standard models to learn intricate dependencies without complex fusion architectures.
arXiv Detail & Related papers (2026-02-26T08:30:09Z) - Integrating Temporal and Structural Context in Graph Transformers for Relational Deep Learning [6.780460717949915]
We introduce a temporal subgraph sampler that enhances global context by retrieving nodes beyond the immediate neighborhood. We also propose the Relational Graph Perceiver (RGP), a graph transformer architecture for relational deep learning. RGP delivers state-of-the-art performance, offering a general and scalable solution for relational deep learning.
arXiv Detail & Related papers (2025-11-06T17:08:21Z) - Efficient High-Resolution Visual Representation Learning with State Space Model for Human Pose Estimation [60.80423207808076]
Capturing long-range dependencies while preserving high-resolution visual representations is crucial for dense prediction tasks such as human pose estimation. We propose the Dynamic Visual State Space (DVSS) block, which augments visual state space models with multi-scale convolutional operations. We build HRVMamba, a novel model for efficient high-resolution representation learning.
arXiv Detail & Related papers (2024-10-04T06:19:29Z) - A Plug-and-Play Method for Rare Human-Object Interactions Detection by Bridging Domain Gap [50.079224604394]
We present a novel model-agnostic framework called Context-Enhanced Feature Alignment (CEFA).
CEFA consists of a feature alignment module and a context enhancement module.
Our method can serve as a plug-and-play module to improve the detection performance of HOI models on rare categories.
arXiv Detail & Related papers (2024-07-31T08:42:48Z) - StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z) - FedASTA: Federated adaptive spatial-temporal attention for traffic flow prediction [30.346763969306398]
Mobile and Internet of Things (IoT) devices now generate large amounts of heterogeneous spatial-temporal data.
Modeling spatial-temporal dynamics under privacy constraints remains a challenging problem.
We propose a novel Federated Adaptive spatial-temporal Attention (FedASTA) framework to model the dynamic spatial-temporal relations.
arXiv Detail & Related papers (2024-05-21T11:44:07Z) - Fully-Connected Spatial-Temporal Graph for Multivariate Time-Series Data [50.84488941336865]
We propose a novel method called Fully-Connected Spatial-Temporal Graph Neural Network (FC-STGNN).
For graph construction, we design a decay graph to connect sensors across all timestamps based on their temporal distances.
For graph convolution, we devise FC graph convolution with a moving-pooling GNN layer to effectively capture the ST dependencies for learning effective representations.
arXiv Detail & Related papers (2023-09-11T08:44:07Z) - Attention-based Spatial-Temporal Graph Convolutional Recurrent Networks for Traffic Forecasting [12.568905377581647]
Traffic forecasting is one of the most fundamental problems in transportation science and artificial intelligence.
Existing methods cannot accurately model both long-term and short-term temporal correlations simultaneously.
We propose a novel spatial-temporal neural network framework, which consists of a graph convolutional recurrent module (GCRN) and a global attention module.
arXiv Detail & Related papers (2023-02-25T03:37:00Z) - Spatial-Temporal Graph Convolutional Gated Recurrent Network for Traffic Forecasting [3.9761027576939414]
We propose a novel framework for traffic forecasting, named Spatial-Temporal Graph Convolutional Gated Recurrent Network (STGCGRN).
We design an attention module to capture long-term dependency by mining periodic information in traffic data.
Experiments on four datasets demonstrate the superior performance of our model.
arXiv Detail & Related papers (2022-10-06T08:02:20Z) - Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
arXiv Detail & Related papers (2022-10-01T09:04:17Z) - Spatial-Temporal Fusion Graph Neural Networks for Traffic Flow Forecasting [35.072979313851235]
Spatial-temporal forecasting of traffic flow is a challenging task because of complicated spatial dependencies and dynamic trends in the temporal patterns of different roads.
Existing frameworks typically utilize given spatial adjacency graph and sophisticated mechanisms for modeling spatial and temporal correlations.
This paper proposes Spatial-Temporal Fusion Graph Neural Networks (STFGNN) for traffic flow forecasting.
arXiv Detail & Related papers (2020-12-15T14:03:17Z) - Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer of the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors: the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
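The Dynamic Federated Learning entry above describes a model in which, at every iteration, a random subset of agents performs local updates on its own data before aggregation. A minimal toy sketch of that participation pattern, using a simple quadratic local loss; all names, the loss, and the parameters here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def local_grad(w, data_mean):
    # Gradient of 0.5 * ||w - data_mean||^2, a stand-in for an agent's
    # local loss; its minimizer is simply data_mean.
    return w - data_mean

def dynamic_fedavg(agent_means, n_rounds=200, participate=0.5, lr=0.1):
    """Each round, a random subset of agents takes one local gradient step
    from the current global model; the server averages the results."""
    n_agents, dim = agent_means.shape
    w = np.zeros(dim)
    for _ in range(n_rounds):
        chosen = rng.random(n_agents) < participate  # random participation
        if not chosen.any():
            continue
        local_models = [w - lr * local_grad(w, m) for m in agent_means[chosen]]
        w = np.mean(local_models, axis=0)
    return w

means = rng.normal(size=(10, 3))   # each agent's local minimizer
w_final = dynamic_fedavg(means)
# The averaged model hovers near the mean of the agents' minimizers (the
# aggregate optimum for this toy quadratic), with residual wander driven by
# the random participation -- echoing the variability factors cited above.
```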
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.