SyncFed: Time-Aware Federated Learning through Explicit Timestamping and Synchronization
- URL: http://arxiv.org/abs/2506.09660v1
- Date: Wed, 11 Jun 2025 12:29:20 GMT
- Title: SyncFed: Time-Aware Federated Learning through Explicit Timestamping and Synchronization
- Authors: Baran Can Gül, Stefanos Tziampazis, Nasser Jazdi, Michael Weyrich
- Abstract summary: SyncFed is a time-aware FL framework that employs explicit synchronization and timestamping to establish a common temporal reference across the system. Staleness is quantified numerically based on exchanged timestamps under the Network Time Protocol (NTP), enabling the server to reason about the relative freshness of client updates. Our empirical evaluation on a geographically distributed testbed shows that, under SyncFed, the global model evolves within a stable temporal context, resulting in improved accuracy and information freshness.
- Score: 0.6749750044497732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Federated Learning (FL) expands to larger and more distributed environments, consistency in training is challenged by network-induced delays, clock unsynchronicity, and variability in client updates. This combination of factors may contribute to misaligned contributions that undermine model reliability and convergence. Existing methods like staleness-aware aggregation and model versioning address lagging updates heuristically, yet lack mechanisms to quantify staleness, especially in latency-sensitive and cross-regional deployments. In light of these considerations, we introduce \emph{SyncFed}, a time-aware FL framework that employs explicit synchronization and timestamping to establish a common temporal reference across the system. Staleness is quantified numerically based on exchanged timestamps under the Network Time Protocol (NTP), enabling the server to reason about the relative freshness of client updates and apply temporally informed weighting during aggregation. Our empirical evaluation on a geographically distributed testbed shows that, under \emph{SyncFed}, the global model evolves within a stable temporal context, resulting in improved accuracy and information freshness compared to round-based baselines devoid of temporal semantics.
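The abstract describes staleness quantified from NTP-synchronized timestamps and "temporally informed weighting during aggregation", but does not state the exact weighting rule. The following is a minimal sketch, assuming an exponential decay in staleness; the function names, the decay constant, and the exponential form are illustrative assumptions, not SyncFed's published method.

```python
import time
from typing import List, Tuple

import numpy as np


def staleness_weight(update_ts: float, server_ts: float, decay: float = 0.1) -> float:
    """Map an NTP-synchronized client timestamp to an aggregation weight.

    Staleness is the elapsed time in seconds between the client's update
    timestamp and the server's clock; older updates receive exponentially
    smaller weights. The exponential form and decay constant are
    illustrative assumptions, not SyncFed's exact rule.
    """
    staleness = max(0.0, server_ts - update_ts)
    return float(np.exp(-decay * staleness))


def temporally_weighted_average(
    client_updates: List[Tuple[np.ndarray, float]], server_ts: float
) -> np.ndarray:
    """Aggregate (parameters, ntp_timestamp) pairs into a new global model."""
    weights = np.array([staleness_weight(ts, server_ts) for _, ts in client_updates])
    weights /= weights.sum()  # normalize so the weights sum to 1
    stacked = np.stack([params for params, _ in client_updates])
    return (weights[:, None] * stacked).sum(axis=0)


# Usage: three clients whose updates are 1 s, 5 s, and 30 s old.
now = time.time()
updates = [
    (np.full(4, 1.0), now - 1.0),
    (np.full(4, 2.0), now - 5.0),
    (np.full(4, 3.0), now - 30.0),
]
print(temporally_weighted_average(updates, now))
```

With this choice of weighting, the freshest update dominates the average while stale updates still contribute, which matches the abstract's intent of temporally informed aggregation rather than hard exclusion of lagging clients.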
Related papers
- Multivariate Long-term Time Series Forecasting with Fourier Neural Filter [55.09326865401653]
We introduce FNF as the backbone and DBD as the architecture to provide excellent learning capabilities and optimal learning pathways for spatial-temporal modeling. We show that FNF unifies local time-domain and global frequency-domain information processing within a single backbone that extends naturally to spatial modeling.
arXiv Detail & Related papers (2025-06-10T18:40:20Z) - Adaptive Deadline and Batch Layered Synchronized Federated Learning [66.93447103966439]
Federated learning (FL) enables collaborative model training across distributed edge devices while preserving data privacy, and typically operates in a round-based synchronous manner. We propose ADEL-FL, a novel framework that jointly optimizes per-round deadlines and user-specific batch sizes for layer-wise aggregation.
arXiv Detail & Related papers (2025-05-29T19:59:18Z) - FRAIN to Train: A Fast-and-Reliable Solution for Decentralized Federated Learning [1.1510009152620668]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data locality. We introduce Fast-and-Reliable AI Network, FRAIN, a new FL method that mitigates these limitations by incorporating two key ideas. Experiments with a CNN image-classification model and a Transformer-based language model demonstrate that FRAIN achieves more stable and robust convergence than FedAvg, FedAsync, and BRAIN.
arXiv Detail & Related papers (2025-05-07T08:20:23Z) - STRGCN: Capturing Asynchronous Spatio-Temporal Dependencies for Irregular Multivariate Time Series Forecasting [14.156419219696252]
STRGCN captures the complex interdependencies in IMTS by representing them as a fully connected graph. Experiments on four public datasets demonstrate that STRGCN achieves state-of-the-art accuracy, competitive memory usage and training speed.
arXiv Detail & Related papers (2025-05-07T06:41:33Z) - DRAN: A Distribution and Relation Adaptive Network for Spatio-temporal Forecasting [19.064628208136273]
We propose a Distribution and Relation Adaptive Network (DRAN) capable of adapting to distribution and relation changes over time. We show that our SFL efficiently preserves spatial relationships across various temporal operations. Our approach outperforms state-of-the-art methods in weather prediction and traffic flow forecasting.
arXiv Detail & Related papers (2025-04-02T09:18:43Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z) - FAVANO: Federated AVeraging with Asynchronous NOdes [14.412305295989444]
We propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in resource-constrained environments.
arXiv Detail & Related papers (2023-05-25T14:30:17Z) - Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting (see the illustrative sketch after this list).
arXiv Detail & Related papers (2022-12-14T17:33:01Z) - Gated Recurrent Neural Networks with Weighted Time-Delay Feedback [55.596897987498174]
We present a novel approach to modeling long-term dependencies in sequential data by introducing a gated recurrent unit (GRU) with a weighted time-delay feedback mechanism. Our proposed model, named $\tau$-GRU, is a discretized version of a continuous-time formulation of a recurrent unit, where the dynamics are governed by delay differential equations (DDEs).
arXiv Detail & Related papers (2022-12-01T02:26:34Z) - AsyncFedED: Asynchronous Federated Learning with Euclidean Distance based Adaptive Weight Aggregation [17.57059932879715]
In an asynchronous learning framework, a server updates the global model once it receives an update from a client, instead of waiting for all the updates to arrive as in the synchronous setting.
An adaptive weight aggregation algorithm, referred to as AsyncFedED, is proposed.
arXiv Detail & Related papers (2022-05-27T07:18:11Z) - Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning [72.78668894576515]
Federated Learning (FL) is a newly emerged decentralized machine learning (ML) framework.
We propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems.
arXiv Detail & Related papers (2021-07-23T18:57:08Z)
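Several of the asynchronous FL entries above (notably the "age-aware" aggregation weighting over wireless networks) discount late updates by their age, but the summaries do not give the weighting function. Below is a minimal sketch of one common polynomial staleness discount; the (1 + age)^(-alpha) form, the alpha parameter, and the function names are assumptions for illustration, not the cited papers' exact schemes.

```python
import numpy as np


def age_aware_weight(age_in_rounds: int, alpha: float = 0.5) -> float:
    """Polynomial staleness discount: weight = (1 + age)^(-alpha).

    This particular form is an assumption borrowed from common staleness
    functions in asynchronous FL; the cited papers' weighting may differ.
    """
    return (1.0 + age_in_rounds) ** (-alpha)


def periodic_aggregate(global_params, buffered_updates):
    """Aggregate whatever client updates arrived during one aggregation period.

    buffered_updates: list of (client_params, age_in_rounds) tuples.
    Stragglers still contribute, but with reduced influence; an empty
    buffer leaves the global model unchanged.
    """
    if not buffered_updates:
        return global_params
    weights = np.array([age_aware_weight(age) for _, age in buffered_updates])
    weights /= weights.sum()
    stacked = np.stack([p for p, _ in buffered_updates])
    return (weights[:, None] * stacked).sum(axis=0)


# Example: one fresh update (age 0) and one straggler that is 4 rounds old.
updates = [(np.array([1.0, 1.0]), 0), (np.array([3.0, 3.0]), 4)]
print(periodic_aggregate(np.zeros(2), updates))
```

Unlike the NTP-based sketch given after the SyncFed abstract, this variant measures staleness in aggregation rounds rather than wall-clock seconds, which is the usual choice when no common time reference is available.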