STSA: Federated Class-Incremental Learning via Spatial-Temporal Statistics Aggregation
- URL: http://arxiv.org/abs/2506.01327v1
- Date: Mon, 02 Jun 2025 05:14:57 GMT
- Title: STSA: Federated Class-Incremental Learning via Spatial-Temporal Statistics Aggregation
- Authors: Zenghao Guan, Guojun Zhu, Yucan Zhou, Wu Liu, Weiping Wang, Jiebo Luo, Xiaoyan Gu
- Abstract summary: Federated Class-Incremental Learning (FCIL) enables Class-Incremental Learning from distributed data. We propose a novel approach that aggregates feature statistics both spatially (across clients) and temporally (across stages). We show that our method outperforms state-of-the-art FCIL methods in terms of performance, flexibility, and both communication and computation efficiency.
- Score: 64.48462746540156
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Class-Incremental Learning (FCIL) enables Class-Incremental Learning (CIL) from distributed data. Existing FCIL methods typically integrate old knowledge preservation into local client training. However, these methods cannot avoid spatial-temporal client drift caused by data heterogeneity and often incur significant computational and communication overhead, limiting practical deployment. To address these challenges simultaneously, we propose a novel approach, Spatial-Temporal Statistics Aggregation (STSA), which provides a unified framework to aggregate feature statistics both spatially (across clients) and temporally (across stages). The aggregated feature statistics are unaffected by data heterogeneity and can be used to update the classifier in closed form at each stage. Additionally, we introduce STSA-E, a communication-efficient variant with theoretical guarantees, achieving performance similar to STSA with much lower communication overhead. Extensive experiments on three widely used FCIL datasets, with varying degrees of data heterogeneity, show that our method outperforms state-of-the-art FCIL methods in terms of performance, flexibility, and both communication and computation efficiency.
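The closed-form update from aggregated statistics can be made concrete with a short sketch. The following is a hypothetical illustration of the general recipe the abstract describes (sum per-client feature second moments and label-weighted feature sums, then solve a ridge-regularized least-squares classifier); all names, dimensions, and the ridge term are illustrative assumptions, not the authors' exact STSA algorithm.

```python
# Hypothetical sketch (not the authors' exact STSA algorithm): aggregate
# per-client feature statistics and solve the classifier in closed form.
import numpy as np

FEAT_DIM, NUM_CLASSES, RIDGE = 512, 100, 1e-3  # illustrative sizes

class StatsAggregator:
    """Accumulates feature statistics spatially (clients) and temporally (stages)."""
    def __init__(self):
        self.G = np.zeros((FEAT_DIM, FEAT_DIM))     # sum of phi(x) phi(x)^T
        self.C = np.zeros((FEAT_DIM, NUM_CLASSES))  # sum of phi(x) y_onehot^T

    def add_client_stats(self, G_k, C_k):
        # Sums commute, so the aggregate is identical no matter how the data
        # is split across clients or stages -- no spatial-temporal drift.
        self.G += G_k
        self.C += C_k

    def closed_form_classifier(self):
        # Ridge-regularized least-squares weights, solved in one shot.
        return np.linalg.solve(self.G + RIDGE * np.eye(FEAT_DIM), self.C)

def client_stats(features, labels):
    """Statistics a client would upload instead of gradients or raw data."""
    onehot = np.eye(NUM_CLASSES)[labels]
    return features.T @ features, features.T @ onehot

agg = StatsAggregator()
for stage_classes in [(0, 50), (50, 100)]:  # two incremental stages
    for _ in range(5):                      # five clients per stage
        feats = np.random.randn(32, FEAT_DIM)               # stand-in features
        labels = np.random.randint(*stage_classes, size=32)
        agg.add_client_stats(*client_stats(feats, labels))
    W = agg.closed_form_classifier()  # classifier over all classes seen so far
```

Because the sums commute, any partitioning of the data across clients or stages yields the same aggregate, which is why statistics of this kind sidestep spatial-temporal drift.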
Related papers
- FedCTTA: A Collaborative Approach to Continual Test-Time Adaptation in Federated Learning [0.956984177686999]
Federated Learning (FL) enables collaborative model training across distributed clients without sharing raw data. FL models often suffer performance degradation due to distribution shifts between training and deployment. We propose Federated Continual Test-Time Adaptation (FedCTTA), a privacy-preserving and computationally efficient framework for federated adaptation.
arXiv Detail & Related papers (2025-05-19T18:29:51Z)
- AFCL: Analytic Federated Continual Learning for Spatio-Temporal Invariance of Non-IID Data [45.66391633579935]
Federated Continual Learning (FCL) enables distributed clients to collaboratively train a global model from online task streams. FCL methods face challenges of both spatial data heterogeneity among distributed clients and temporal data heterogeneity across online tasks. We propose a gradient-free method, named Analytic Federated Continual Learning (AFCL), by deriving analytical (i.e., closed-form) solutions from frozen extracted features.
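As a hedged sketch of what a gradient-free, closed-form update over frozen features can look like, the snippet below uses standard recursive least squares via the Woodbury identity; this is generic analytic learning, not the AFCL authors' exact derivation.

```python
# Hedged sketch of a gradient-free closed-form update over frozen features,
# via standard recursive least squares (Woodbury identity). Generic analytic
# learning, not the AFCL authors' exact derivation.
import numpy as np

def rls_update(R, W, Phi, Y):
    """Absorb a new batch of frozen features Phi (n x d) with one-hot labels
    Y (n x c) into the running inverse R = (sum Phi^T Phi + lambda*I)^(-1)
    and weights W, without revisiting any earlier task's or client's data."""
    K = R @ Phi.T @ np.linalg.inv(np.eye(len(Phi)) + Phi @ R @ Phi.T)
    R_new = R - K @ Phi @ R                     # Woodbury update of the inverse
    W_new = W + R_new @ Phi.T @ (Y - Phi @ W)   # standard RLS gain step
    return R_new, W_new

d, c = 128, 10
R, W = np.eye(d) / 1e-3, np.zeros((d, c))       # lambda = 1e-3
for _ in range(3):                              # three sequential tasks
    Phi = np.random.randn(64, d)                # frozen backbone features
    Y = np.eye(c)[np.random.randint(0, c, 64)]  # one-hot labels
    R, W = rls_update(R, W, Phi, Y)
```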
arXiv Detail & Related papers (2025-05-18T05:55:09Z)
- Covariances for Free: Exploiting Mean Distributions for Federated Learning with Pre-Trained Models [19.74434265098346]
Recent works have investigated the use of first- and second-order statistics to aggregate local client data distributions at the server. We propose a training-free method based on an unbiased estimator of class covariance matrices. Our method, which uses only first-order statistics in the form of class means communicated by clients to the server, incurs only a fraction of the communication cost required by methods that communicate second-order statistics.
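The means-only trick can be illustrated with a small sketch. Assuming each client's samples of a class are i.i.d. draws from the same class-conditional distribution, the weighted scatter of the per-client class means is an unbiased estimator of the class covariance; this captures the flavor of the idea, not necessarily the paper's exact estimator.

```python
# Hedged sketch: recovering a class covariance estimate from client class
# means alone, under the i.i.d. class-conditional assumption stated above.
import numpy as np

def covariance_from_means(client_means, client_counts):
    """client_means: (K, d) per-client means of one class; client_counts: (K,)."""
    n = np.asarray(client_counts, dtype=float)
    mu = (n[:, None] * client_means).sum(0) / n.sum()  # global class mean
    dev = client_means - mu
    # E[ sum_k n_k (m_k - mu)(m_k - mu)^T ] = (K - 1) * Sigma
    scatter = (n[:, None, None] * np.einsum('ki,kj->kij', dev, dev)).sum(0)
    return scatter / (len(n) - 1)

# Sanity check: means of i.i.d. Gaussian batches recover the true covariance.
K, d, n_k = 200, 5, 50
true_cov = np.diag(np.arange(1.0, d + 1.0))
means = np.stack([
    np.random.multivariate_normal(np.zeros(d), true_cov, n_k).mean(0)
    for _ in range(K)
])
est = covariance_from_means(means, np.full(K, n_k))  # approx. true_cov
```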
arXiv Detail & Related papers (2024-12-18T20:40:14Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- Decoupled Federated Learning on Long-Tailed and Non-IID data with Feature Statistics [20.781607752797445]
We propose a two-stage Decoupled Federated Learning framework using Feature Statistics (DFL-FS).
In the first stage, the server estimates the client's class coverage distributions through masked local feature statistics clustering.
In the second stage, DFL-FS employs federated feature regeneration based on global feature statistics to enhance the model's adaptability to long-tailed data distributions.
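A hedged sketch of what feature regeneration from global statistics can look like: sample class-balanced synthetic features from per-class Gaussian statistics and use them to recalibrate the classifier. This is illustrative only, not the exact DFL-FS procedure.

```python
# Hedged sketch of feature regeneration from global per-class statistics;
# illustrative only, not the exact DFL-FS procedure.
import numpy as np

def regenerate_features(class_means, class_covs, per_class=256):
    """class_means: list of (d,) vectors; class_covs: list of (d, d) matrices."""
    feats, labels = [], []
    for c, (mu, cov) in enumerate(zip(class_means, class_covs)):
        # Every class gets the same number of synthetic samples, so tail
        # classes are no longer underrepresented when refitting the head.
        feats.append(np.random.multivariate_normal(mu, cov, per_class))
        labels.append(np.full(per_class, c))
    return np.concatenate(feats), np.concatenate(labels)
```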
arXiv Detail & Related papers (2024-03-13T09:24:59Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
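For reference, a minimal client-side AMSGrad step with a client-specific learning rate might look like the sketch below; the actual FedLALR scheduling rule is more involved than this illustration.

```python
# Hedged sketch of a client-side AMSGrad step where each client passes its
# own learning rate; not the exact FedLALR scheduling rule.
import numpy as np

def amsgrad_step(w, g, state, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One local step on weights w with gradient g; lr is client-specific."""
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    v_hat = np.maximum(v_hat, v)  # AMSGrad: non-decreasing second moment
    w = w - lr * m / (np.sqrt(v_hat) + eps)
    return w, (m, v, v_hat)
```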
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- CADIS: Handling Cluster-skewed Non-IID Data in Federated Learning with Clustered Aggregation and Knowledge DIStilled Regularization [3.3711670942444014]
Federated learning enables edge devices to train a global model collaboratively without exposing their data.
We tackle a new type of non-IID data, called cluster-skewed non-IID, observed in real-world datasets.
We propose an aggregation scheme that guarantees equality between clusters.
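A minimal sketch of cluster-equal aggregation follows: average updates within each cluster, then average the cluster means with equal weight, so a populous cluster cannot dominate the global model. This is illustrative and omits CADIS's knowledge-distilled regularization.

```python
# Hedged sketch of cluster-equal aggregation; illustrative, omitting CADIS's
# knowledge-distilled regularization term.
import numpy as np

def cluster_equal_aggregate(client_updates, cluster_ids):
    """client_updates: list of same-shape arrays; cluster_ids: list of labels."""
    clusters = sorted(set(cluster_ids))
    cluster_means = [
        np.mean([u for u, k in zip(client_updates, cluster_ids) if k == c], axis=0)
        for c in clusters
    ]
    return np.mean(cluster_means, axis=0)  # equal weight per cluster
```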
arXiv Detail & Related papers (2023-02-21T02:53:37Z)
- HFedMS: Heterogeneous Federated Learning with Memorable Data Semantics in Industrial Metaverse [49.1501082763252]
This paper presents HFedMS for incorporating practical FL into the emerging Industrial Metaverse.
It reduces data heterogeneity through dynamic grouping and training mode conversion.
Then, it compensates for the forgotten knowledge by fusing compressed historical data semantics.
Experiments have been conducted on the streamed non-i.i.d. FEMNIST dataset using 368 simulated devices.
arXiv Detail & Related papers (2022-11-07T04:33:24Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
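A hedged sketch of the selection idea: early rounds use only the fastest clients, since a statistically coarse model needs fewer samples, and slower clients join as the accuracy target tightens. The linear schedule below is an illustrative assumption, not the paper's criterion.

```python
# Hedged sketch of straggler-resilient client selection; the linear growth
# schedule is an illustrative assumption, not the paper's criterion.
def select_clients(speeds, round_idx, total_rounds):
    """speeds: list of (client_id, steps_per_second) pairs."""
    ranked = sorted(speeds, key=lambda s: -s[1])           # fastest first
    frac = min(1.0, 0.2 + 0.8 * round_idx / total_rounds)  # grow participation
    k = max(1, int(frac * len(ranked)))
    return [cid for cid, _ in ranked[:k]]
```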
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.