Online Federated Learning via Non-Stationary Detection and Adaptation
amidst Concept Drift
- URL: http://arxiv.org/abs/2211.12578v2
- Date: Sat, 6 May 2023 10:31:26 GMT
- Title: Online Federated Learning via Non-Stationary Detection and Adaptation
amidst Concept Drift
- Authors: Bhargav Ganguly and Vaneet Aggarwal
- Abstract summary: Federated Learning (FL) is an emerging domain in the broader context of artificial intelligence research.
Existing literature in FL mostly assumes stationary data generation processes.
We introduce a multiscale algorithmic framework which combines theoretical guarantees of the FedAvg and FedOMD algorithms in near-stationary settings.
- Score: 39.12903814606534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is an emerging domain in the broader context of
artificial intelligence research. Methodologies pertaining to FL assume
distributed model training, consisting of a collection of clients and a server,
with the main goal of achieving an optimal global model under restrictions on data
sharing due to privacy concerns. It is worth highlighting that the diverse
existing literature in FL mostly assumes stationary data generation processes;
such an assumption is unrealistic in real-world conditions, where concept drift
occurs due to, for instance, seasonal or periodic observations or faults in
sensor measurements. In this paper, we introduce a multiscale algorithmic framework
which combines theoretical guarantees of \textit{FedAvg} and \textit{FedOMD}
algorithms in near stationary settings with a non-stationary detection and
adaptation technique to ameliorate FL generalization performance in the
presence of concept drifts. The framework achieves
$\Tilde{\mathcal{O}} ( \min \{ \sqrt{LT} ,
\Delta^{\frac{1}{3}}T^{\frac{2}{3}} + \sqrt{T} \})$ \textit{dynamic regret} for
$T$ rounds with an underlying general convex loss function, where $L$ is the
number of times non-stationary drifts occurred and $\Delta$ is the cumulative
magnitude of drift experienced within $T$ rounds.
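The detect-and-restart idea behind the abstract can be illustrated with a minimal sketch. All names, the window size, and the threshold test below are illustrative assumptions, not the paper's actual procedure: a base FL routine (e.g. FedAvg) runs until a statistic over recent round losses signals non-stationarity, at which point the base algorithm is restarted and re-calibrated.

```python
from statistics import mean

def drift_restart_schedule(round_losses, window=50, threshold=0.5):
    """Illustrative sketch (not the paper's algorithm): restart the base
    FL procedure whenever the mean loss over a sliding window drifts more
    than `threshold` away from the baseline mean recorded just after the
    last restart."""
    restarts = [0]      # round indices at which the base algorithm (re)starts
    baseline = None     # reference mean loss for the current near-stationary phase
    recent = []         # sliding window of recent round losses
    for t, loss in enumerate(round_losses):
        recent.append(loss)
        if len(recent) > window:
            recent.pop(0)
        if baseline is None:
            if len(recent) == window:
                baseline = mean(recent)   # calibrate on the first full window
        elif abs(mean(recent) - baseline) > threshold:
            restarts.append(t)            # drift detected: restart and re-calibrate
            baseline, recent = None, []
    return restarts
```

For example, on a loss stream that sits near 0.1 for 100 rounds and then jumps to 2.0, this schedule records a single restart shortly after round 100; the number of restarts plays the role of $L$ in the regret bound above.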
Related papers
- Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep learning enabled anomaly detection has emerged as a critical direction.
The data collected in edge devices contain user privacy.
arXiv Detail & Related papers (2024-11-06T15:38:31Z) - FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning and Logit Distillation [32.305134875959226]
Federated learning (FL) is a privacy-preserving paradigm that enables distributed clients to collaboratively train models with a central server.
We propose FedHPL, a parameter-efficient unified federated learning framework for heterogeneous settings.
We show that our framework outperforms state-of-the-art FL approaches, with less overhead and training rounds.
arXiv Detail & Related papers (2024-05-27T15:25:32Z) - On the Convergence of Federated Averaging under Partial Participation for Over-parameterized Neural Networks [13.2844023993979]
Federated learning (FL) is a widely employed distributed paradigm for collaboratively training machine learning models across multiple clients without sharing local data.
In this paper, we show that FedAvg converges to a global minimum under partial participation.
arXiv Detail & Related papers (2023-10-09T07:56:56Z) - DFedADMM: Dual Constraints Controlled Model Inconsistency for
Decentralized Federated Learning [52.83811558753284]
Decentralized federated learning (DFL) discards the central server and establishes a decentralized communication network.
Existing DFL methods still suffer from two major challenges: local inconsistency and local overfitting.
arXiv Detail & Related papers (2023-08-16T11:22:36Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - $\texttt{FedBC}$: Calibrating Global and Local Models via Federated
Learning Beyond Consensus [66.62731854746856]
In federated learning (FL), the objective of collaboratively learning a global model through aggregation of model updates across devices tends to oppose the goal of personalization via local information.
In this work, we calibrate this tradeoff in a quantitative manner through a multi-criterion-based optimization.
We demonstrate that FedBC balances the global and local model test accuracy metrics across a suite of datasets.
arXiv Detail & Related papers (2022-06-22T02:42:04Z) - On the Convergence of Heterogeneous Federated Learning with Arbitrary
Adaptive Online Model Pruning [15.300983585090794]
We present a unifying framework for heterogeneous FL algorithms with arbitrary adaptive online model pruning.
In particular, we prove that under certain sufficient conditions, these algorithms converge to a stationary point of standard FL for general smooth cost functions.
We illuminate two key factors impacting convergence: pruning-induced noise and minimum coverage index.
arXiv Detail & Related papers (2022-01-27T20:43:38Z) - FedLGA: Towards System-Heterogeneity of Federated Learning via Local
Gradient Approximation [21.63719641718363]
We formalize the system-heterogeneous FL problem and propose a new algorithm, called FedLGA, which addresses this problem by bridging the divergence of local model updates via epoch approximation.
The results of comprehensive experiments on multiple datasets show that FedLGA outperforms current FL benchmarks under system heterogeneity.
arXiv Detail & Related papers (2021-12-22T16:05:09Z) - Low-Rank Autoregressive Tensor Completion for Spatiotemporal Traffic
Data Imputation [4.9831085918734805]
Missing data imputation has been a long-standing research topic and critical application for real-world intelligent transportation systems.
We propose a low-rank autoregressive tensor completion (LATC) framework by introducing temporal variation as a new regularization term.
We conduct extensive numerical experiments on several real-world traffic data sets, and our results demonstrate the effectiveness of LATC in diverse missing scenarios.
arXiv Detail & Related papers (2021-04-30T12:00:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.