Federated Learning under Distributed Concept Drift
- URL: http://arxiv.org/abs/2206.00799v1
- Date: Wed, 1 Jun 2022 23:55:21 GMT
- Title: Federated Learning under Distributed Concept Drift
- Authors: Ellango Jothimurugesan, Kevin Hsieh, Jianyu Wang, Gauri Joshi, Phillip B. Gibbons
- Abstract summary: Federated Learning (FL) under distributed concept drift is a largely unexplored area.
We first demonstrate that prior solutions to drift adaptation, with their single global model, are ill-suited to staggered drifts.
We propose two new clustering algorithms for reacting to drifts based on local drift detection and hierarchical clustering.
- Score: 30.069809537266575
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) under distributed concept drift is a largely
unexplored area. Although concept drift is itself a well-studied phenomenon, it
poses particular challenges for FL, because drifts arise staggered in time and
space (across clients). Our work is the first to explicitly study data
heterogeneity in both dimensions. We first demonstrate that prior solutions to
drift adaptation, with their single global model, are ill-suited to staggered
drifts, necessitating multi-model solutions. We identify the problem of drift
adaptation as a time-varying clustering problem, and we propose two new
clustering algorithms for reacting to drifts based on local drift detection and
hierarchical clustering. Empirical evaluation shows that our solutions achieve
significantly higher accuracy than existing baselines, and are comparable to an
idealized algorithm with oracle knowledge of the ground-truth clustering of
clients to concepts at each time step.
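To make the time-varying clustering recipe concrete, here is a minimal sketch of the general approach the abstract describes: each client runs a local drift detector, and drifted clients are re-grouped by hierarchical clustering. This is an illustration, not the authors' algorithm; the loss-window detector, the L2 distance on flattened updates, and all thresholds are placeholder assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def detect_drift(loss_history, window=10, threshold=0.15):
    """Flag drift when the recent mean loss exceeds the earlier mean
    by more than `threshold` (a placeholder for a real drift test)."""
    if len(loss_history) < 2 * window:
        return False
    recent = float(np.mean(loss_history[-window:]))
    past = float(np.mean(loss_history[:-window]))
    return recent - past > threshold

def recluster(client_updates, distance_cutoff=1.0):
    """Re-group clients by average-linkage hierarchical clustering on the
    L2 distance between their flattened local model updates.
    `client_updates`: array of shape (n_clients, n_params)."""
    Z = linkage(client_updates, method="average", metric="euclidean")
    return fcluster(Z, t=distance_cutoff, criterion="distance")

# Example: clients 0-1 share one concept, clients 2-3 another.
updates = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
print(recluster(updates))  # e.g., [1, 1, 2, 2]
```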
Related papers
- Drift Localization using Conformal Predictions [6.543424351779503]
Concept drift poses significant challenges for learning systems and is of central interest for monitoring.
In this work, we consider a fundamentally different approach based on conformal predictions.
We discuss and show the shortcomings of common approaches and demonstrate the performance of our approach on state-of-the-art image datasets.
arXiv Detail & Related papers (2026-02-23T12:46:50Z)
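As a toy illustration of the conformal idea above (my own sketch, not the paper's method): score new samples against a calibration window of nonconformity scores, and flag drift when too many conformal p-values come out small.

```python
import numpy as np

def conformal_pvalues(calib_scores, new_scores):
    """p-value of each new nonconformity score against the calibration set."""
    calib = np.sort(np.asarray(calib_scores, dtype=float))
    n = len(calib)
    # count of calibration scores >= each new score (plus-one smoothing)
    ge = n - np.searchsorted(calib, new_scores, side="left")
    return (ge + 1) / (n + 1)

def drift_flag(calib_scores, new_scores, alpha=0.05, min_hits=0.5):
    """Flag drift when the share of significant p-values exceeds `min_hits`
    (both thresholds are placeholder assumptions)."""
    p = conformal_pvalues(calib_scores, new_scores)
    return bool(np.mean(p < alpha) > min_hits)
```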
- Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift [5.566951183982973]
In this work, we focus on real drift, where the conditional distribution $P(Y|X)$ changes.
We propose FedCCFA, a federated learning framework with classifier clustering and feature alignment.
Our results demonstrate that FedCCFA significantly outperforms existing methods under various concept drift settings.
arXiv Detail & Related papers (2024-10-24T07:04:52Z)
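A hedged sketch of the classifier-clustering half of that recipe (the greedy scheme, names, and cutoff are my assumptions, not FedCCFA's): group clients whose classifier heads point in similar directions, so each group can aggregate separately after real drift.

```python
import numpy as np

def greedy_cluster_heads(heads, cutoff=0.3):
    """Assign each flattened classifier head to the first cluster whose
    centroid is within `cutoff` cosine distance; else open a new cluster."""
    centroids, labels = [], []
    for h in np.asarray(heads, dtype=float):
        h = h / np.linalg.norm(h)
        dists = [1.0 - float(h @ c) for c in centroids]
        if dists and min(dists) < cutoff:
            labels.append(int(np.argmin(dists)))
        else:
            centroids.append(h)
            labels.append(len(centroids) - 1)
    return labels

print(greedy_cluster_heads([[1, 0], [0.9, 0.1], [0, 1]]))  # [0, 0, 1]
```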
- A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs [57.35402286842029]
We propose a novel Aligned Federated Primal Dual (A-FedPD) method, which constructs virtual dual updates to align the global consensus with local clients.
We provide a comprehensive analysis of the A-FedPD method's efficiency for local clients that do not participate for protracted periods.
arXiv Detail & Related papers (2024-09-27T17:00:32Z)
- Unveiling Group-Specific Distributed Concept Drift: A Fairness Imperative in Federated Learning [4.3310896118860445]
Group-specific concept drift refers to situations where one group experiences concept drift over time while another does not.
Within the framework of federated learning, each client can experience group-specific concept drift independently while still sharing the same underlying concept.
We adapt an existing distributed concept drift adaptation algorithm to tackle group-specific distributed concept drift.
arXiv Detail & Related papers (2024-02-12T11:35:25Z)
- A comprehensive analysis of concept drift locality in data streams [3.5897534810405403]
Concept drift must be detected for effective model adaptation to evolving data properties.
We present a novel categorization of concept drift based on its locality and scale.
We conduct a comparative assessment of 9 state-of-the-art drift detectors across diverse difficulties.
arXiv Detail & Related papers (2023-11-10T20:57:43Z)
- Every Parameter Matters: Ensuring the Convergence of Federated Learning with Dynamic Heterogeneous Models Reduction [22.567754688492414]
Cross-device Federated Learning (FL) faces significant challenges where low-end clients that could potentially make unique contributions are excluded from training large models due to their resource bottlenecks.
Recent research efforts have focused on model-heterogeneous FL, by extracting reduced-size models from the global model and applying them to local clients accordingly.
This paper presents a unifying framework for heterogeneous FL algorithms with online model extraction and provides a general convergence analysis for the first time.
arXiv Detail & Related papers (2023-10-12T19:07:58Z)
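To make "extracting reduced-size models" concrete, here is a hedged sketch in the spirit of width-scaled submodel slicing (as in HeteroFL-style schemes); it is not this paper's exact extraction method, and real systems keep input/output dimensions intact rather than slicing every layer.

```python
import numpy as np

def extract_submodel(global_weights, keep_ratio):
    """Slice the leading rows/cols of each layer by `keep_ratio`
    (simplified: a real scheme would spare the first and last layers)."""
    sub = {}
    for name, W in global_weights.items():
        if W.ndim == 2:
            r = max(1, int(W.shape[0] * keep_ratio))
            c = max(1, int(W.shape[1] * keep_ratio))
            sub[name] = W[:r, :c].copy()
        else:  # biases and other 1-D parameters
            r = max(1, int(W.shape[0] * keep_ratio))
            sub[name] = W[:r].copy()
    return sub

W = {"dense1": np.ones((8, 8)), "bias1": np.ones(8)}
print({k: v.shape for k, v in extract_submodel(W, 0.5).items()})
# {'dense1': (4, 4), 'bias1': (4,)}
```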
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
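The momentum-based variance-reduced estimator in this family (STORM-style) is a one-line recursion; the sketch below shows only that estimator, as my simplification rather than FAFED itself, which combines it with adaptive step sizes in cross-silo FL.

```python
def storm_estimator(grad_now, grad_prev, d_prev, a=0.1):
    """STORM-style recursion: d_t = g_t(x_t) + (1 - a) * (d_{t-1} - g_t(x_{t-1})),
    where both gradients are evaluated on the *same* fresh minibatch."""
    return grad_now + (1.0 - a) * (d_prev - grad_prev)
```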
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Decentralized Personalized Federated Learning for Min-Max Problems [79.61785798152529]
This paper is the first to study personalized federated learning (PFL) for saddle-point problems, which encompass a broader range of optimization problems.
We propose new algorithms to address this problem and provide a theoretical analysis of the smooth (strongly) convex-(strongly) concave saddle point problems.
Numerical experiments for bilinear problems and neural networks with adversarial noise demonstrate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2021-06-14T10:36:25Z)
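For concreteness, one common way to write a personalized saddle-point objective of this kind is a mixing-penalty formulation (a generic form, not necessarily this paper's exact one):

$$\min_{x_1,\dots,x_M} \; \max_{y_1,\dots,y_M} \; \frac{1}{M}\sum_{m=1}^{M} f_m(x_m, y_m) \;+\; \frac{\lambda}{2M}\sum_{m=1}^{M}\|x_m-\bar{x}\|^2 \;-\; \frac{\lambda}{2M}\sum_{m=1}^{M}\|y_m-\bar{y}\|^2,$$

where $\bar{x}$ and $\bar{y}$ are the client averages and $\lambda$ trades off personalization against consensus.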
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- DriftSurf: A Risk-competitive Learning Algorithm under Concept Drift [12.579800289829963]
When learning from streaming data, a change in the data distribution, also known as concept drift, can render a previously-learned model inaccurate.
We present an adaptive learning algorithm that extends previous drift-detection-based methods by incorporating drift detection into a broader stable-state/reactive-state process.
The algorithm is generic in its base learner and can be applied across a variety of supervised learning problems.
arXiv Detail & Related papers (2020-03-13T23:25:25Z)
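A hedged sketch of the stable-state/reactive-state loop described above (heavily simplified: the risk test, margin, and reactive window are placeholders, and DriftSurf's actual risk-competitive analysis is more careful):

```python
def process_stream(batches, make_model, train_step, risk,
                   margin=0.1, reactive_len=4):
    """Stable/reactive drift adaptation: `risk(model, batch)` returns a
    scalar loss; `train_step(model, batch)` updates the model in place."""
    state = "stable"
    predictive = make_model()              # model used for predictions
    best_risk = float("inf")
    reactive_model, reactive_left = None, 0
    for batch in batches:
        r = risk(predictive, batch)
        if state == "stable":
            best_risk = min(best_risk, r)
            if r > best_risk + margin:     # suspected drift: open reactive state
                state = "reactive"
                reactive_model, reactive_left = make_model(), reactive_len
        else:
            reactive_left -= 1
            if risk(reactive_model, batch) < r:   # new model wins: real drift
                predictive, best_risk = reactive_model, float("inf")
                state = "stable"
            elif reactive_left == 0:              # false alarm: keep old model
                state = "stable"
        train_step(predictive, batch)
        if state == "reactive":
            train_step(reactive_model, batch)
    return predictive
```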