FLAME: Adaptive and Reactive Concept Drift Mitigation for Federated Learning Deployments
- URL: http://arxiv.org/abs/2410.01386v2
- Date: Mon, 7 Oct 2024 14:14:39 GMT
- Title: FLAME: Adaptive and Reactive Concept Drift Mitigation for Federated Learning Deployments
- Authors: Ioannis Mavromatis, Stefano De Feo, Aftab Khan,
- Abstract summary: This paper presents Federated Learning with Adaptive Monitoring and Elimination (FLAME).
FLAME is a novel solution capable of detecting and mitigating concept drift in Federated Learning (FL) Internet of Things (IoT) environments.
- Score: 2.553456266022126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents Federated Learning with Adaptive Monitoring and Elimination (FLAME), a novel solution capable of detecting and mitigating concept drift in Federated Learning (FL) Internet of Things (IoT) environments. Concept drift poses significant challenges for FL models deployed in dynamic and real-world settings. FLAME leverages an FL architecture, considers a real-world FL pipeline, and proves capable of maintaining model performance and accuracy while addressing bandwidth and privacy constraints. Introducing various features and extensions of previous work, FLAME offers a robust solution to concept drift, significantly reducing computational load and communication overhead. Compared to well-known lightweight mitigation methods, FLAME demonstrates superior performance in maintaining high F1 scores and reducing resource utilisation in large-scale IoT deployments, making it a promising approach for real-world applications.
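The abstract outlines FLAME's goal (reacting to concept drift while preserving F1 scores and limiting overhead) but not its internal mechanism, so the following is only a minimal illustrative sketch of the monitor-and-react pattern such a pipeline implies: each client watches a sliding window of its F1 score and raises a drift flag when recent performance falls clearly below its earlier baseline. All names, thresholds, and the trigger logic are hypothetical and not taken from the paper.

```python
from collections import deque

# Illustrative sketch only: the abstract does not describe FLAME's internal
# detection or elimination logic, so every name and threshold here is hypothetical.
class DriftMonitor:
    """Tracks a sliding window of a client-side metric (e.g. an F1 score)
    and flags suspected concept drift when recent performance drops
    noticeably below the earlier part of the window."""

    def __init__(self, window: int = 20, threshold: float = 0.1):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, score: float) -> bool:
        """Record the latest score; return True if drift is suspected."""
        self.history.append(score)
        if len(self.history) < self.history.maxlen:
            return False  # not enough evidence yet
        half = self.history.maxlen // 2
        values = list(self.history)
        baseline = sum(values[:half]) / half
        recent = sum(values[half:]) / (len(values) - half)
        return (baseline - recent) > self.threshold


# Hypothetical per-round use on a client: when drift is flagged, the client
# could request retraining or be excluded from the next aggregation round.
monitor = DriftMonitor(window=10, threshold=0.05)
simulated_f1 = [0.90] * 8 + [0.88, 0.89] + [0.70] * 10  # synthetic trace with a drop
for round_idx, f1 in enumerate(simulated_f1):
    if monitor.update(f1):
        print(f"round {round_idx}: drift suspected, trigger mitigation")
```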
Related papers
- Online Client Scheduling and Resource Allocation for Efficient Federated Edge Learning [9.451084740123198]
Federated learning (FL) enables edge devices to collaboratively train a machine learning model without sharing their raw data.
However, deploying FL over mobile edge networks with constrained resources, such as power and bandwidth, suffers from high training latency and low model accuracy.
This paper investigates the optimal client scheduling and resource allocation for FL over mobile edge networks under resource constraints and uncertainty.
arXiv Detail & Related papers (2024-09-29T01:56:45Z) - Bridging the Gap Between Foundation Models and Heterogeneous Federated Learning [9.198799314774437]
Federated learning (FL) offers privacy-preserving decentralized machine learning, optimizing models at edge clients without sharing private data.
Foundation models (FMs) have gained traction in the artificial intelligence (AI) community due to their exceptional performance across various tasks.
We present an adaptive framework for Resource-aware Federated Foundation Models (RaFFM) to address these challenges.
arXiv Detail & Related papers (2023-09-30T04:31:53Z) - Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing the Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z) - Synergies Between Federated Learning and O-RAN: Towards an Elastic Architecture for Multiple Distributed Machine Learning Services [7.057114677579558]
Federated learning (FL) over 5G-and-beyond wireless networks is a popular distributed machine learning (ML) technique.
The implementation of FL over 5G-and-beyond wireless networks faces key challenges caused by (i) the dynamics of wireless network conditions and (ii) the coexistence of multiple FL services in the system.
We first take a closer look into these challenges and unveil nuanced phenomena called over-/under-provisioning of resources and perspective-driven load balancing.
We then take the first steps towards addressing these phenomena by proposing a novel distributed ML architecture called elastic FL (EFL).
arXiv Detail & Related papers (2023-04-14T19:21:42Z) - FS-Real: Towards Real-World Cross-Device Federated Learning [60.91678132132229]
Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data.
There is still a considerable gap between the flourishing FL research and real-world scenarios, mainly caused by the characteristics and scale of heterogeneous devices.
We propose an efficient and scalable prototyping system for real-world cross-device FL, FS-Real.
arXiv Detail & Related papers (2023-03-23T15:37:17Z) - Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z) - Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z) - Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices (a minimal quantise-then-average sketch appears after this list).
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z) - Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution (a generic bisection sketch appears after this list).
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
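For the variable-bitwidth FL entry above, the abstract describes clients uploading quantized local parameters that a server aggregates into a quantized global model. The sketch below shows only that generic quantise-then-average pattern with plain uniform quantisation; the bitwidth choice, function names, and NumPy-based representation are assumptions, and the paper's RL-based bitwidth optimisation is not reproduced.

```python
import numpy as np

# Generic sketch of the quantise -> upload -> average -> synchronise loop that
# the bitwidth-FL abstract describes. Uniform quantisation and the names below
# are assumptions, not the paper's scheme.
def quantize(params: np.ndarray, bits: int):
    """Client side: map float parameters to integer levels plus (offset, scale)."""
    lo, hi = float(params.min()), float(params.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((params - lo) / scale).astype(np.int32)
    return q, lo, scale

def dequantize(q: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Server side: reconstruct approximate float parameters from an upload."""
    return q.astype(np.float64) * scale + lo

# Toy round: three clients upload 4-bit versions of their local parameters,
# and the server averages the reconstructions into a new global model.
rng = np.random.default_rng(0)
local_models = [rng.normal(size=8) for _ in range(3)]
uploads = [quantize(p, bits=4) for p in local_models]        # clients quantise
recovered = [dequantize(q, lo, s) for q, lo, s in uploads]   # server reconstructs
global_model = np.mean(recovered, axis=0)                    # FedAvg-style mean
print(global_model)  # broadcast back to clients to synchronise
```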
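Similarly, the delay-minimisation entry mentions a bisection search for the optimal solution. The routine below is only the generic bisection primitive such methods rely on, assuming a monotone feasibility check `feasible(x)` supplied by the caller; the paper's actual delay model, decision variables, and constraints are not reproduced here.

```python
def bisection_min(feasible, lo: float, hi: float, tol: float = 1e-6) -> float:
    """Return (approximately) the smallest x in [lo, hi] with feasible(x) True,
    assuming feasibility is monotone: False below some threshold, True above.
    Only the generic primitive is shown; the paper's delay model is not."""
    assert feasible(hi), "upper bound must be feasible"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid   # mid satisfies the constraint: tighten the upper bound
        else:
            lo = mid   # mid is infeasible: raise the lower bound
    return hi

# Toy usage with a made-up constraint: smallest time budget t that meets it.
t_star = bisection_min(lambda t: t >= 0.42, lo=0.0, hi=1.0)
print(round(t_star, 4))  # ~0.42
```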
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.