FLARE: Detection and Mitigation of Concept Drift for Federated Learning
based IoT Deployments
- URL: http://arxiv.org/abs/2305.08504v1
- Date: Mon, 15 May 2023 10:09:07 GMT
- Title: FLARE: Detection and Mitigation of Concept Drift for Federated Learning
based IoT Deployments
- Authors: Theo Chow and Usman Raza and Ioannis Mavromatis and Aftab Khan
- Abstract summary: FLARE is a lightweight dual-scheduler FL framework that conditionally transfers training data and deploys models between edge and sensor endpoints.
We show that FLARE can significantly reduce the amount of data exchanged between edge and sensor nodes compared to fixed-interval scheduling methods.
It can successfully detect concept drift reactively with at least a 16x reduction in latency.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent, large-scale IoT ecosystems have become possible due to recent
advancements in sensing technologies, distributed learning, and low-power
inference in embedded devices. In traditional cloud-centric approaches, raw
data is transmitted to a central server for training and inference purposes. On
the other hand, Federated Learning migrates both tasks closer to the edge nodes
and endpoints. This allows for a significant reduction in data exchange while
preserving the privacy of users. Trained models, though, may under-perform in
dynamic environments due to changes in the data distribution, affecting the
model's ability to infer accurately; this is referred to as concept drift. Such
drift may also be adversarial in nature. Therefore, it is of paramount
importance to detect such behaviours promptly. In order to simultaneously
reduce communication traffic and maintain the integrity of inference models, we
introduce FLARE, a novel lightweight dual-scheduler FL framework that
conditionally transfers training data, and deploys models between edge and
sensor endpoints based on observing the model's training behaviour and
inference statistics, respectively. We show that FLARE can significantly reduce
the amount of data exchanged between edge and sensor nodes compared to
fixed-interval scheduling methods (over 5x reduction), is easily scalable to
larger systems, and can successfully detect concept drift reactively with at
least a 16x reduction in latency.
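The conditional, statistics-driven scheduling described in the abstract can be illustrated with a minimal sketch. The abstract does not specify FLARE's actual detector, so the class, window size, and threshold below are hypothetical: a sensor endpoint tracks a sliding window of inference confidences and flags drift when the window mean drops sufficiently below the post-deployment baseline, triggering a data transfer or model redeployment instead of acting on a fixed interval.

```python
from collections import deque

class DriftScheduler:
    """Hypothetical sketch of a FLARE-style inference-side scheduler:
    transfer training data / redeploy a model only when inference
    statistics drift, rather than on a fixed schedule."""

    def __init__(self, window=50, threshold=0.15):
        self.baseline = None              # mean confidence right after deployment
        self.recent = deque(maxlen=window)
        self.threshold = threshold        # tolerated drop in mean confidence

    def observe(self, confidence):
        """Record one prediction confidence; return True if drift is suspected."""
        self.recent.append(confidence)
        if len(self.recent) < self.recent.maxlen:
            return False                  # not enough samples yet
        mean = sum(self.recent) / len(self.recent)
        if self.baseline is None:
            self.baseline = mean          # calibrate on the first full window
            return False
        return (self.baseline - mean) > self.threshold
```

In such a scheme, a `True` return would prompt the endpoint to upload its buffered training data to the edge node and await a refreshed model, which is how a conditional scheduler can cut data exchange relative to fixed-interval methods.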
Related papers
- Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep-learning-enabled anomaly detection has emerged as a critical research direction.
The data collected on edge devices contains user-private information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z)
- Smart Information Exchange for Unsupervised Federated Learning via Reinforcement Learning
We propose an approach to create an optimal graph for data transfer using Reinforcement Learning.
The goal is to form links that will provide the most benefit considering the environment's constraints.
Numerical analysis shows the advantages of the proposed method in terms of convergence speed and straggler resilience.
arXiv Detail & Related papers (2024-02-15T00:14:41Z)
- Accelerating Scalable Graph Neural Network Inference with Node-Adaptive Propagation
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications.
The sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs.
We propose an online propagation framework and two novel node-adaptive propagation methods.
arXiv Detail & Related papers (2023-10-17T05:03:00Z)
- Over-the-Air Federated Learning and Optimization
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve communication efficiency and extend the convergence analysis to the different forms of model aggregation error caused by these schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- EdgeFD: An Edge-Friendly Drift-Aware Fault Diagnosis System for Industrial IoT
We propose Drift-Aware Weight Consolidation (DAWC) to mitigate the challenges posed by frequent data drift in the industrial Internet of Things (IIoT).
DAWC efficiently manages multiple data drift scenarios, minimizing the need for constant model fine-tuning on edge devices.
We have also developed a comprehensive diagnosis and visualization platform.
arXiv Detail & Related papers (2023-10-07T06:48:07Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Efficient Graph Neural Network Inference at Large Scale
Graph neural networks (GNNs) have demonstrated excellent performance in a wide range of applications.
Existing scalable GNNs leverage linear propagation to preprocess the features and accelerate the training and inference procedure.
We propose a novel adaptive propagation order approach that generates the personalized propagation order for each node based on its topological information.
arXiv Detail & Related papers (2022-11-01T14:38:18Z)
- Spatio-Temporal Federated Learning for Massive Wireless Edge Networks
An edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amounts of data collected by the mobile devices to the edge server.
The proposed FL approach exploits spatial and temporal correlations between learning updates from the different mobile devices scheduled to join STFL across training rounds.
An analytical framework of STFL is proposed and employed to study the learning capability of STFL via its convergence performance.
arXiv Detail & Related papers (2021-10-27T16:46:45Z)
- Asynchronous Federated Learning for Sensor Data with Concept Drift
Federated learning (FL) involves multiple distributed devices jointly training a shared model.
Most previous FL approaches assume that the data on devices are fixed and stationary during training.
Concept drift complicates the learning process because of the inconsistency between existing and incoming data.
We propose a novel approach, FedConD, to detect and deal with the concept drift on local devices.
arXiv Detail & Related papers (2021-09-01T02:06:42Z)
- Over-the-Air Federated Learning from Heterogeneous Data
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm, which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.