Asynchronous Federated Learning for Sensor Data with Concept Drift
- URL: http://arxiv.org/abs/2109.00151v1
- Date: Wed, 1 Sep 2021 02:06:42 GMT
- Title: Asynchronous Federated Learning for Sensor Data with Concept Drift
- Authors: Yujing Chen, Zheng Chai, Yue Cheng, Huzefa Rangwala
- Abstract summary: Federated learning (FL) involves multiple distributed devices jointly training a shared model.
Most previous FL approaches assume that data on devices are fixed and stationary during the training process.
Concept drift makes the learning process complicated because of the inconsistency between existing and upcoming data.
We propose a novel approach, FedConD, to detect and deal with the concept drift on local devices.
- Score: 17.390098048134195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) involves multiple distributed devices jointly
training a shared model without any of the participants having to reveal their
local data to a centralized server. Most previous FL approaches assume that
data on devices are fixed and stationary during the training process. However,
this assumption is unrealistic because these devices usually have varying
sampling rates and different system configurations. In addition, the underlying
distribution of the device data can change dynamically over time, which is
known as concept drift. Concept drift makes the learning process complicated
because of the inconsistency between existing and upcoming data. Traditional
concept-drift handling techniques, such as chunk-based and ensemble
learning-based methods, are not suitable in federated learning frameworks
due to the heterogeneity of local devices. We propose a novel approach,
FedConD, to detect and deal with the concept drift on local devices and
minimize the effect on the performance of models in asynchronous FL. The drift
detection strategy is based on an adaptive mechanism which uses the historical
performance of the local models. The drift adaptation is realized by adjusting
the regularization parameter of objective function on each local device.
Additionally, we design a communication strategy on the server side to select
local updates in a prudent fashion and speed up model convergence. Experimental
evaluations on three evolving data streams and two image datasets show that
FedConD detects and handles concept drift, and also reduces the overall
communication cost compared to other baseline methods.
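
A minimal sketch of the client-side mechanism the abstract describes: drift is detected from the history of local losses, and adaptation tightens the regularization parameter of the local objective. The proximal (FedProx-style) form of the regularizer, the mean-plus-k-sigma threshold, and all names (DriftAwareClient, select_updates, lam) are illustrative assumptions, not the published FedConD algorithm.

```python
import numpy as np

class DriftAwareClient:
    """Hypothetical client in the spirit of FedConD; the proximal
    regularizer and threshold rule are assumptions, not the paper's
    exact formulation."""

    def __init__(self, lam=0.01, window=10, sensitivity=2.0):
        self.lam = lam                  # regularization parameter to adapt
        self.window = window            # length of loss history considered
        self.sensitivity = sensitivity  # threshold in standard deviations
        self.loss_history = []

    def drift_detected(self, current_loss):
        # Adaptive threshold built from the client's historical performance.
        if len(self.loss_history) < self.window:
            return False
        recent = np.array(self.loss_history[-self.window:])
        return current_loss > recent.mean() + self.sensitivity * recent.std()

    def objective(self, task_loss, w_local, w_global):
        # FedProx-style proximal term; adjusting self.lam is the
        # "regularization parameter" adaptation the abstract mentions.
        return task_loss + 0.5 * self.lam * np.sum((w_local - w_global) ** 2)

    def local_round(self, task_loss, w_local, w_global):
        if self.drift_detected(task_loss):
            self.lam *= 2.0  # penalize drifting away from the global model
        self.loss_history.append(task_loss)
        return self.objective(task_loss, w_local, w_global)

def select_updates(candidates, budget):
    """Toy server-side selection: keep the `budget` least-stale updates.
    A placeholder for the paper's (unspecified here) selection criterion."""
    return sorted(candidates, key=lambda u: u["staleness"])[:budget]
```

In a full asynchronous round, each device would run local_round before uploading its update, and the server would apply select_updates to its queue of pending uploads before aggregating; the staleness-based criterion above is likewise only a stand-in for the paper's communication strategy.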
Related papers
- Federated Learning with MMD-based Early Stopping for Adaptive GNSS Interference Classification [4.674584508653125]
Federated learning (FL) enables multiple devices to collaboratively train a global model while maintaining data on local servers.
We propose an FL approach using few-shot learning and aggregation of the model weights on a global server.
An exemplary application of FL is orchestrating machine learning models along highways for interference classification based on snapshots from global navigation satellite system (GNSS) receivers.
arXiv Detail & Related papers (2024-10-21T06:43:04Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting these updates in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - FLARE: Detection and Mitigation of Concept Drift for Federated Learning
based IoT Deployments [2.7776688429637466]
FLARE is a lightweight dual-scheduler FL framework that conditionally transfers training data and deploys models between edge and sensor endpoints.
We show that FLARE can significantly reduce the amount of data exchanged between edge and sensor nodes compared to fixed-interval scheduling methods.
It can successfully detect concept drift reactively with at least a 16x reduction in latency.
arXiv Detail & Related papers (2023-05-15T10:09:07Z) - Scheduling and Aggregation Design for Asynchronous Federated Learning
over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting (a toy sketch of such a weighting appears after this list).
arXiv Detail & Related papers (2022-12-14T17:33:01Z) - Unsupervised Unlearning of Concept Drift with Autoencoders [5.41354952642957]
Concept drift refers to a change in the data distribution affecting the data stream of future samples.
This paper proposes an unsupervised and model-agnostic concept drift adaptation method at the global level.
arXiv Detail & Related papers (2022-11-23T14:52:49Z) - Event-Triggered Decentralized Federated Learning over
Resource-Constrained Edge Devices [12.513477328344255]
Federated learning (FL) is a technique for distributed machine learning (ML).
In traditional FL algorithms, trained models at the edge are periodically sent to a central server for aggregation.
We develop a novel methodology for fully decentralized FL, where devices conduct model aggregation via cooperative consensus formation.
arXiv Detail & Related papers (2022-11-23T00:04:05Z) - Federated Learning and Meta Learning: Approaches, Applications, and
Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - Parallel Successive Learning for Dynamic Distributed Model Training over
Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.