An On-Device Federated Learning Approach for Cooperative Model Update between Edge Devices
- URL: http://arxiv.org/abs/2002.12301v5
- Date: Sun, 27 Jun 2021 14:59:30 GMT
- Title: An On-Device Federated Learning Approach for Cooperative Model Update between Edge Devices
- Authors: Rei Ito, Mineto Tsukada, Hiroki Matsutani
- Abstract summary: A neural-network based on-device learning approach has recently been proposed, in which edge devices train on incoming data at runtime to update their model.
In this paper, we focus on OS-ELM to sequentially train a model on recent samples, and combine it with an autoencoder for anomaly detection.
We extend it to on-device federated learning so that edge devices can exchange their trained results and update their models using those collected from the other edge devices.
- Score: 2.99321624683618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most edge AI focuses on prediction tasks on resource-limited edge devices,
while training is done on server machines. However, retraining or customizing a
model is required at edge devices as the model becomes outdated due to
environmental changes over time. To follow such concept drift, a neural-network
based on-device learning approach has recently been proposed, in which edge
devices train on incoming data at runtime to update their model. In this case,
since training is done at distributed edge devices, the issue is that only a
limited amount of training data is available to each edge device. One approach
to this issue is cooperative or federated learning, where edge devices exchange
their trained results and update their models using those collected from the
other devices. In this paper, as an on-device learning algorithm, we focus on
OS-ELM (Online Sequential Extreme Learning Machine) to sequentially train a
model on recent samples, and combine it with an autoencoder for anomaly
detection. We extend it to on-device federated learning so that edge devices
can exchange their trained results and update their models using those
collected from the other edge devices. This cooperative model update is
one-shot, but it can be applied repeatedly to keep the models synchronized. Our
approach is evaluated on anomaly detection tasks generated from a car driving
dataset, a human activity dataset, and the MNIST dataset. The results
demonstrate that the proposed on-device federated learning can produce a merged
model, by integrating trained results from multiple edge devices, as accurately
as traditional backpropagation-based neural networks and a traditional
federated learning approach, with lower computation or communication cost.
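The one-shot cooperative update described above can be illustrated with a minimal sketch. Everything below (the function name, and the use of a simple sample-count-weighted average of the OS-ELM output weights `beta`) is an illustrative assumption, not the paper's exact merge rule:

```python
# Hypothetical sketch: one-shot merge of OS-ELM output weights collected
# from several edge devices, weighted by how many samples each device
# trained on. This is an illustrative simplification of the cooperative
# model update, not the paper's exact algorithm.

def merge_models(betas, sample_counts):
    """betas: per-device output-weight vectors (lists of floats).
    sample_counts: number of training samples seen by each device."""
    total = sum(sample_counts)
    dim = len(betas[0])
    merged = [0.0] * dim
    for beta, n in zip(betas, sample_counts):
        weight = n / total
        for j in range(dim):
            merged[j] += weight * beta[j]
    return merged

# Each device then replaces its local beta with the merged one, so the
# anomaly-detecting autoencoders are synchronized in a single exchange.
device_betas = [[0.2, 0.4], [0.6, 0.8]]
counts = [100, 300]
merged = merge_models(device_betas, counts)  # approximately [0.5, 0.7]
```

Because the merge consumes only the devices' trained results (not raw data or gradients over many rounds), a single exchange suffices, which is what makes the update one-shot.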
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that existing machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- Efficient Asynchronous Federated Learning with Sparsification and Quantization [55.6801207905772]
Federated Learning (FL) is attracting more and more attention as a way to collaboratively train a machine learning model without transferring raw data.
FL generally exploits a parameter server and a large number of edge devices during the whole process of model training.
We propose TEASQ-Fed, which exploits edge devices to asynchronously participate in the training process by actively applying for tasks.
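The two compression techniques named in the title, sparsification and quantization, can be sketched as follows. The top-k selection and the uniform quantizer below are generic illustrations of those ideas, not TEASQ-Fed's exact scheme, and the parameters (`k`, `levels`) are assumptions:

```python
# Generic sketch of update compression for federated learning:
# top-k sparsification followed by uniform quantization.

def sparsify_top_k(update, k):
    """Keep only the k largest-magnitude entries of a model update;
    zero out the rest so the update can be sent sparsely."""
    top = sorted(range(len(update)), key=lambda i: abs(update[i]), reverse=True)[:k]
    kept = set(top)
    return [v if i in kept else 0.0 for i, v in enumerate(update)]

def quantize(update, levels=16):
    """Uniformly quantize values onto `levels` points spanning their range,
    so each entry can be encoded with log2(levels) bits plus the range."""
    lo, hi = min(update), max(update)
    if hi == lo:
        return list(update)
    step = (hi - lo) / (levels - 1)
    return [lo + round((v - lo) / step) * step for v in update]

update = [0.9, -0.05, 0.02, -1.2, 0.4]
compressed = quantize(sparsify_top_k(update, k=3))
```

In an asynchronous setting, compressing each device's update this way cuts the uplink traffic that otherwise grows with the number of participating devices.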
arXiv Detail & Related papers (2023-12-23T07:47:07Z)
- Unsupervised anomalies detection in IIoT edge devices networks using federated learning [0.0]
Federated learning (FL), as a distributed machine learning approach, performs training of a machine learning model on the device that gathered the data itself.
In this paper, we leverage the benefits of FL and implement the FedAvg algorithm on a recent dataset that represents modern IoT/IIoT device networks.
We also evaluate some shortcomings of FedAvg, such as the unfairness that occurs during training when struggling devices do not participate in every stage of training.
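A minimal sketch of one FedAvg round, including the participation issue mentioned above: a device that drops out of a round contributes nothing to that round's average. The toy "local training" step and all names here are illustrative assumptions, not the paper's setup:

```python
# Minimal FedAvg round sketch. The server averages participating clients'
# locally trained parameters, weighted by local dataset size.

def local_update(global_model, client):
    # Toy stand-in for local training: drift toward the client's data mean.
    return [p + 0.1 * (client["mean"] - p) for p in global_model]

def fedavg_round(global_model, clients, participating):
    total = sum(clients[c]["size"] for c in participating)
    new_model = [0.0] * len(global_model)
    for c in participating:
        local = local_update(global_model, clients[c])
        weight = clients[c]["size"] / total
        for j, p in enumerate(local):
            new_model[j] += weight * p
    return new_model

clients = {
    "a": {"size": 100, "mean": 1.0},
    "b": {"size": 300, "mean": -1.0},  # a "struggling" device that may drop out
}
# If "b" skips the round, the global model is pulled only toward "a"'s data,
# which is the unfairness discussed above.
model = fedavg_round([0.0, 0.0], clients, participating=["a"])
```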
arXiv Detail & Related papers (2023-08-23T14:53:38Z)
- Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design [18.675244280002428]
We propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques.
In SCFL, each edge device uploads a privacy-preserving coded dataset to the server, which is generated by adding noise to the projected local dataset.
We show that SCFL learns a better model within the given time and achieves a better privacy-performance tradeoff than the baseline methods.
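The coded-dataset generation step described above can be sketched as: each device projects its local data with a random matrix and adds noise before uploading. The dimensions, the Gaussian choices, and the noise scale here are illustrative assumptions, not SCFL's exact construction:

```python
import random

def coded_dataset(local_data, proj_dim, noise_std, seed=0):
    """Project each local sample with a random matrix, then add noise,
    producing a privacy-preserving coded dataset to upload to the server."""
    rng = random.Random(seed)
    in_dim = len(local_data[0])
    # Random projection matrix of shape (proj_dim x in_dim).
    proj = [[rng.gauss(0, 1) for _ in range(in_dim)] for _ in range(proj_dim)]
    coded = []
    for x in local_data:
        projected = [sum(g * xi for g, xi in zip(row, x)) for row in proj]
        coded.append([p + rng.gauss(0, noise_std) for p in projected])
    return coded

# Two 4-dimensional samples compressed into 2-dimensional coded samples.
coded = coded_dataset(
    [[1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5]],
    proj_dim=2, noise_std=0.1,
)
```

Raising `noise_std` strengthens privacy but degrades what the server can learn from the coded data, which is the privacy-performance tradeoff the summary refers to.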
arXiv Detail & Related papers (2022-11-08T09:58:36Z)
- Knowledge Transfer For On-Device Speech Emotion Recognition with Neural Structured Learning [19.220263739291685]
Speech emotion recognition (SER) has been a popular research topic in human-computer interaction (HCI).
We propose a neural structured learning (NSL) framework through building synthesized graphs.
Our experiments demonstrate that training a lightweight SER model on the target dataset with speech samples and graphs can not only produce small SER models, but also enhance the model performance.
arXiv Detail & Related papers (2022-10-26T18:38:42Z)
- Federated Split GANs [12.007429155505767]
We propose an alternative approach that trains ML models on users' devices themselves.
We focus on GANs (generative adversarial networks) and leverage their inherent privacy-preserving attribute.
Our system preserves data privacy, keeps training time short, and yields the same accuracy as model training on unconstrained devices.
arXiv Detail & Related papers (2022-07-04T23:53:47Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and conduct an analytical study of its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Federated Learning with Downlink Device Selection [92.14944020945846]
We study federated edge learning, where a global model is trained collaboratively using privacy-sensitive data at the edge of a wireless network.
A parameter server (PS) keeps track of the global model and shares it with the wireless edge devices for training using their private local data.
We consider device selection based on downlink channels over which the PS shares the global model with the devices.
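Downlink-based device selection can be sketched as: the parameter server ranks devices by their downlink channel quality and shares the global model only with the best ones each round. The gain values and selection size below are illustrative assumptions, not the paper's policy:

```python
# Illustrative sketch of downlink-based device selection: the parameter
# server picks the devices with the strongest downlink channels to receive
# the global model this round.

def select_devices(downlink_gains, num_selected):
    """downlink_gains: dict mapping device id -> downlink channel gain.
    Returns the ids of the num_selected devices with the best channels."""
    ranked = sorted(downlink_gains, key=downlink_gains.get, reverse=True)
    return ranked[:num_selected]

gains = {"dev0": 0.2, "dev1": 0.9, "dev2": 0.5}
selected = select_devices(gains, 2)  # ["dev1", "dev2"]
```

Selecting on the downlink (rather than the uplink) matters because a device that cannot reliably receive the current global model cannot train on it, regardless of its own upload quality.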
arXiv Detail & Related papers (2021-07-07T22:42:39Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.