Federated Action Recognition on Heterogeneous Embedded Devices
- URL: http://arxiv.org/abs/2107.12147v1
- Date: Sun, 18 Jul 2021 02:33:24 GMT
- Title: Federated Action Recognition on Heterogeneous Embedded Devices
- Authors: Pranjal Jain, Shreyas Goenka, Saurabh Bagchi, Biplab Banerjee, Somali
Chaterji
- Abstract summary: In this work, we enable clients with limited computing power to perform action recognition, a computationally heavy task.
We first perform model compression at the central server through knowledge distillation on a large dataset.
The fine-tuning is required because the limited data present in smaller datasets is not adequate for action recognition models to learn complex spatio-temporal features.
- Score: 16.88104153104136
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning allows a large number of devices to jointly learn a model
without sharing data. In this work, we enable clients with limited computing
power to perform action recognition, a computationally heavy task. We first
perform model compression at the central server through knowledge distillation
on a large dataset. This allows the model to learn complex features and serves
as an initialization for model fine-tuning. The fine-tuning is required because
the limited data present in smaller datasets is not adequate for action
recognition models to learn complex spatio-temporal features. Because the
clients are often heterogeneous in their computing resources, we use
asynchronous federated optimization, for which we further show a convergence
bound. We compare our approach to two baselines: fine-tuning at the central
server (no clients) and fine-tuning with (heterogeneous) clients using
synchronous federated averaging. We empirically show on a testbed of
heterogeneous embedded devices that we can perform action recognition with
comparable accuracy to the two baselines above, while our asynchronous learning
strategy reduces the training time by 40%, relative to synchronous learning.
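To make the two training regimes concrete, here is a minimal sketch of the recipe: a standard soft-target distillation loss for the compression step, then synchronous FedAvg versus a FedAsync-style staleness-weighted asynchronous update. All function names, constants, and the polynomial staleness decay are illustrative assumptions, not the authors' code.

```python
# Sketch of: (1) server-side model compression via knowledge distillation,
# (2) synchronous FedAvg aggregation, (3) asynchronous staleness-weighted update.
from typing import Dict, List

import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, lam=0.7):
    """Compression at the server: match the teacher's softened outputs
    (temperature T and mixing weight lam are assumed values)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return lam * soft + (1 - lam) * hard


def fedavg_aggregate(states: List[Dict], sizes: List[int]) -> Dict:
    """Synchronous baseline: wait for all clients, then size-weighted average."""
    total = sum(sizes)
    return {
        k: sum(s[k] * (n / total) for s, n in zip(states, sizes))
        for k in states[0]
    }


def async_update(server: Dict, client: Dict, staleness: int, alpha=0.6) -> Dict:
    """Asynchronous update: fold in one client the moment it finishes,
    down-weighting stale models so slow devices cannot derail the server."""
    a = alpha * (staleness + 1) ** -0.5  # polynomial decay (assumed)
    return {k: (1 - a) * server[k] + a * client[k] for k in server}
```

Because `async_update` never blocks on the slowest device, wall-clock training time drops, which is consistent with the 40% reduction reported above.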
Related papers
- FedAST: Federated Asynchronous Simultaneous Training [27.492821176616815]
Federated Learning (FL) enables devices or clients to collaboratively train machine learning (ML) models without sharing their private data.
Much of the existing work in FL focuses on efficiently learning a model for a single task.
In this paper, we propose simultaneous training of multiple FL models using a common set of datasets.
arXiv Detail & Related papers (2024-06-01T05:14:20Z)
- Federating Dynamic Models using Early-Exit Architectures for Automatic Speech Recognition on Heterogeneous Clients [12.008071873475169]
Federated learning is a technique that collaboratively learns a shared prediction model while keeping the data local on different clients.
We propose using dynamic architectures which, by employing early-exit solutions, can adapt their processing depending on the input and on the operating conditions.
This solution falls in the realm of partial training methods and brings two benefits: a single model is used on a variety of devices; federating the models after local training is straightforward.
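As a rough illustration of the early-exit idea (not the paper's ASR architecture), a model can attach a classifier head to every block and stop as soon as it is confident; the threshold and feature shapes below are assumptions.

```python
import torch
import torch.nn as nn


class EarlyExitNet(nn.Module):
    """Each block gets its own exit head; weak clients can train (and run)
    only a prefix of the blocks, yet all clients share one parameter set."""

    def __init__(self, blocks, feat_dim, num_classes):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)  # each maps (B, feat_dim) -> (B, feat_dim)
        self.exits = nn.ModuleList(nn.Linear(feat_dim, num_classes) for _ in blocks)

    def forward(self, x, threshold=0.9):
        logits = None
        for block, head in zip(self.blocks, self.exits):
            x = block(x)
            logits = head(x)
            conf = torch.softmax(logits, dim=1).max(dim=1).values
            if bool((conf >= threshold).all()):  # batch-level exit, for simplicity
                break
        return logits
```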
arXiv Detail & Related papers (2024-05-27T17:32:37Z)
- Federated Learning based on Pruning and Recovery [0.0]
This framework integrates asynchronous learning algorithms and pruning techniques.
It addresses the inefficiencies of traditional federated learning algorithms in scenarios involving heterogeneous devices.
It also tackles the staleness issue and inadequate training of certain clients in asynchronous algorithms.
arXiv Detail & Related papers (2024-03-16T14:35:03Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
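A generic sketch of a per-client adaptive optimizer in the AMSGrad family; FedLALR's actual learning-rate schedule is in the paper, so the fixed `lr` below is a placeholder assumption.

```python
import torch


class LocalAMSGrad:
    """Illustrative per-client AMSGrad: each client keeps its own optimizer
    state and learning rate rather than using one global schedule."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
        self.params = list(params)
        self.lr, self.betas, self.eps = lr, betas, eps
        self.m = [torch.zeros_like(p) for p in self.params]
        self.v = [torch.zeros_like(p) for p in self.params]
        self.v_hat = [torch.zeros_like(p) for p in self.params]

    @torch.no_grad()
    def step(self):
        b1, b2 = self.betas
        for i, p in enumerate(self.params):
            if p.grad is None:
                continue
            g = p.grad
            self.m[i] = b1 * self.m[i] + (1 - b1) * g
            self.v[i] = b2 * self.v[i] + (1 - b2) * g * g
            # AMSGrad keeps the running max of v for a non-increasing step size.
            self.v_hat[i] = torch.maximum(self.v_hat[i], self.v[i])
            p -= self.lr * self.m[i] / (self.v_hat[i].sqrt() + self.eps)
```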
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Structured Cooperative Learning with Graphical Model Priors [98.53322192624594]
We study how to train personalized models for different tasks on decentralized devices with limited local data.
We propose "Structured Cooperative Learning (SCooL)", in which a cooperation graph across devices is generated by a graphical model.
We evaluate SCooL and compare it with existing decentralized learning methods on an extensive set of benchmarks.
arXiv Detail & Related papers (2023-06-16T02:41:31Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
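A minimal sketch of contrastive online distillation, assuming an InfoNCE-style objective in which representations of the same sample (shared in place of raw data or model weights) act as positives; the temperature is an assumption.

```python
import torch
import torch.nn.functional as F


def contrastive_distillation_loss(local_reps, peer_reps, temperature=0.5):
    """Pull a client's representation toward the peer representation of the
    same sample and push it away from other samples in the batch."""
    z1 = F.normalize(local_reps, dim=1)
    z2 = F.normalize(peer_reps, dim=1)
    logits = z1 @ z2.t() / temperature  # (B, B) pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)
```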
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- A Personalized Federated Learning Algorithm: an Application in Anomaly Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares its learning with a centralized model (server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg) which aims to control weights communication and aggregation augmented with a tailored learning algorithm to personalize the resulting models at each client.
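PC-FedAvg's exact weighting and communication control are in the paper; the sketch below only illustrates the generic personalization pattern of averaging shared parameters while keeping client-specific ones local, with the `backbone.` prefix as an assumed naming convention.

```python
from typing import Dict, List


def personalized_aggregate(states: List[Dict], sizes: List[int],
                           shared_prefix: str = "backbone.") -> Dict:
    """Average only the shared parameters; personalized layers stay on the
    client and are never aggregated or communicated."""
    total = sum(sizes)
    return {
        k: sum(s[k] * (n / total) for s, n in zip(states, sizes))
        for k in states[0] if k.startswith(shared_prefix)
    }
```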
arXiv Detail & Related papers (2021-11-04T04:57:11Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
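FedKD communicates a small student model while a large teacher stays on-device; below is a hedged sketch of a mutual distillation loss in that spirit (temperature and equal weighting are assumptions, and FedKD's gradient compression is omitted).

```python
import torch.nn.functional as F


def mutual_distillation_loss(student_logits, teacher_logits, labels, T=2.0):
    """Student and teacher teach each other on local data; only the small
    student's updates are sent to the server, cutting communication cost."""
    s_log = F.log_softmax(student_logits / T, dim=1)
    t_log = F.log_softmax(teacher_logits / T, dim=1)
    kd_s = F.kl_div(s_log, t_log.exp().detach(), reduction="batchmean") * T * T
    kd_t = F.kl_div(t_log, s_log.exp().detach(), reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels) \
         + F.cross_entropy(teacher_logits, labels)
    return kd_s + kd_t + hard
```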
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- Real-time End-to-End Federated Learning: An Automotive Case Study [16.79939549201032]
We introduce an approach to real-time end-to-end Federated Learning combined with a novel asynchronous model aggregation protocol.
Our results show that asynchronous Federated Learning can significantly improve the prediction performance of local edge models and reach the same accuracy level as the centralized machine learning method.
arXiv Detail & Related papers (2021-03-22T14:16:16Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
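The joint-prediction idea can be sketched as an additive (residual) combination of a small shared model and a local personalized model; the class and attribute names are illustrative.

```python
import torch.nn as nn


class ResidualFederatedModel(nn.Module):
    """Prediction = shared server-side model + client-local residual model,
    so the shared model can stay small while personalization happens locally."""

    def __init__(self, shared: nn.Module, local: nn.Module):
        super().__init__()
        self.shared = shared  # federated across clients
        self.local = local    # trained only on this client's data

    def forward(self, x):
        return self.shared(x) + self.local(x)
```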
arXiv Detail & Related papers (2020-03-28T19:55:24Z)