Semi-supervised Federated Learning for Activity Recognition
- URL: http://arxiv.org/abs/2011.00851v3
- Date: Wed, 31 Mar 2021 10:47:40 GMT
- Title: Semi-supervised Federated Learning for Activity Recognition
- Authors: Yuchen Zhao, Hanyang Liu, Honglin Li, Payam Barnaghi, Hamed Haddadi
- Abstract summary: Training deep learning models on in-home IoT sensory data is commonly used to recognise human activities.
Recently, federated learning systems that use edge devices as clients to support local human activity recognition have emerged.
We propose an activity recognition system that uses semi-supervised federated learning.
- Score: 9.720890017788676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training deep learning models on in-home IoT sensory data is commonly used to
recognise human activities. Recently, federated learning systems that use edge
devices as clients to support local human activity recognition have emerged as
a new paradigm to combine local (individual-level) and global (group-level)
models. This approach provides better scalability and generalisability and also
offers better privacy compared with the traditional centralised analysis and
learning models. The assumption behind federated learning, however, relies on
supervised learning on clients. This requires a large volume of labelled data,
which is difficult to collect in uncontrolled IoT environments such as remote
in-home monitoring.
In this paper, we propose an activity recognition system that uses
semi-supervised federated learning, wherein clients conduct unsupervised
learning on autoencoders with unlabelled local data to learn general
representations, and a cloud server conducts supervised learning on an activity
classifier with labelled data. Our experimental results show that using a long
short-term memory autoencoder and a Softmax classifier, the accuracy of our
proposed system is higher than that of both centralised systems and
semi-supervised federated learning using data augmentation. The accuracy is
also comparable to that of supervised federated learning systems. Meanwhile, we
demonstrate that our system reduces the number of labels needed and the size of
local models, and performs local activity recognition faster than supervised
federated learning does.
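The pipeline described in the abstract can be sketched in a few lines: clients run unsupervised autoencoder training on unlabelled local data, the server aggregates the encoder weights, and the server then trains a supervised softmax classifier on labelled representations. The following is a toy numpy illustration under stated assumptions: a linear autoencoder stands in for the paper's LSTM autoencoder, plain federated averaging is assumed for aggregation, and all data, shapes, and step sizes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, K = 8, 4, 2                     # input dim, representation dim, classes
centres = rng.normal(size=(K, D)) * 2.0

def make_data(n):
    """Synthetic stand-in for sensory windows: two Gaussian clusters."""
    y = rng.integers(0, K, size=n)
    return centres[y] + rng.normal(scale=0.5, size=(n, D)), y

# Server broadcasts a common initialisation to all clients.
We0 = rng.normal(scale=0.1, size=(D, H))
Wd0 = rng.normal(scale=0.1, size=(H, D))

def local_unsupervised_step(X, We, Wd, epochs=300, lr=0.005):
    """Client side: train an autoencoder on unlabelled local data
    (a linear stand-in for the paper's LSTM autoencoder); return the encoder."""
    n = len(X)
    for _ in range(epochs):
        Z = X @ We                    # encode
        E = Z @ Wd - X                # reconstruction error
        gWd = Z.T @ E / n
        gWe = X.T @ (E @ Wd.T) / n
        We -= lr * gWe
        Wd -= lr * gWd
    return We

# 1) Clients: unsupervised learning on unlabelled local data.
encoders = [local_unsupervised_step(make_data(200)[0], We0.copy(), Wd0.copy())
            for _ in range(3)]

# 2) Server: federated averaging of the encoder weights.
We_global = np.mean(encoders, axis=0)

# 3) Server: supervised softmax classifier on labelled representations.
Xl, yl = make_data(300)
Z, Y = Xl @ We_global, np.eye(K)[yl]
Wc = np.zeros((H, K))
for _ in range(500):
    S = Z @ Wc
    P = np.exp(S - S.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True) # softmax probabilities
    Wc -= 0.1 * Z.T @ (P - Y) / len(Z)

# Inference is cheap on clients: one encoder pass plus one linear layer.
Xt, yt = make_data(200)
acc = np.mean(np.argmax(Xt @ We_global @ Wc, axis=1) == yt)
print(f"held-out accuracy: {acc:.2f}")
```

Because only the small encoder lives on the client and no labels are required locally, this division of labour is what yields the reduced label count and model size reported above.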
Related papers
- Self-Supervised Learning for User Localization [8.529237718266042]
Machine learning techniques have shown remarkable accuracy in localization tasks.
Their dependency on vast amounts of labeled data, particularly Channel State Information (CSI) and corresponding coordinates, remains a bottleneck.
We propose a pioneering approach that leverages self-supervised pretraining on unlabeled data to boost the performance of supervised learning for user localization based on CSI.
arXiv Detail & Related papers (2024-04-19T21:49:10Z)
- Reinforcement Learning Based Multi-modal Feature Fusion Network for Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains by employing the OS-MN40, OS-MN40-Miss, and Cifar10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z)
- Network Anomaly Detection Using Federated Learning [0.483420384410068]
We introduce a robust and scalable framework that enables efficient network anomaly detection.
We leverage federated learning, in which multiple participants train a global model jointly.
The proposed method performs better than baseline machine learning techniques on the UNSW-NB15 data set.
arXiv Detail & Related papers (2023-03-13T20:16:30Z)
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z)
- Evaluation and comparison of federated learning algorithms for Human Activity Recognition on smartphones [0.5039813366558306]
Federated Learning (FL) has been introduced as a new machine learning paradigm enhancing the use of local devices.
In this paper, we propose a new FL algorithm, termed FedDist, which can modify models during training by identifying dissimilarities between neurons among the clients.
Results have shown the ability of FedDist to adapt to heterogeneous data and the capability of FL to deal with asynchronous situations.
arXiv Detail & Related papers (2022-10-30T18:47:23Z)
- Federated Self-Supervised Learning in Heterogeneous Settings: Limits of a Baseline Approach on HAR [0.5039813366558306]
We show that standard lightweight autoencoder and standard Federated Averaging fail to learn a robust representation for Human Activity Recognition.
These findings advocate for a more intensive research effort in Federated Self Supervised Learning.
arXiv Detail & Related papers (2022-07-17T14:15:45Z)
- Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
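The alternating scheme this entry describes, many cheap updates of a low-dimensional per-client head between each update of a shared representation that the server averages, can be sketched as below. This is a hedged toy sketch in numpy, not the paper's exact algorithm: the linear model, client construction, round counts, and step sizes are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 10, 3, 100                  # ambient dim, representation dim, samples per client

# Synthetic clients: each regression target lies in a shared k-dim subspace.
B_true = rng.normal(size=(d, k))
clients = []
for _ in range(3):
    X = rng.normal(size=(n, d))
    h_true = rng.normal(size=k)
    clients.append((X, X @ B_true @ h_true))

B = rng.normal(size=(d, k)) * 0.1     # shared representation (global)
heads = [rng.normal(size=k) * 0.1 for _ in clients]   # unique local heads

def mse(B, heads):
    return np.mean([np.mean((X @ B @ h - y) ** 2)
                    for (X, y), h in zip(clients, heads)])

base = mse(B, heads)
for _ in range(100):                  # communication rounds
    proposals = []
    for i, (X, y) in enumerate(clients):
        h = heads[i]
        for _ in range(10):           # many cheap low-dimensional head updates
            h -= 0.05 * (X @ B).T @ (X @ B @ h - y) / n
        heads[i] = h
        r = X @ B @ h - y             # one local update of the representation
        proposals.append(B - 0.05 * np.outer(X.T @ r, h) / n)
    B = np.mean(proposals, axis=0)    # server averages the representations

final = mse(B, heads)
print(f"mean squared error: {base:.2f} -> {final:.4f}")
```

The local heads never leave the clients; only the d-by-k representation is communicated and averaged, which is where the personalisation and the communication saving both come from.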
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
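The communication saving comes from sending low-bit quantised gradients instead of full-precision tensors. The sketch below shows a generic nonlinear (square-root companding) quantiser, which spends more quantisation levels on small magnitudes; it illustrates the idea only and is not the exact CosSGD mapping, whose details are not given in this summary.

```python
import numpy as np

def quantize(g, bits=4):
    """Compress a gradient to `bits`-bit magnitudes plus int8 signs.
    Square-root companding allocates more levels to small magnitudes --
    a generic nonlinear quantiser, not the exact CosSGD mapping."""
    levels = 2 ** bits - 1
    scale = float(np.abs(g).max()) or 1.0     # avoid dividing by zero
    t = np.sqrt(np.abs(g) / scale)            # nonlinear companding into [0, 1]
    q = np.round(t * levels).astype(np.uint8)
    return q, np.sign(g).astype(np.int8), scale

def dequantize(q, sign, scale, bits=4):
    """Invert the companding on the server before aggregation."""
    levels = 2 ** bits - 1
    return sign * (q / levels) ** 2 * scale

g = np.random.default_rng(0).normal(size=1000)
q, s, m = quantize(g)                         # 4-bit payload vs. 64-bit floats
g_hat = dequantize(q, s, m)
err = np.abs(g_hat - g).max()
print(f"max reconstruction error: {err:.4f} (gradient scale {m:.3f})")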
arXiv Detail & Related papers (2020-12-15T12:20:28Z) - Federated Self-Supervised Learning of Multi-Sensor Representations for
Embedded Intelligence [8.110949636804772]
Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models.
We propose a self-supervised approach termed textitscalogram-signal correspondence learning based on wavelet transform to learn useful representations from unlabeled sensor inputs.
We extensively assess the quality of learned features with our multi-view strategy on diverse public datasets, achieving strong performance in all domains.
arXiv Detail & Related papers (2020-07-25T21:59:17Z) - Decentralised Learning from Independent Multi-Domain Labels for Person
Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID)
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients)
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data nor collecting any
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.