Personalized Semi-Supervised Federated Learning for Human Activity
Recognition
- URL: http://arxiv.org/abs/2104.08094v2
- Date: Mon, 19 Apr 2021 12:56:03 GMT
- Title: Personalized Semi-Supervised Federated Learning for Human Activity
Recognition
- Authors: Claudio Bettini, Gabriele Civitarese, Riccardo Presotto
- Abstract summary: We propose FedHAR, a novel hybrid method for human activity recognition.
FedHAR combines semi-supervised and federated learning.
We show that FedHAR reaches recognition rates and personalization capabilities similar to state-of-the-art FL supervised approaches.
- Score: 1.9014535120129343
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The most effective data-driven methods for human activity recognition (HAR)
are based on supervised learning applied to the continuous stream of sensor
data. However, these methods perform well on restricted sets of activities in
domains for which a fully labeled dataset exists. It is still a challenge to
cope with the intra- and inter-subject variability of activity execution in
large-scale real-world deployments. Semi-supervised
learning approaches for HAR have been proposed to address the challenge of
acquiring the large amount of labeled data that is necessary in realistic
settings. However, their centralised architecture incurs scalability and
privacy problems when the process involves a large number of users. Federated
Learning (FL) is a promising paradigm to address these problems. However, the
FL methods that have been proposed for HAR assume that the participating users
can always obtain labels to train their local models. In this work, we propose
FedHAR: a novel hybrid method for HAR that combines semi-supervised and
federated learning. Specifically, FedHAR combines active learning and label
propagation to semi-automatically annotate the local streams of unlabeled
sensor data, and it relies on FL to build a global activity model in a scalable
and privacy-aware fashion. FedHAR also includes a transfer learning strategy to
personalize the global model on each user. We evaluated our method on two
public datasets, showing that FedHAR reaches recognition rates and
personalization capabilities similar to state-of-the-art FL supervised
approaches. As a major advantage, FedHAR requires only a very limited amount of
annotated data to populate a pre-trained model and a small number of active
learning questions that quickly decrease while using the system, leading to an
effective and scalable solution for the data scarcity problem of HAR.
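The pipeline described in the abstract (confidence-based pseudo-labeling of local streams, active-learning queries for uncertain samples, and federated averaging of client models) can be sketched in miniature. The following is an illustrative toy, not the authors' FedHAR implementation: the confidence threshold, the toy linear classifier, and all function names are assumptions.

```python
import math

CONF_THRESHOLD = 0.8  # assumed cutoff for accepting a pseudo-label

def predict(model, x):
    """Toy linear scorer squashed to (0, 1); stands in for the activity classifier."""
    score = sum(w * xi for w, xi in zip(model, x))
    return 1.0 / (1.0 + math.exp(-score))

def local_update(model, labeled, unlabeled, lr=0.1):
    """One client's semi-supervised step: pseudo-label, query, then train."""
    queries = []                  # low-confidence samples for active-learning questions
    data = list(labeled)          # (features, label) pairs
    for x in unlabeled:
        p = predict(model, x)
        if p >= CONF_THRESHOLD:
            data.append((x, 1))   # confident positive pseudo-label
        elif p <= 1.0 - CONF_THRESHOLD:
            data.append((x, 0))   # confident negative pseudo-label
        else:
            queries.append(x)     # uncertain: ask the user for a label
    new_model = list(model)
    for x, y in data:             # one SGD pass on a logistic loss
        err = predict(new_model, x) - y
        for i, xi in enumerate(x):
            new_model[i] -= lr * err * xi
    return new_model, len(data), queries

def fedavg(updates):
    """Server side: average client models weighted by local sample count."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(m[i] * n for m, n in updates) / total for i in range(dim)]
```

The personalization step of the paper would correspond to each client fine-tuning the averaged model on its own local data after aggregation (e.g., calling `local_update` again on the client), rather than deploying the global model as-is.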
Related papers
- CDFL: Efficient Federated Human Activity Recognition using Contrastive Learning and Deep Clustering [12.472038137777474]
Human Activity Recognition (HAR) is vital for the automation and intelligent identification of human actions through data from diverse sensors.
Traditional machine learning approaches, which aggregate data on a central server for centralized processing, are memory-intensive and raise privacy concerns.
This work proposes CDFL, an efficient federated learning framework for image-based HAR.
arXiv Detail & Related papers (2024-07-17T03:17:53Z)
- Federated Unlearning for Human Activity Recognition [11.287645073129108]
We propose a lightweight machine unlearning method for refining the FL HAR model by selectively removing a portion of a client's training data.
Our method achieves unlearning accuracy comparable to retraining, resulting in speedups ranging from hundreds to thousands of times.
arXiv Detail & Related papers (2024-01-17T15:51:36Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Cross-Domain HAR: Few Shot Transfer Learning for Human Activity Recognition [0.2944538605197902]
We present an approach for economic use of publicly available labeled HAR datasets for effective transfer learning.
We introduce a novel transfer learning framework, Cross-Domain HAR, which follows the teacher-student self-training paradigm.
We demonstrate the effectiveness of our approach for practically relevant few shot activity recognition scenarios.
arXiv Detail & Related papers (2023-10-22T19:13:25Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG)
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Meta-HAR: Federated Representation Learning for Human Activity Recognition [21.749861229805727]
Human activity recognition (HAR) based on mobile sensors plays an important role in ubiquitous computing.
We propose Meta-HAR, a federated representation learning framework, in which a signal embedding network is meta-learned in a federated manner.
In order to boost the representation ability of the embedding network, we treat the HAR problem at each user as a different task and train the shared embedding network through a Model-Agnostic Meta-learning framework.
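The meta-learning loop summarized above can be sketched with a first-order (Reptile-style) approximation of the MAML idea, where each user's HAR data is one task and the shared parameter is moved toward task-adapted parameters so that a few local steps personalize it. This toy scalar version is an illustration only; the learning rates, step counts, and function names are assumptions, not Meta-HAR's actual procedure.

```python
def adapt(theta, task_grad, inner_lr=0.1, steps=3):
    """Inner loop: a few gradient steps on one user's (one task's) data."""
    for _ in range(steps):
        theta = theta - inner_lr * task_grad(theta)
    return theta

def meta_step(theta, tasks, outer_lr=0.5):
    """Outer loop: move the shared parameter toward the mean of the
    task-adapted parameters, favoring initializations that adapt fast."""
    adapted = [adapt(theta, g) for g in tasks]
    return theta + outer_lr * (sum(adapted) / len(adapted) - theta)
```

In Meta-HAR this role is played by a signal embedding network rather than a scalar, and the inner/outer updates are computed federatedly across users.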
arXiv Detail & Related papers (2021-05-31T11:04:39Z)
- Privacy-Preserving Self-Taught Federated Learning for Heterogeneous Data [6.545317180430584]
Federated learning (FL) was proposed to enable joint training of a deep learning model using the local data in each party without revealing the data to others.
In this work, we propose an FL method called self-taught federated learning to address the aforementioned issues.
In this method, only latent variables are transmitted to other parties for model training, while privacy is preserved by storing the data and parameters of activations, weights, and biases locally.
arXiv Detail & Related papers (2021-02-11T08:07:51Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.