Meta-HAR: Federated Representation Learning for Human Activity
Recognition
- URL: http://arxiv.org/abs/2106.00615v1
- Date: Mon, 31 May 2021 11:04:39 GMT
- Title: Meta-HAR: Federated Representation Learning for Human Activity
Recognition
- Authors: Chenglin Li, Di Niu, Bei Jiang, Xiao Zuo and Jianming Yang
- Abstract summary: Human activity recognition (HAR) based on mobile sensors plays an important role in ubiquitous computing.
We propose Meta-HAR, a federated representation learning framework, in which a signal embedding network is meta-learned in a federated manner.
In order to boost the representation ability of the embedding network, we treat the HAR problem at each user as a different task and train the shared embedding network through a Model-Agnostic Meta-learning framework.
- Score: 21.749861229805727
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Human activity recognition (HAR) based on mobile sensors plays an important
role in ubiquitous computing. However, the rise of data regulatory constraints
precludes collecting private and labeled signal data from personal devices at
scale. Federated learning has emerged as a decentralized alternative for
model training: it iteratively aggregates locally updated models into a
shared global model and can therefore leverage decentralized, private data
without central collection. However, the effectiveness of federated
learning for HAR is affected by the fact that each user has different activity
types and even a different signal distribution for the same activity type.
Furthermore, it is uncertain whether a single trained global model can generalize
well to individual users or new users with heterogeneous data. In this paper,
we propose Meta-HAR, a federated representation learning framework, in which a
signal embedding network is meta-learned in a federated manner, while the
learned signal representations are further fed into a personalized
classification network at each user for activity prediction. In order to boost
the representation ability of the embedding network, we treat the HAR problem
at each user as a different task and train the shared embedding network through
a Model-Agnostic Meta-learning framework, such that the embedding network can
generalize to any individual user. Personalization is further achieved on top
of the robustly learned representations in an adaptation procedure. We
conducted extensive experiments based on two publicly available HAR datasets as
well as a newly created HAR dataset. Results verify that Meta-HAR is effective
at maintaining high test accuracies for individual users, including new users,
and significantly outperforms several baselines, including Federated Averaging,
Reptile and even centralized learning in certain cases.
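As a rough illustration of the training scheme described above, the following PyTorch sketch meta-learns a shared embedding network across users while each user keeps a personal classification head. It is a first-order (FOMAML-style) approximation, not the authors' implementation: the network shapes, learning rates, toy random data, and the names EmbeddingNet, local_adapt, and meta_round are all assumptions made for illustration.

import copy
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    # Shared signal-embedding network (placeholder MLP over flattened windows).
    def __init__(self, in_dim=128, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))

    def forward(self, x):
        return self.net(x)

def local_adapt(shared, head, support, inner_lr=1e-2, steps=1):
    # Clone the shared embedding net and adapt it, together with the user's
    # personal head, on the user's support data (HAR at each user as a task).
    emb = copy.deepcopy(shared)
    opt = torch.optim.SGD(list(emb.parameters()) + list(head.parameters()),
                          lr=inner_lr)
    loss_fn = nn.CrossEntropyLoss()
    x, y = support
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(head(emb(x)), y).backward()
        opt.step()
    return emb

def meta_round(shared, users, meta_lr=1e-3):
    # One federated meta-learning round: every sampled user adapts locally,
    # then the query-set gradients of the adapted embedding nets are averaged
    # and applied to the shared initialization (first-order approximation).
    loss_fn = nn.CrossEntropyLoss()
    avg_grads = [torch.zeros_like(p) for p in shared.parameters()]
    for head, support, query in users:
        emb = local_adapt(shared, head, support)
        x, y = query
        loss = loss_fn(head(emb(x)), y)
        for acc, g in zip(avg_grads,
                          torch.autograd.grad(loss, list(emb.parameters()))):
            acc += g / len(users)
    with torch.no_grad():
        for p, g in zip(shared.parameters(), avg_grads):
            p -= meta_lr * g

# Toy usage with random data: two users, each with a personal 6-way head.
shared = EmbeddingNet()
users = []
for _ in range(2):
    head = nn.Linear(64, 6)  # personal classifier head, kept on-device
    support = (torch.randn(16, 128), torch.randint(0, 6, (16,)))
    query = (torch.randn(16, 128), torch.randint(0, 6, (16,)))
    users.append((head, support, query))
meta_round(shared, users)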
Related papers
- Personalized Federated Learning with Feature Alignment and Classifier
Collaboration [13.320381377599245]
Data heterogeneity is one of the most challenging issues in federated learning.
One approach for deep neural network-based tasks is to learn a shared feature representation together with a customized classifier head for each client.
In this work, we conduct explicit local-global feature alignment by leveraging global semantic knowledge for learning a better representation.
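As a rough sketch of this alignment idea, the PyTorch snippet below penalizes the distance between a client's local features and server-provided per-class prototypes; the prototype source, loss weight, and tensor shapes are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

def alignment_loss(features, labels, global_protos, weight=1.0):
    # MSE between each sample's feature and its class's global prototype.
    target = global_protos[labels]  # one (dim,) prototype per sample
    return weight * nn.functional.mse_loss(features, target)

# Toy usage: 4 classes with 16-dim prototypes, one local batch of features.
protos = torch.randn(4, 16)
feats, ys = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = alignment_loss(feats, ys, protos)  # added to the client's CE loss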
arXiv Detail & Related papers (2023-06-20T19:58:58Z)
- Distributed Learning over Networks with Graph-Attention-Based Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as a node in a graph and the node-specific parameters as its features, the benefits of the graph attention mechanism can be inherited.
arXiv Detail & Related papers (2023-05-22T13:48:30Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Personalized Semi-Supervised Federated Learning for Human Activity Recognition [1.9014535120129343]
We propose FedHAR: a novel hybrid method for human activity recognition.
FedHAR combines semi-supervised and federated learning.
We show that FedHAR reaches recognition rates and personalization capabilities similar to state-of-the-art FL supervised approaches.
arXiv Detail & Related papers (2021-04-15T10:24:18Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
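A minimal PyTorch sketch of this alternating scheme, with assumed dimensions, learning rates, and helper names, might look as follows; it illustrates the idea rather than reproducing the paper's reference code.

import copy
import torch
import torch.nn as nn

def client_update(rep, head, data, head_steps=5, lr=1e-2):
    # Many local steps on the low-dimensional personal head with the shared
    # representation frozen, then one step on the representation itself.
    loss_fn = nn.CrossEntropyLoss()
    x, y = data
    head_opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(head_steps):
        head_opt.zero_grad()
        loss_fn(head(rep(x).detach()), y).backward()
        head_opt.step()
    rep_opt = torch.optim.SGD(rep.parameters(), lr=lr)
    rep_opt.zero_grad()
    loss_fn(head(rep(x)), y).backward()
    rep_opt.step()
    return [p.detach().clone() for p in rep.parameters()]

def server_average(rep, client_params):
    # FedAvg over the shared representation only; heads stay on the clients.
    with torch.no_grad():
        for i, p in enumerate(rep.parameters()):
            p.copy_(torch.stack([cp[i] for cp in client_params]).mean(0))

# Toy round: three clients share a representation but keep personal heads.
rep = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
heads = [nn.Linear(8, 4) for _ in range(3)]
data = [(torch.randn(16, 32), torch.randint(0, 4, (16,))) for _ in range(3)]
updates = [client_update(copy.deepcopy(rep), h, d)
           for h, d in zip(heads, data)]
server_average(rep, updates)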
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Adaptive Prototypical Networks with Label Words and Joint Representation Learning for Few-Shot Relation Classification [17.237331828747006]
This work focuses on few-shot relation classification (FSRC).
We propose an adaptive mixture mechanism to add label words to the representation of the class prototype.
Experiments have been conducted on FewRel under different few-shot (FS) settings.
arXiv Detail & Related papers (2021-01-10T11:25:42Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
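A k-means-style sketch of one such aggregation step, assuming flattened weight vectors and a fixed number of centers (both simplifications made for illustration):

import torch

def multi_center_aggregate(centers, user_updates):
    # Assign each user's flattened weight update to the nearest center, then
    # re-estimate every center as the mean of its assigned updates.
    assign = [min(range(len(centers)),
                  key=lambda k: torch.norm(u - centers[k]).item())
              for u in user_updates]
    new_centers = []
    for k, c in enumerate(centers):
        members = [u for u, a in zip(user_updates, assign) if a == k]
        # Keep the old center if no user was matched to it this round.
        new_centers.append(torch.stack(members).mean(0) if members else c)
    return new_centers, assign

# Toy usage: five user updates clustered onto two global models.
centers = [torch.randn(10) for _ in range(2)]
updates = [torch.randn(10) for _ in range(5)]
centers, assign = multi_center_aggregate(centers, updates)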
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- Mining Implicit Entity Preference from User-Item Interaction Data for Knowledge Graph Completion via Adversarial Learning [82.46332224556257]
We propose a novel adversarial learning approach by leveraging user interaction data for the Knowledge Graph Completion task.
Our generator is isolated from user interaction data, and serves to improve the performance of the discriminator.
To discover implicit entity preference of users, we design an elaborate collaborative learning algorithms based on graph neural networks.
arXiv Detail & Related papers (2020-03-28T05:47:33Z)
- Personalized Federated Learning: A Meta-Learning Approach [28.281166755509886]
In Federated Learning, we aim to train models across multiple computing units (users).
In this paper, we study a personalized variant of the federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data.
arXiv Detail & Related papers (2020-02-19T01:08:46Z)
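The adaptation step shared by this approach and Meta-HAR's personalization procedure can be sketched in a few lines of PyTorch; the model shape, learning rate, and step count below are illustrative assumptions, not values from either paper.

import copy
import torch
import torch.nn as nn

def personalize(init_model, local_data, lr=1e-2, steps=3):
    # Adapt a copy of the meta-learned initialization to one user's data
    # with "one or a few steps of gradient descent"; the global init is kept.
    model = copy.deepcopy(init_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, y in local_data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Toy usage: a 6-class linear model adapted on two random local batches.
init = nn.Linear(64, 6)
batches = [(torch.randn(8, 64), torch.randint(0, 6, (8,))) for _ in range(2)]
user_model = personalize(init, batches)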