FedStack: Personalized activity monitoring using stacked federated
learning
- URL: http://arxiv.org/abs/2209.13080v1
- Date: Tue, 27 Sep 2022 00:12:44 GMT
- Title: FedStack: Personalized activity monitoring using stacked federated
learning
- Authors: Thanveer Shaik, Xiaohui Tao, Niall Higgins, Raj Gururajan, Yuefeng Li,
Xujuan Zhou, U Rajendra Acharya
- Abstract summary: Federated learning is a relatively new AI technique designed to enhance data privacy.
Traditional federated learning requires identical architectural models to be trained across the local clients and global servers.
This work offers a privacy-protected system for hospitalized in-patients in a decentralized approach.
- Score: 12.792461572028449
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in remote patient monitoring (RPM) systems can recognize
various human activities to measure vital signs, including subtle motions from
superficial vessels. There is a growing interest in applying artificial
intelligence (AI) to this area of healthcare by addressing known limitations
and challenges such as predicting and classifying vital signs and physical
movements, which are considered crucial tasks. Federated learning is a
relatively new AI technique designed to enhance data privacy by decentralizing
traditional machine learning modeling. However, traditional federated learning
requires identical architectural models to be trained across the local clients
and global servers. This constrains the global model architecture because the
local models cannot be heterogeneous. To overcome this, this study proposes
FedStack, a novel federated learning architecture that supports ensembling
client models with heterogeneous architectures. This work offers a
privacy-protected system for hospitalized in-patients in a decentralized approach and identifies
optimum sensor placement. The proposed architecture was applied to a mobile
health sensor benchmark dataset from 10 different subjects to classify 12
routine activities. Three AI models (ANN, CNN, and Bi-LSTM) were trained on
individual subject data. The federated learning architecture was applied to
these models to build local and global models capable of state-of-the-art
performance. The local CNN model outperformed the ANN and Bi-LSTM models on
each subject's data. Our proposed work demonstrates better performance for
heterogeneous stacking of the local models than for homogeneous stacking.
This work sets the stage to build an enhanced RPM system that incorporates
client privacy to assist with clinical observations for patients in an acute
mental health facility and ultimately help to prevent unexpected death.
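The stacking idea described in the abstract can be illustrated with a minimal sketch: heterogeneous client models each emit class probabilities, and the server fits a meta-learner on the stacked outputs. Everything below is an illustrative assumption, not the paper's implementation — the toy 3-class data stands in for activity-sensor windows, and nearest-centroid classifiers stand in for the ANN/CNN/Bi-LSTM client models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 3-class dataset standing in for windows of activity-sensor data.
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
y = rng.integers(0, 3, size=300)
X = centers[y] + rng.normal(scale=0.7, size=(300, 2))

def make_client(seed):
    """Stand-in for one heterogeneous client model (ANN/CNN/Bi-LSTM):
    a nearest-centroid classifier fit on a private shard, returning
    per-class probability vectors."""
    r = np.random.default_rng(seed)
    idx = r.choice(len(X), size=100, replace=False)   # private local shard
    Xs, ys = X[idx], y[idx]
    cents = np.stack([Xs[ys == c].mean(axis=0) for c in range(3)])
    def predict_proba(Z):
        d = ((Z[:, None, :] - cents[None]) ** 2).sum(axis=-1)
        e = np.exp(-d)
        return e / e.sum(axis=1, keepdims=True)
    return predict_proba

clients = [make_client(s) for s in (1, 2, 3)]

# Server-side stacking: concatenate each client's class probabilities
# as meta-features and fit a softmax meta-learner by gradient descent.
meta_X = np.hstack([c(X) for c in clients])           # shape (300, 9)
W = np.zeros((meta_X.shape[1], 3))
onehot = np.eye(3)[y]
for _ in range(500):
    p = np.exp(meta_X @ W)
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * meta_X.T @ (p - onehot) / len(y)

pred = (meta_X @ W).argmax(axis=1)
print("stacked accuracy:", round((pred == y).mean(), 3))
```

The key point the sketch captures is that the clients need not share an architecture: the server only consumes their probability outputs, which all have the same shape regardless of how each client produced them.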
Related papers
- A Survey on Generative Recommendation: Data, Model, and Tasks [55.36322811257545]
Generative recommendation reconceptualizes recommendation as a generation task rather than discriminative scoring.
This survey provides a comprehensive examination through a unified tripartite framework spanning data, model, and task dimensions.
We identify five key advantages: world knowledge integration, natural language understanding, reasoning capabilities, scaling laws, and creative generation.
arXiv Detail & Related papers (2025-10-31T04:02:58Z) - Integrating Genomics into Multimodal EHR Foundation Models [56.31910745104141]
This paper introduces an innovative EHR foundation model that integrates Polygenic Risk Scores (PRS) as a foundational data modality.
The framework aims to learn complex relationships between clinical data and genetic predispositions.
This approach is pivotal for unlocking new insights into disease prediction, proactive health management, risk stratification, and personalized treatment strategies.
arXiv Detail & Related papers (2025-10-24T15:56:40Z) - UNIFORM: Unifying Knowledge from Large-scale and Diverse Pre-trained Models [62.76435672183968]
We introduce a novel framework, namely UNIFORM, for knowledge transfer from a diverse set of off-the-shelf models into one student model.
We propose a dedicated voting mechanism to capture the consensus of knowledge both at the logit level and at the feature level.
Experiments demonstrate that UNIFORM effectively enhances unsupervised object recognition performance compared to strong knowledge transfer baselines.
arXiv Detail & Related papers (2025-08-27T00:56:11Z) - Enhancing Federated Learning Through Secure Cluster-Weighted Client Aggregation [4.869042695112397]
Federated learning (FL) has emerged as a promising paradigm in machine learning.
In FL, a global model is trained iteratively on local datasets residing on individual devices.
This paper introduces a novel FL framework, ClusterGuardFL, that employs dissimilarity scores, k-means clustering, and reconciliation confidence scores to dynamically assign weights to client updates.
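The cluster-weighted aggregation idea can be sketched as follows: flattened client updates are clustered with k-means and each update is weighted by its cluster's size, so a small outlying cluster contributes less to the global model. This is a simplified illustration on synthetic updates; ClusterGuardFL's dissimilarity and reconciliation confidence scores are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated flattened client updates: eight benign updates near the
# true direction, two from a drifted (e.g. noisy or poisoned) cluster.
true_update = np.ones(10)
benign = true_update + rng.normal(scale=0.1, size=(8, 10))
drifted = true_update + 10.0 + rng.normal(scale=0.1, size=(2, 10))
updates = np.vstack([benign, drifted])

def kmeans_labels(Z, k=2, iters=20):
    """Plain Lloyd's k-means with farthest-point init (deterministic)."""
    cents = [Z[0]]
    for _ in range(k - 1):
        d = np.min([((Z - c) ** 2).sum(axis=1) for c in cents], axis=0)
        cents.append(Z[d.argmax()])
    cents = np.stack(cents)
    for _ in range(iters):
        lab = ((Z[:, None] - cents[None]) ** 2).sum(axis=-1).argmin(axis=1)
        cents = np.stack([Z[lab == c].mean(axis=0) if (lab == c).any()
                          else cents[c] for c in range(k)])
    return lab

labels = kmeans_labels(updates)
sizes = np.bincount(labels, minlength=2)
# Weight each update by its cluster's size, so the minority
# (assumed anomalous) cluster contributes less to the aggregate.
w = sizes[labels] / sizes[labels].sum()
aggregated = (w[:, None] * updates).sum(axis=0)
```

Compared with a plain unweighted mean, the cluster-weighted aggregate stays closer to the benign direction because the two drifted updates share only their small cluster's weight.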
arXiv Detail & Related papers (2025-03-29T04:29:24Z) - UniSTD: Towards Unified Spatio-Temporal Learning across Diverse Disciplines [64.84631333071728]
We introduce UniSTD, a unified Transformer-based framework for spatio-temporal modeling.
Our work demonstrates that a task-specific vision-text approach can build a generalizable model for spatio-temporal learning.
We also introduce a temporal module to incorporate temporal dynamics explicitly.
arXiv Detail & Related papers (2025-03-26T17:33:23Z) - FedSKD: Aggregation-free Model-heterogeneous Federated Learning using Multi-dimensional Similarity Knowledge Distillation [7.944298319589845]
Federated learning (FL) enables privacy-preserving collaborative model training without direct data sharing.
Model-heterogeneous FL (MHFL) allows clients to train personalized models with heterogeneous architectures tailored to their computational resources and application-specific needs.
While peer-to-peer (P2P) FL removes server dependence, it suffers from model drift and knowledge dilution, limiting its effectiveness in heterogeneous settings.
We propose FedSKD, a novel MHFL framework that facilitates direct knowledge exchange through round-robin model circulation.
arXiv Detail & Related papers (2025-03-23T05:33:10Z) - FEDKIM: Adaptive Federated Knowledge Injection into Medical Foundation Models [54.09244105445476]
This study introduces a novel knowledge injection approach, FedKIM, to scale the medical foundation model within a federated learning framework.
FedKIM leverages lightweight local models to extract healthcare knowledge from private data and integrates this knowledge into a centralized foundation model.
Our experiments across twelve tasks in seven modalities demonstrate the effectiveness of FedKIM in various settings.
arXiv Detail & Related papers (2024-08-17T15:42:29Z) - L-SFAN: Lightweight Spatially-focused Attention Network for Pain Behavior Detection [44.016805074560295]
Chronic Low Back Pain (CLBP) afflicts millions globally, significantly impacting individuals' well-being and imposing economic burdens on healthcare systems.
While artificial intelligence (AI) and deep learning offer promising avenues for analyzing pain-related behaviors to improve rehabilitation strategies, current models, including convolutional neural networks (CNNs), have limitations.
We introduce L-SFAN, a lightweight CNN architecture incorporating 2D filters designed to capture the spatial-temporal interplay of data from motion capture and surface electromyography sensors.
arXiv Detail & Related papers (2024-06-07T12:01:37Z) - Brain Storm Optimization Based Swarm Learning for Diabetic Retinopathy Image Classification [5.440545944342685]
This paper integrates the brain storm optimization algorithm into the swarm learning framework, yielding a method named BSO-SL.
The proposed method has been validated on a real-world diabetic retinopathy image classification dataset.
arXiv Detail & Related papers (2024-04-24T01:37:20Z) - OpenMEDLab: An Open-source Platform for Multi-modality Foundation Models
in Medicine [55.29668193415034]
We present OpenMEDLab, an open-source platform for multi-modality foundation models.
It encapsulates solutions of pioneering attempts in prompting and fine-tuning large language and vision models for frontline clinical and bioinformatic applications.
It opens access to a group of pre-trained foundation models for various medical image modalities, clinical text, protein engineering, etc.
arXiv Detail & Related papers (2024-02-28T03:51:02Z) - Automated Fusion of Multimodal Electronic Health Records for Better
Medical Predictions [48.0590120095748]
We propose a novel neural architecture search (NAS) framework named AutoFM, which can automatically search for the optimal model architectures for encoding diverse input modalities and fusion strategies.
We conduct thorough experiments on real-world multi-modal EHR data and prediction tasks, and the results demonstrate that our framework achieves significant performance improvement over existing state-of-the-art methods.
arXiv Detail & Related papers (2024-01-20T15:14:14Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Towards Personalized Federated Learning via Heterogeneous Model
Reassembly [84.44268421053043]
pFedHR is a framework that leverages heterogeneous model reassembly to achieve personalized federated learning.
pFedHR dynamically generates diverse personalized models in an automated manner.
arXiv Detail & Related papers (2023-08-16T19:36:01Z) - SPIDER: Searching Personalized Neural Architecture for Federated
Learning [17.61748275091843]
Federated learning (FL) assists machine learning when data cannot be shared with a centralized server due to privacy and regulatory restrictions.
Recent advancements in FL use predefined architecture-based learning for all the clients.
We introduce SPIDER, an algorithmic framework that aims to Search Personalized neural architecture for federated learning.
arXiv Detail & Related papers (2021-12-27T23:42:15Z) - A Personalized Federated Learning Algorithm: an Application in Anomaly
Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares its learning with a centralized model (server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg) which aims to control weights communication and aggregation augmented with a tailored learning algorithm to personalize the resulting models at each client.
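As background, the standard FedAvg step that PC-FedAvg builds on can be sketched as: each client runs local gradient steps starting from the current global model, and the server averages the resulting local models weighted by local dataset size. This is generic FedAvg on a toy 1-D linear model, not the paper's PC-FedAvg personalization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each client holds a private dataset for a shared 1-D linear model y = w * x.
true_w = 2.0
clients = []
for n in (50, 100, 150):                      # unequal local dataset sizes
    x = rng.normal(size=n)
    clients.append((x, true_w * x + rng.normal(scale=0.1, size=n)))

w_global = 0.0
for _ in range(5):                            # communication rounds
    local_ws, sizes = [], []
    for x, t in clients:
        w = w_global                          # start from the global model
        for _ in range(10):                   # local full-batch gradient steps
            w -= 0.1 * 2 * np.mean((w * x - t) * x)
        local_ws.append(w)
        sizes.append(len(x))
    # Server step: average local models weighted by local dataset size.
    w_global = float(np.average(local_ws, weights=sizes))

print("global weight:", round(w_global, 3))
```

Personalized variants like PC-FedAvg then adjust how these communicated weights are aggregated or adapted per client, rather than forcing every client to adopt the single averaged model unchanged.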
arXiv Detail & Related papers (2021-11-04T04:57:11Z) - Multi-Branch Deep Radial Basis Function Networks for Facial Emotion
Recognition [80.35852245488043]
We propose a CNN based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
arXiv Detail & Related papers (2021-09-07T21:05:56Z) - Multi-site fMRI Analysis Using Privacy-preserving Federated Learning and
Domain Adaptation: ABIDE Results [13.615292855384729]
To train a high-quality deep learning model, the aggregation of a significant amount of patient information is required.
Due to the need to protect the privacy of patient data, it is hard to assemble a central database from multiple institutions.
Federated learning allows for population-level models to be trained without centralizing entities' data.
arXiv Detail & Related papers (2020-01-16T04:49:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.