MELON: Multimodal Mixture-of-Experts with Spectral-Temporal Fusion for Long-Term Mobility Estimation in Critical Care
- URL: http://arxiv.org/abs/2503.11695v1
- Date: Mon, 10 Mar 2025 19:47:46 GMT
- Title: MELON: Multimodal Mixture-of-Experts with Spectral-Temporal Fusion for Long-Term Mobility Estimation in Critical Care
- Authors: Jiaqing Zhang, Miguel Contreras, Jessica Sena, Andrea Davidson, Yuanfang Ren, Ziyuan Guan, Tezcan Ozrazgat-Baslanti, Tyler J. Loftus, Subhash Nerella, Azra Bihorac, Parisa Rashidi
- Abstract summary: We introduce MELON, a novel framework designed to predict 12-hour mobility status in the critical care setting. We trained and evaluated the MELON model on a multimodal dataset of 126 patients recruited from nine Intensive Care Units at the University of Florida Health Shands Hospital main campus in Gainesville, Florida. Results showed that MELON outperforms conventional approaches for 12-hour mobility status estimation.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Patient mobility monitoring in intensive care is critical for ensuring timely interventions and improving clinical outcomes. While accelerometry-based sensor data are widely adopted in training artificial intelligence models to estimate patient mobility, existing approaches face two key limitations highlighted in clinical practice: (1) modeling long-term accelerometer data is challenging due to its high dimensionality, variability, and noise, and (2) efficient and robust methods for long-term mobility assessment are lacking. To overcome these challenges, we introduce MELON, a novel multimodal framework designed to predict 12-hour mobility status in the critical care setting. MELON leverages a dual-branch network architecture, combining the strengths of spectrogram-based visual representations and sequential accelerometer statistical features. MELON effectively captures global and fine-grained mobility patterns by integrating a pre-trained image encoder for rich frequency-domain feature extraction and a Mixture-of-Experts encoder for sequence modeling. We trained and evaluated MELON on a multimodal dataset of 126 patients recruited from nine Intensive Care Units at the University of Florida Health Shands Hospital main campus in Gainesville, Florida. Experiments showed that MELON outperforms conventional approaches for 12-hour mobility status estimation with an overall area under the receiver operating characteristic curve (AUROC) of 0.82 (95% confidence interval 0.78-0.86). Notably, our experiments also revealed that accelerometer data collected from the wrist provide robust predictive performance compared with data from the ankle, suggesting a single-sensor solution that can reduce patient burden and lower deployment costs...
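The Mixture-of-Experts encoder mentioned in the abstract relies on a gating network that softly routes each input among several experts. The sketch below is a generic softmax-gated mixture over toy scalar experts, not the paper's actual model; the function names, gate weights, and experts are all illustrative assumptions.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def moe_forward(x, experts, gate_weights):
    """Mixture-of-Experts forward pass: a linear gate scores each
    expert, the scores are softmax-normalized, and the output is the
    weighted sum of the experts' outputs."""
    logits = [sum(w * v for w, v in zip(gw, x)) for gw in gate_weights]
    probs = softmax(logits)
    outputs = [expert(x) for expert in experts]
    return sum(p * o for p, o in zip(probs, outputs))

# Toy example: two scalar "experts" over a 2-d statistical feature vector.
experts = [lambda x: sum(x), lambda x: max(x)]
gate_weights = [[1.0, 0.0], [0.0, 1.0]]
y = moe_forward([0.5, 1.5], experts, gate_weights)
```

In practice each expert would be a small neural sub-network over the accelerometer feature sequence, but the gating arithmetic is the same.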
Related papers
- Two-Stage Representation Learning for Analyzing Movement Behavior Dynamics in People Living with Dementia [44.39545678576284]
This study analyzes home activity data from individuals living with dementia by proposing a two-stage, self-supervised learning approach. The first stage converts time-series activities into text sequences encoded by a pre-trained language model. The second stage derives a PageRank vector that captures latent state transitions, effectively compressing complex behaviour data into a succinct form.
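The PageRank compression described above can be sketched as a plain power iteration over a behaviour-state transition matrix. The function and the toy 3-state matrix below are illustrative, not taken from the paper.

```python
def pagerank(transitions, damping=0.85, iters=100):
    """Power-iteration PageRank over a row-stochastic transition
    matrix (list of rows); returns the stationary score vector."""
    n = len(transitions)
    r = [1.0 / n] * n
    for _ in range(iters):
        r = [(1 - damping) / n + damping *
             sum(r[j] * transitions[j][i] for j in range(n))
             for i in range(n)]
    return r

# Toy 3-state behaviour transition matrix (rows sum to 1):
# states 0 and 2 always move to state 1, which splits its mass.
T = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
scores = pagerank(T)
```

The resulting score vector is a fixed-length summary of arbitrarily long behaviour sequences, which is what makes it usable as a compact downstream feature.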
arXiv Detail & Related papers (2025-02-13T10:57:25Z) - IoT-Based Real-Time Medical-Related Human Activity Recognition Using Skeletons and Multi-Stage Deep Learning for Healthcare [1.5236380958983642]
The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. However, Human Motion Recognition (HMR) challenges such as high computational demands, low accuracy, and limited adaptability persist. This study proposes a novel HMR method for detecting medical-related human activities (MRHA), leveraging multi-stage deep learning techniques integrated with IoT.
arXiv Detail & Related papers (2025-01-13T03:41:57Z) - MANGO: Multimodal Acuity traNsformer for intelliGent ICU Outcomes [11.385654412265461]
We present MANGO: the Multimodal Acuity traNsformer for intelliGent ICU outcomes. It is designed to enhance the prediction of patient acuity states, transitions, and the need for life-sustaining therapy.
arXiv Detail & Related papers (2024-12-13T23:51:15Z) - Deep Learning for Motion Classification in Ankle Exoskeletons Using Surface EMG and IMU Signals [0.8388591755871735]
Ankle exoskeletons have garnered considerable interest for their potential to enhance mobility and reduce fall risks.
This paper presents a novel motion prediction framework that integrates three Inertial Measurement Units (IMUs) and eight surface Electromyography (sEMG) sensors.
Our findings reveal that Convolutional Neural Networks (CNNs) slightly outperform Long Short-Term Memory (LSTM) networks on a dataset of five motion tasks.
arXiv Detail & Related papers (2024-11-25T10:51:40Z) - Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z) - Scalable Drift Monitoring in Medical Imaging AI [37.1899538374058]
We develop MMC+, an enhanced framework for scalable drift monitoring.
It builds upon the CheXstray framework that introduced real-time drift detection for medical imaging AI models.
MMC+ offers a reliable and cost-effective alternative to continuous performance monitoring.
arXiv Detail & Related papers (2024-10-17T02:57:35Z) - Machine Learning for ALSFRS-R Score Prediction: Making Sense of the Sensor Data [44.99833362998488]
Amyotrophic Lateral Sclerosis (ALS) is a rapidly progressive neurodegenerative disease that presents individuals with limited treatment options.
The present investigation, spearheaded by the iDPP@CLEF 2024 challenge, focuses on utilizing sensor-derived data obtained through an app.
arXiv Detail & Related papers (2024-07-10T19:17:23Z) - L-SFAN: Lightweight Spatially-focused Attention Network for Pain Behavior Detection [44.016805074560295]
Chronic Low Back Pain (CLBP) afflicts millions globally, significantly impacting individuals' well-being and imposing economic burdens on healthcare systems.
While artificial intelligence (AI) and deep learning offer promising avenues for analyzing pain-related behaviors to improve rehabilitation strategies, current models, including convolutional neural networks (CNNs), have limitations.
We introduce L-SFAN, a lightweight CNN architecture incorporating 2D filters designed to capture the spatial-temporal interplay of data from motion capture and surface electromyography sensors.
arXiv Detail & Related papers (2024-06-07T12:01:37Z) - Temporally-Consistent Koopman Autoencoders for Forecasting Dynamical Systems [38.36312939874359]
We introduce the Temporally-Consistent Koopman Autoencoder (tcKAE). tcKAE generates accurate long-term predictions even with limited and noisy training data. We demonstrate tcKAE's superior performance over state-of-the-art KAE models across a variety of test cases.
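The Koopman idea underlying tcKAE, that a learned latent space evolves under a fixed linear operator, can be shown in one dimension. This sketch fits a scalar operator by least squares and rolls it out for a long-horizon forecast; the autoencoder and the temporal-consistency regularizer of tcKAE are omitted, and all names here are illustrative.

```python
def fit_koopman_1d(z):
    """Least-squares fit of a scalar linear operator K such that
    z[t+1] ≈ K * z[t] -- the Koopman assumption restricted to a
    1-d latent trajectory (closed form for the scalar case)."""
    num = sum(a * b for a, b in zip(z[1:], z[:-1]))
    den = sum(a * a for a in z[:-1])
    return num / den

def rollout(z0, K, steps):
    """Long-horizon forecast by repeated application of K."""
    out = [z0]
    for _ in range(steps):
        out.append(out[-1] * K)
    return out

# Latent trajectory generated by an exact linear map z[t+1] = 0.9 * z[t].
z = [1.0]
for _ in range(10):
    z.append(z[-1] * 0.9)
K = fit_koopman_1d(z)        # recovers the operator 0.9
pred = rollout(z[0], K, 10)  # multi-step forecast from the initial state
```

Because the forecast is just repeated multiplication by K, any inconsistency in the operator compounds over the horizon, which is exactly what a temporal-consistency penalty is meant to control.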
arXiv Detail & Related papers (2024-03-19T00:48:25Z) - Multimodal Pretraining of Medical Time Series and Notes [45.89025874396911]
Deep learning models show promise in extracting meaningful patterns, but they require extensive labeled data.
We propose a novel approach employing self-supervised pretraining, focusing on the alignment of clinical measurements and notes.
In downstream tasks, including in-hospital mortality prediction and phenotyping, our model outperforms baselines in settings where only a fraction of the data is labeled.
arXiv Detail & Related papers (2023-12-11T21:53:40Z) - The Potential of Wearable Sensors for Assessing Patient Acuity in Intensive Care Unit (ICU) [12.359907390320453]
Acuity assessments are vital in critical care settings to provide timely interventions and fair resource allocation.
Traditional acuity scores do not incorporate granular information such as patients' mobility level, which can indicate recovery or deterioration in the ICU.
In this study, we evaluated the impact of integrating mobility data collected from wrist-worn accelerometers with clinical data obtained from EHR for developing an AI-driven acuity assessment score.
arXiv Detail & Related papers (2023-11-03T21:52:05Z) - Inertial Hallucinations -- When Wearable Inertial Devices Start Seeing Things [82.15959827765325]
We propose a novel approach to multimodal sensor fusion for Ambient Assisted Living (AAL).
We address two major shortcomings of standard multimodal approaches, limited area coverage and reduced reliability.
Our new framework fuses the concept of modality hallucination with triplet learning to train a model with different modalities to handle missing sensors at inference time.
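Triplet learning, which this framework combines with modality hallucination, rests on a margin loss over embedding distances. A minimal Euclidean version, with made-up toy embeddings, might look like:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: pull the anchor toward the positive
    embedding and push it away from the negative embedding by at
    least `margin` (Euclidean distance, hinge at zero)."""
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)

# Embeddings of the same activity seen by two modalities (anchor,
# positive) versus a different activity (negative) -- toy values.
loss = triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0])
```

When the positive already sits much closer than the negative (as above), the hinge clamps the loss to zero; embeddings that violate the margin contribute a positive loss and get pulled apart during training.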
arXiv Detail & Related papers (2022-07-14T10:04:18Z) - Robust and Efficient Medical Imaging with Self-Supervision [80.62711706785834]
We present REMEDIS, a unified representation learning strategy to improve robustness and data-efficiency of medical imaging AI.
We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data.
arXiv Detail & Related papers (2022-05-19T17:34:18Z) - MIA-Prognosis: A Deep Learning Framework to Predict Therapy Response [58.0291320452122]
This paper aims at a unified deep learning approach to predict patient prognosis and therapy response.
We formalize the prognosis modeling as a multi-modal asynchronous time series classification task.
Our predictive model could further stratify low-risk and high-risk patients in terms of long-term survival.
arXiv Detail & Related papers (2020-10-08T15:30:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.