Movement-Specific Analysis for FIM Score Classification Using Spatio-Temporal Deep Learning
- URL: http://arxiv.org/abs/2511.10713v1
- Date: Thu, 13 Nov 2025 09:54:32 GMT
- Title: Movement-Specific Analysis for FIM Score Classification Using Spatio-Temporal Deep Learning
- Authors: Jun Masaki, Ariaki Higashi, Naoko Shinagawa, Kazuhiko Hirata, Yuichi Kurita, Akira Furui
- Abstract summary: The functional independence measure (FIM) is widely used to evaluate patients' physical independence in activities of daily living. We propose an automated FIM score estimation method that utilizes simple exercises different from the designated FIM assessment actions.
- Score: 0.7388859384645262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The functional independence measure (FIM) is widely used to evaluate patients' physical independence in activities of daily living. However, traditional FIM assessment imposes a significant burden on both patients and healthcare professionals. To address this challenge, we propose an automated FIM score estimation method that utilizes simple exercises different from the designated FIM assessment actions. Our approach employs a deep neural network architecture integrating a spatial-temporal graph convolutional network (ST-GCN), bidirectional long short-term memory (BiLSTM), and an attention mechanism to estimate FIM motor item scores. The model effectively captures long-term temporal dependencies and identifies key body-joint contributions through learned attention weights. We evaluated our method in a study of 277 rehabilitation patients, focusing on FIM transfer and locomotion items. Our approach successfully distinguishes between completely independent patients and those requiring assistance, achieving balanced accuracies of 70.09-78.79% across different FIM items. Additionally, our analysis reveals specific movement patterns that serve as reliable predictors for particular FIM evaluation items.
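The abstract describes learned attention weights that identify which body joints contribute most to a FIM score prediction. As a rough illustration of that idea only, the following sketch shows attention-weighted pooling over per-joint feature vectors in plain Python; the function names, feature dimensions, and toy scores are hypothetical and are not taken from the paper, whose actual model uses an ST-GCN and BiLSTM upstream of the attention layer.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(joint_features, joint_scores):
    """Aggregate per-joint feature vectors into one summary vector,
    weighting each joint by its (softmax-normalized) attention score.
    Returns the pooled vector and the weights, which indicate each
    joint's relative contribution to the prediction."""
    weights = softmax(joint_scores)
    dim = len(joint_features[0])
    pooled = [0.0] * dim
    for w, feat in zip(weights, joint_features):
        for i, x in enumerate(feat):
            pooled[i] += w * x
    return pooled, weights

# Toy example: three joints with 2-D features; the first joint
# receives the highest (hypothetical) attention score.
features = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
scores = [2.0, 0.1, 0.1]  # in a real model, produced by a learned scoring layer
pooled, weights = attention_pool(features, scores)
```

In a trained model the `weights` vector is what supports the paper's interpretability claim: joints with consistently high attention weights for a given FIM item are candidate movement-pattern predictors for that item.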
Related papers
- Estimation of Head Motion in Structural MRI and its Impact on Cortical Thickness Measurements in Retrospective Data [4.072070248526498]
Motion-related artifacts are inevitable in Magnetic Resonance Imaging (MRI). These artifacts can bias automated neuroanatomical metrics such as cortical thickness. Here, we train a 3D convolutional neural network to estimate a summary motion metric in retrospective routine research scans.
arXiv Detail & Related papers (2025-05-29T18:08:59Z) - Early Detection of Multidrug Resistance Using Multivariate Time Series Analysis and Interpretable Patient-Similarity Representations [8.062368743143388]
Multidrug Resistance (MDR) is a critical global health issue, causing increased hospital stays, healthcare costs, and mortality. This study proposes an interpretable Machine Learning framework for MDR prediction, aiming for both accurate inference and enhanced explainability.
arXiv Detail & Related papers (2025-04-24T16:19:13Z) - AnesSuite: A Comprehensive Benchmark and Dataset Suite for Anesthesiology Reasoning in LLMs [62.60333833486799]
AnesSuite is the first dataset suite specifically designed for anesthesiology reasoning in LLMs. Morpheus is the first baseline model collection for anesthesiology reasoning.
arXiv Detail & Related papers (2025-04-03T08:54:23Z) - Validation of Human Pose Estimation and Human Mesh Recovery for Extracting Clinically Relevant Motion Data from Videos [79.62407455005561]
Marker-less motion capture using human pose estimation produces results in line with those of both IMU and MoCap kinematics. While there is still room for improvement in the quality of the data produced, we believe this compromise falls within an acceptable margin of error.
arXiv Detail & Related papers (2025-03-18T22:18:33Z) - MELON: Multimodal Mixture-of-Experts with Spectral-Temporal Fusion for Long-Term Mobility Estimation in Critical Care [1.5237145555729716]
We introduce MELON, a novel framework designed to predict 12-hour mobility status in the critical care setting. We trained and evaluated the MELON model on a multimodal dataset of 126 patients recruited from nine Intensive Care Units at the University of Florida Health Shands Hospital main campus in Gainesville, Florida. Results showed that MELON outperforms conventional approaches for 12-hour mobility status estimation.
arXiv Detail & Related papers (2025-03-10T19:47:46Z) - CTPD: Cross-Modal Temporal Pattern Discovery for Enhanced Multimodal Electronic Health Records Analysis [50.56875995511431]
We introduce a Cross-Modal Temporal Pattern Discovery (CTPD) framework, designed to efficiently extract meaningful cross-modal temporal patterns from multimodal EHR data. Our approach introduces shared initial temporal pattern representations which are refined using slot attention to generate temporal semantic embeddings.
arXiv Detail & Related papers (2024-11-01T15:54:07Z) - MS-MANO: Enabling Hand Pose Tracking with Biomechanical Constraints [50.61346764110482]
We integrate a musculoskeletal system with a learnable parametric hand model, MANO, to create MS-MANO.
This model emulates the dynamics of muscles and tendons to drive the skeletal system, imposing physiologically realistic constraints on the resulting torque trajectories.
We also propose a simulation-in-the-loop pose refinement framework, BioPR, that refines the initial estimated pose through a multi-layer perceptron network.
arXiv Detail & Related papers (2024-04-16T02:18:18Z) - MR-STGN: Multi-Residual Spatio Temporal Graph Network Using Attention Fusion for Patient Action Assessment [0.30693357740321775]
We propose an automated approach for patient action assessment using a Multi-Residual Spatio Temporal Graph Network (MR-STGN). The MR-STGN is specifically designed to capture the dynamics of patient actions. We evaluate our model on the UI-PRMD dataset, demonstrating its performance in accurately predicting real-time patient action scores.
arXiv Detail & Related papers (2023-12-21T01:09:52Z) - Multimodal Pretraining of Medical Time Series and Notes [45.89025874396911]
Deep learning models show promise in extracting meaningful patterns, but they require extensive labeled data.
We propose a novel approach employing self-supervised pretraining, focusing on the alignment of clinical measurements and notes.
In downstream tasks, including in-hospital mortality prediction and phenotyping, our model outperforms baselines in settings where only a fraction of the data is labeled.
arXiv Detail & Related papers (2023-12-11T21:53:40Z) - MS Lesion Segmentation: Revisiting Weighting Mechanisms for Federated Learning [92.91544082745196]
Federated learning (FL) has been widely employed for medical image analysis.
FL's performance is limited for multiple sclerosis (MS) lesion segmentation tasks.
We propose the first FL MS lesion segmentation framework via two effective re-weighting mechanisms.
arXiv Detail & Related papers (2022-05-03T14:06:03Z) - Self-Attention Neural Bag-of-Features [103.70855797025689]
We build on the recently introduced 2D-Attention and reformulate the attention learning methodology.
We propose a joint feature-temporal attention mechanism that learns a joint 2D attention mask highlighting relevant information.
arXiv Detail & Related papers (2022-01-26T17:54:14Z) - Continuous Decoding of Daily-Life Hand Movements from Forearm Muscle Activity for Enhanced Myoelectric Control of Hand Prostheses [78.120734120667]
We introduce a novel method, based on a long short-term memory (LSTM) network, to continuously map forearm EMG activity onto hand kinematics.
Ours is the first reported work on the prediction of hand kinematics that uses this challenging dataset.
Our results suggest that the presented method is suitable for the generation of control signals for the independent and proportional actuation of the multiple DOFs of state-of-the-art hand prostheses.
arXiv Detail & Related papers (2021-04-29T00:11:32Z) - Mapping individual differences in cortical architecture using multi-view representation learning [0.0]
We introduce a novel machine learning method which allows combining the activation-and connectivity-based information respectively measured through task-fMRI and resting-state fMRI.
It combines a multi-view deep autoencoder which is designed to fuse the two fMRI modalities into a joint representation space within which a predictive model is trained to guess a scalar score that characterizes the patient.
arXiv Detail & Related papers (2020-04-01T09:01:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.