Deep learning empowered sensor fusion boosts infant movement classification
- URL: http://arxiv.org/abs/2406.09014v5
- Date: Thu, 07 Nov 2024 15:41:04 GMT
- Title: Deep learning empowered sensor fusion boosts infant movement classification
- Authors: Tomas Kulvicius, Dajie Zhang, Luise Poustka, Sven Bölte, Lennart Jahn, Sarah Flügge, Marc Kraft, Markus Zweckstetter, Karin Nielsen-Saines, Florentin Wörgötter, Peter B Marschik,
- Abstract summary: We propose a sensor fusion approach for assessing fidgety movements (FMs).
Various combinations and two sensor fusion approaches were tested to evaluate whether a multi-sensor system outperforms single modality assessments.
The performance of the three-sensor fusion (classification accuracy of 94.5%) was significantly higher than that of any single modality evaluated.
- Score: 2.5114056348393197
- License:
- Abstract: To assess the integrity of the developing nervous system, the Prechtl general movement assessment (GMA) is recognized for its clinical value in diagnosing neurological impairments in early infancy. GMA has been increasingly augmented through machine learning approaches aiming to scale up its application, circumvent costs in the training of human assessors and further standardize classification of spontaneous motor patterns. Available deep learning tools, all of which are based on single sensor modalities, however, still perform considerably worse than well-trained human assessors. These approaches are also hardly comparable with one another, as all models are designed, trained and evaluated on proprietary/silo data sets. With this study we propose a sensor fusion approach for assessing fidgety movements (FMs). FMs were recorded from 51 typically developing participants. We compared three different sensor modalities (pressure, inertial, and visual sensors). Various combinations and two sensor fusion approaches (late and early fusion) for infant movement classification were tested to evaluate whether a multi-sensor system outperforms single modality assessments. Convolutional neural network (CNN) architectures were used to classify movement patterns. The performance of the three-sensor fusion (classification accuracy of 94.5%) was significantly higher than that of any single modality evaluated. We show that the sensor fusion approach is a promising avenue for automated classification of infant motor patterns. The development of a robust sensor fusion system may significantly enhance AI-based early recognition of neurofunctions, ultimately facilitating automated early detection of neurodevelopmental conditions.
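For illustration, below is a minimal sketch of the two fusion strategies named in the abstract: early fusion (concatenating the sensor streams before a single shared CNN) and late fusion (one CNN branch per modality, with embeddings merged before the classifier). All class names, input shapes, channel counts, and layer sizes are hypothetical assumptions made for this sketch; they are not the architecture or preprocessing used in the paper.

```python
# Minimal PyTorch sketch of early vs. late sensor fusion for three
# modalities (pressure mat, inertial, visual/pose features).
# Shapes and channel counts are illustrative assumptions only.
import torch
import torch.nn as nn

class Branch1D(nn.Module):
    """Small 1D-CNN encoder for one time-series modality."""
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # -> (B, feat_dim, 1)
            nn.Flatten(),              # -> (B, feat_dim)
        )

    def forward(self, x):
        return self.net(x)

class EarlyFusionCNN(nn.Module):
    """Early fusion: concatenate raw channels, then one shared CNN."""
    def __init__(self, channels=(64, 6, 34), n_classes=2):
        super().__init__()
        self.encoder = Branch1D(sum(channels))
        self.head = nn.Linear(64, n_classes)

    def forward(self, pressure, inertial, visual):
        x = torch.cat([pressure, inertial, visual], dim=1)  # channel-wise concat
        return self.head(self.encoder(x))

class LateFusionCNN(nn.Module):
    """Late fusion: one CNN branch per modality, embeddings fused at the end."""
    def __init__(self, channels=(64, 6, 34), n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList([Branch1D(c) for c in channels])
        self.head = nn.Linear(64 * len(channels), n_classes)

    def forward(self, pressure, inertial, visual):
        feats = [b(x) for b, x in zip(self.branches, (pressure, inertial, visual))]
        return self.head(torch.cat(feats, dim=1))

if __name__ == "__main__":
    # Example batch: 8 windows, 500 time steps per sensor stream (assumed).
    p = torch.randn(8, 64, 500)   # pressure-mat channels (assumed)
    i = torch.randn(8, 6, 500)    # IMU: 3-axis accel + gyro (assumed)
    v = torch.randn(8, 34, 500)   # pose-keypoint coordinates (assumed)
    print(EarlyFusionCNN()(p, i, v).shape)  # torch.Size([8, 2])
    print(LateFusionCNN()(p, i, v).shape)   # torch.Size([8, 2])
```

A single-modality baseline would simply use one branch plus a classifier head; keeping both fusion variants behind the same interface makes it straightforward to compare them against such baselines, which is the kind of comparison the abstract describes.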
Related papers
- ESDS: AI-Powered Early Stunting Detection and Monitoring System using Edited Radius-SMOTE Algorithm [1.6874375111244329]
Stunting detection is a significant issue in Indonesian healthcare.
In regions with a high prevalence of stunting, identifying children in need of treatment is critical.
The diagnostic process often raises challenges, such as a lack of experience among medical workers.
This paper employs machine learning for stunting detection based on sensor readings.
arXiv Detail & Related papers (2024-09-21T11:15:13Z) - Artificial Neural Networks-based Real-time Classification of ENG Signals for Implanted Nerve Interfaces [7.335832236913667]
We explore four types of artificial neural networks (ANNs) to extract sensory stimuli from the electroneurographic (ENG) signal measured in the sciatic nerve of rats.
Different sizes of the data sets are considered to analyze the feasibility of the investigated ANNs for real-time classification.
Our results show that some ANNs are more suitable for real-time applications, being capable of achieving accuracies over 90% for signal windows of 100 and 200 ms with a low enough processing time to be effective for pathology recovery.
arXiv Detail & Related papers (2024-03-29T15:23:30Z) - Self-similarity Prior Distillation for Unsupervised Remote Physiological Measurement [39.0083078989343]
We propose a Self-Similarity Prior Distillation (SSPD) framework for unsupervised remote physiological (rPPG) estimation.
SSPD capitalizes on the intrinsic self-similarity of cardiac activities.
It achieves performance comparable or even superior to state-of-the-art supervised methods.
arXiv Detail & Related papers (2023-11-09T02:24:51Z) - Deep convolutional neural networks for cyclic sensor data [0.0]
This study focuses on sensor-based condition monitoring and explores the application of deep learning techniques.
Our investigation involves comparing the performance of three models: a baseline model employing conventional methods, a single CNN model with early sensor fusion, and a two-lane CNN model (2L-CNN) with late sensor fusion.
arXiv Detail & Related papers (2023-08-14T07:51:15Z) - A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z) - Infant movement classification through pressure distribution analysis [2.18942830965993]
We proposed an innovative non-intrusive approach using a pressure sensing device to classify infant general movements (GMs).
We tested the feasibility of using pressure data to differentiate typical GM patterns of the "fidgety period" (i.e., fidgety movements) vs. the "pre-fidgety period" (i.e., writhing movements).
arXiv Detail & Related papers (2022-07-26T16:14:19Z) - Neuro-BERT: Rethinking Masked Autoencoding for Self-supervised Neurological Pretraining [24.641328814546842]
We present Neuro-BERT, a self-supervised pre-training framework of neurological signals based on masked autoencoding in the Fourier domain.
We propose a novel pre-training task dubbed Fourier Inversion Prediction (FIP), which randomly masks out a portion of the input signal and then predicts the missing information.
By evaluating our method on several benchmark datasets, we show that Neuro-BERT improves downstream neurological-related tasks by a large margin.
arXiv Detail & Related papers (2022-04-20T16:48:18Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
But deep learning models can be very difficult to debug when they do not work as intended.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis [84.7287684402508]
Current deep learning approaches for multimodal fusion rely on bottom-up fusion of high and mid-level latent modality representations.
Models of human perception highlight the importance of top-down fusion, where high-level representations affect the way sensory inputs are perceived.
We propose a neural architecture that captures top-down cross-modal interactions, using a feedback mechanism in the forward pass during network training.
arXiv Detail & Related papers (2022-01-24T17:48:04Z) - Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that machines be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)