Deep Learning for Human Locomotion Analysis in Lower-Limb Exoskeletons: A Comparative Study
- URL: http://arxiv.org/abs/2503.16904v1
- Date: Fri, 21 Mar 2025 07:12:44 GMT
- Title: Deep Learning for Human Locomotion Analysis in Lower-Limb Exoskeletons: A Comparative Study
- Authors: Omar Coser, Christian Tamantini, Matteo Tortora, Leonardo Furia, Rosa Sicilia, Loredana Zollo, Paolo Soda
- Abstract summary: This paper presents an experimental comparison between eight deep neural network backbones to predict high-level locomotion parameters. The LSTM achieved high terrain classification accuracy (0.94 ± 0.04) and precise ramp slope estimation (1.95 ± 0.58°), while the CNN-LSTM provided accurate stair height estimation (15.65 ± 7.40 mm). The system operates with ~2 ms inference time, supporting real-time applications.
- Score: 1.3569491184708433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wearable robotics for lower-limb assistance have become a pivotal area of research, aiming to enhance mobility for individuals with physical impairments or to augment the performance of able-bodied users. Accurate and adaptive control systems are essential to ensure seamless interaction between the wearer and the robotic device, particularly when navigating diverse and dynamic terrains. Despite recent advances in neural networks for time-series analysis, no attempts have been directed towards classifying ground conditions into five classes and subsequently estimating the ramp slope and stair height. In this respect, this paper presents an experimental comparison between eight deep neural network backbones for predicting high-level locomotion parameters across diverse terrains. All models are trained on the publicly available CAMARGO 2021 dataset. IMU-only data matched or outperformed IMU+EMG inputs, promoting a cost-effective and efficient design. Indeed, using three IMU sensors, the LSTM achieved high terrain classification accuracy (0.94 ± 0.04) and precise ramp slope estimation (1.95 ± 0.58°), while the CNN-LSTM provided accurate stair height estimation (15.65 ± 7.40 mm). As a further contribution, SHAP analysis justified sensor reduction without performance loss, ensuring a lightweight setup. The system operates with ~2 ms inference time, supporting real-time applications. The code is available at https://github.com/cosbidev/Human-Locomotion-Identification.
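To make the pipeline described in the abstract more concrete, the snippet below is a minimal, hypothetical PyTorch sketch of an LSTM backbone that takes windows of IMU channels and feeds a 5-class terrain head plus regression heads for ramp slope and stair height. The window length, channel count, and layer sizes are illustrative assumptions, not the configuration released in the repository linked above.

```python
# Hypothetical sketch of an IMU-only LSTM backbone with a terrain-classification
# head and slope/height regression heads. All sizes are assumptions; see
# https://github.com/cosbidev/Human-Locomotion-Identification for the authors'
# actual implementation.
import torch
import torch.nn as nn

class LocomotionLSTM(nn.Module):
    def __init__(self, n_channels=18, hidden=128, n_terrains=5):
        super().__init__()
        # Assumed: three IMUs x (3-axis accel + 3-axis gyro) = 18 input channels.
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.terrain_head = nn.Linear(hidden, n_terrains)  # 5 ground conditions
        self.slope_head = nn.Linear(hidden, 1)              # ramp slope [deg]
        self.height_head = nn.Linear(hidden, 1)             # stair height [mm]

    def forward(self, x):
        # x: (batch, time, channels) window of IMU samples
        _, (h, _) = self.lstm(x)
        feat = h[-1]  # final hidden state of the last LSTM layer
        return self.terrain_head(feat), self.slope_head(feat), self.height_head(feat)

model = LocomotionLSTM()
window = torch.randn(8, 200, 18)  # e.g. 8 windows of 200 IMU samples (assumption)
terrain_logits, slope, height = model(window)
print(terrain_logits.shape, slope.shape, height.shape)  # (8, 5) (8, 1) (8, 1)
```

One natural way to train such a multi-task model would be a cross-entropy loss on the terrain logits plus regression losses on the slope and height outputs, applied only to ramp and stair windows respectively; the paper should be consulted for the authors' actual training setup and for the CNN-LSTM variant used for stair height.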
Related papers
- K2MUSE: A human lower limb multimodal dataset under diverse conditions for facilitating rehabilitation robotics [15.245241949892584]
The K2MUSE dataset includes a comprehensive collection of multimodal data, comprising kinematic, kinetic, amplitude-mode ultrasound (AUS), and surface electromyography (sEMG) measurements.
This dataset offers a new resource for designing control frameworks for rehabilitation robots and conducting biomechanical analyses of lower limb locomotion.
arXiv Detail & Related papers (2025-04-20T13:03:56Z) - MT-NAM: An Efficient and Adaptive Model for Epileptic Seizure Detection [51.87482627771981]
Micro Tree-based NAM (MT-NAM) is a distilled model based on the recently proposed Neural Additive Models (NAM). MT-NAM achieves a remarkable 100× improvement in inference speed compared to standard NAM, without compromising accuracy. We evaluate our approach on the CHB-MIT scalp EEG dataset, which includes recordings from 24 patients with varying numbers of sessions and seizures. (A minimal sketch of the underlying NAM structure is shown after this list.)
arXiv Detail & Related papers (2025-03-11T10:14:53Z) - Generalizable and Fast Surrogates: Model Predictive Control of Articulated Soft Robots using Physics-Informed Neural Networks [4.146337610044239]
We propose physics-informed neural networks (PINNs) for articulated soft robots (ASRs) with a focus on data efficiency. The amount of expensive real-world training data is reduced to a minimum: one dataset in one system domain. The prediction speed of an accurate FP model is improved with the PINN by up to a factor of 466 at slightly reduced accuracy.
arXiv Detail & Related papers (2025-02-04T01:16:33Z) - Deep Learning for Motion Classification in Ankle Exoskeletons Using Surface EMG and IMU Signals [0.8388591755871735]
Ankle exoskeletons have garnered considerable interest for their potential to enhance mobility and reduce fall risks.
This paper presents a novel motion prediction framework that integrates three Inertial Measurement Units (IMUs) and eight surface Electromyography (sEMG) sensors.
Our findings reveal that Convolutional Neural Networks (CNNs) slightly outperform Long Short-Term Memory (LSTM) networks on a dataset of five motion tasks.
arXiv Detail & Related papers (2024-11-25T10:51:40Z) - Comparison of gait phase detection using traditional machine learning
and deep learning techniques [3.11526333124308]
This study proposes several Machine Learning (ML) based models trained on lower-limb EMG data for human walking.
The results show up to 75% average accuracy for traditional ML models and 79% for the Deep Learning (DL) model.
arXiv Detail & Related papers (2024-03-07T10:05:09Z) - Automated classification of pre-defined movement patterns: A comparison
between GNSS and UWB technology [55.41644538483948]
Real-time location systems (RTLS) allow for collecting data from human movement patterns.
The current study aims to design and evaluate an automated framework to classify human movement patterns in small areas.
arXiv Detail & Related papers (2023-03-10T14:46:42Z) - Collaborative Learning with a Drone Orchestrator [79.75113006257872]
A swarm of intelligent wireless devices train a shared neural network model with the help of a drone.
The proposed framework achieves a significant speedup in training, leading to an average 24% and 87% saving in the drone hovering time.
arXiv Detail & Related papers (2023-03-03T23:46:25Z) - Inertial Hallucinations -- When Wearable Inertial Devices Start Seeing
Things [82.15959827765325]
We propose a novel approach to multimodal sensor fusion for Ambient Assisted Living (AAL).
We address two major shortcomings of standard multimodal approaches, limited area coverage and reduced reliability.
Our new framework fuses the concept of modality hallucination with triplet learning to train a model with different modalities to handle missing sensors at inference time.
arXiv Detail & Related papers (2022-07-14T10:04:18Z) - Neural Moving Horizon Estimation for Robust Flight Control [6.023276947115864]
Estimating and reacting to external disturbances is crucial for robust flight control of quadrotors.
We propose a neural moving horizon estimator (NeuroMHE) that can automatically tune the MHE parameters modeled by a neural network.
NeuroMHE outperforms the state-of-the-art estimator with force estimation error reductions of up to 49.4%.
arXiv Detail & Related papers (2022-06-21T13:43:24Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they do not work as intended.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach, while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z) - Deep learning-based classification of fine hand movements from low
frequency EEG [5.414308305392762]
The classification of different fine hand movements from EEG signals represents a relevant research challenge.
We trained and tested a newly proposed convolutional neural network (CNN).
The CNN achieved good performance on both datasets, similar or superior to the baseline models.
arXiv Detail & Related papers (2020-11-13T07:16:06Z)
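As referenced in the MT-NAM entry above, the sketch below shows the basic structure of a Neural Additive Model: one small subnetwork per input feature whose outputs are summed with a bias. It is a hypothetical illustration of the NAM family that MT-NAM distills, not the MT-NAM model itself, and the feature count is an assumption.

```python
# Minimal Neural Additive Model (NAM) sketch: one small MLP per input feature,
# prediction = bias + sum of per-feature contributions. Illustrative only; it
# does not reproduce the tree-based distillation used by MT-NAM.
import torch
import torch.nn as nn

class NeuralAdditiveModel(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        ])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, n_features); each feature passes through its own subnetwork
        contributions = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return self.bias + torch.cat(contributions, dim=1).sum(dim=1, keepdim=True)

nam = NeuralAdditiveModel(n_features=23)  # hypothetical number of EEG-derived features
scores = nam(torch.randn(4, 23))          # (4, 1) raw scores before a sigmoid
```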