Deep learning-based approaches for human motion decoding in smart
walkers for rehabilitation
- URL: http://arxiv.org/abs/2301.05575v1
- Date: Fri, 13 Jan 2023 14:29:44 GMT
- Title: Deep learning-based approaches for human motion decoding in smart
walkers for rehabilitation
- Authors: Carolina Gonçalves, João M. Lopes, Sara Moccia, Daniele
Berardini, Lucia Migliorelli, and Cristina P. Santos
- Abstract summary: Smart walkers should be able to decode human motion and needs as early as possible.
Current walkers decode motion intention using information from wearable or embedded sensors.
A contactless approach is proposed, addressing human motion decoding as an early action recognition/detection problem.
- Score: 3.8791511769387634
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gait disabilities are among the most frequent worldwide. Their treatment
relies on rehabilitation therapies, in which smart walkers are being introduced
to empower the user's recovery and autonomy while reducing the clinicians'
effort. For that, these devices should be able to decode human motion and
needs as early as possible. Current walkers decode motion intention using
information from wearable or embedded sensors, namely inertial units, force
and hall sensors, and lasers, whose main limitations are either an expensive
solution or a hindered perception of human movement. Smart walkers commonly
lack a seamless human-robot interaction that intuitively understands human
motion. This work proposes a contactless approach, addressing human motion
decoding as an early action recognition/detection problem, using RGB-D
cameras. We studied different deep learning-based algorithms, organised into
three approaches, to process lower-body RGB-D video sequences, recorded from a
camera embedded in a smart walker, and classify them into 4 classes (stop,
walk, turn right/left). A custom dataset involving 15 healthy participants
walking with the device was acquired and prepared, resulting in 28800 balanced
RGB-D frames, to train and evaluate the deep networks. The best results were
attained by a convolutional neural network with a channel attention mechanism,
reaching accuracy values of 99.61% and above 93% for offline early
detection/recognition and trial simulations, respectively. Following the
hypothesis that human lower-body features encode prominent information,
fostering a more robust prediction towards real-time applications, the
algorithms' focus was also evaluated using the Dice metric, leading to values
slightly above 30%. Promising results were attained for early action detection
as a human motion decoding strategy, with enhancements in the focus of the
proposed architectures.
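The abstract does not detail the exact architecture, but the channel attention mechanism it credits for the best results can be illustrated with a minimal squeeze-and-excitation-style sketch on a 4-channel (RGB-D) feature map. All shapes, weights, and the `channel_attention` helper below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    feats: (C, H, W) feature map, e.g. C=4 for an RGB-D frame.
    w1: (C//r, C) and w2: (C, C//r) bottleneck weights (random here,
    learned in a real network).
    Returns the feature map reweighted per channel.
    """
    # Squeeze: global average pooling collapses each channel to a scalar
    z = feats.mean(axis=(1, 2))                  # shape (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gating
    h = np.maximum(w1 @ z, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))          # per-channel weights in (0, 1)
    # Scale: broadcast the channel weights over the spatial dimensions
    return feats * s[:, None, None]

# Toy RGB-D frame: 4 channels (R, G, B, depth), 8x8 spatial resolution
rng = np.random.default_rng(0)
x = rng.random((4, 8, 8))
w1 = rng.standard_normal((2, 4))                 # reduction ratio r = 2
w2 = rng.standard_normal((4, 2))
y = channel_attention(x, w1, w2)
assert y.shape == x.shape
```

Since the sigmoid gate lies in (0, 1), each output channel is a damped copy of its input; in a trained network, the gate learns which channels (e.g. depth vs. colour) are most informative for the stop/walk/turn classes.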
Related papers
- COIN: Control-Inpainting Diffusion Prior for Human and Camera Motion Estimation [98.05046790227561]
COIN is a control-inpainting motion diffusion prior that enables fine-grained control to disentangle human and camera motions.
COIN outperforms the state-of-the-art methods in terms of global human motion estimation and camera motion estimation.
arXiv Detail & Related papers (2024-08-29T10:36:29Z) - Aligning Human Motion Generation with Human Perceptions [51.831338643012444]
We propose a data-driven approach to bridge the gap by introducing a large-scale human perceptual evaluation dataset, MotionPercept, and a human motion critic model, MotionCritic.
Our critic model offers a more accurate metric for assessing motion quality and could be readily integrated into the motion generation pipeline.
arXiv Detail & Related papers (2024-07-02T14:01:59Z) - Conformalized Teleoperation: Confidently Mapping Human Inputs to High-Dimensional Robot Actions [4.855534476454559]
We learn a mapping from low-dimensional human inputs to high-dimensional robot actions.
Our key idea is to adapt the assistive map at training time to additionally estimate high-dimensional action quantiles.
We propose an uncertainty-interval-based mechanism for detecting high-uncertainty user inputs and robot states.
arXiv Detail & Related papers (2024-06-11T23:16:46Z) - A Real-time Human Pose Estimation Approach for Optimal Sensor Placement
in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z) - Decomposed Human Motion Prior for Video Pose Estimation via Adversarial
Training [7.861513525154702]
We propose to decompose holistic motion prior to joint motion prior, making it easier for neural networks to learn from prior knowledge.
We also utilize a novel regularization loss to balance accuracy and smoothness introduced by motion prior.
Our method achieves 9% lower PA-MPJPE and 29% lower acceleration error than previous methods tested on 3DPW.
arXiv Detail & Related papers (2023-05-30T04:53:34Z) - Differentiable Frequency-based Disentanglement for Aerial Video Action
Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z) - Incremental Learning Techniques for Online Human Activity Recognition [0.0]
We propose a human activity recognition (HAR) approach for the online prediction of physical movements.
We develop a HAR system containing monitoring software and a mobile application that collects accelerometer and gyroscope data.
Six incremental learning algorithms are employed and evaluated in this work and compared with several batch learning algorithms commonly used for developing offline HAR systems.
arXiv Detail & Related papers (2021-09-20T11:33:09Z) - Contact-Aware Retargeting of Skinned Motion [49.71236739408685]
This paper introduces a motion estimation method that preserves self-contacts and prevents interpenetration.
The method identifies self-contacts and ground contacts in the input motion and optimizes the motion to apply it to the output skeleton.
In experiments, our results quantitatively outperform previous methods and we conduct a user study where our retargeted motions are rated as higher-quality than those produced by recent works.
arXiv Detail & Related papers (2021-09-15T17:05:02Z) - Real-Time Human Pose Estimation on a Smart Walker using Convolutional
Neural Networks [4.076099054649463]
We present a novel approach to patient monitoring and data-driven human-in-the-loop control in the context of smart walkers.
It is able to extract a complete and compact body representation in real-time and from inexpensive sensors.
Despite promising results, more data should be collected on users with impairments to assess its performance as a rehabilitation tool in real-world scenarios.
arXiv Detail & Related papers (2021-06-28T14:11:48Z) - Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z) - Deep Learning of Movement Intent and Reaction Time for EEG-informed
Adaptation of Rehabilitation Robots [0.0]
Adaptation is a crucial mechanism for rehabilitation robots in promoting motor learning.
We propose a deep convolutional neural network (CNN) that uses electroencephalography (EEG) as an objective measurement of two kinematics components.
Our results demonstrate how individual movement components implicated in distinct types of motor learning can be predicted from synchronized EEG data.
arXiv Detail & Related papers (2020-02-18T13:20:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.