Real-Time Human Pose Estimation on a Smart Walker using Convolutional
Neural Networks
- URL: http://arxiv.org/abs/2106.14739v1
- Date: Mon, 28 Jun 2021 14:11:48 GMT
- Title: Real-Time Human Pose Estimation on a Smart Walker using Convolutional
Neural Networks
- Authors: Manuel Palermo, Sara Moccia, Lucia Migliorelli, Emanuele Frontoni,
Cristina P. Santos
- Abstract summary: We present a novel approach to patient monitoring and data-driven human-in-the-loop control in the context of smart walkers.
It is able to extract a complete and compact body representation in real-time and from inexpensive sensors.
Despite promising results, more data should be collected on users with impairments to assess its performance as a rehabilitation tool in real-world scenarios.
- Score: 4.076099054649463
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Rehabilitation is important to improve quality of life for mobility-impaired
patients. Smart walkers are a commonly used solution that should embed
automatic and objective tools for data-driven human-in-the-loop control and
monitoring. However, present solutions focus on extracting a few specific
metrics from dedicated sensors, with no unified full-body approach. We
investigate a general, real-time, full-body pose estimation framework based on
two RGB+D camera streams with non-overlapping views, mounted on a smart walker
used in rehabilitation. Human keypoint estimation is performed using a
two-stage neural network framework. The 2D-Stage implements a detection module
that locates body keypoints in the 2D image frames. The 3D-Stage implements a
regression module that lifts and relates the detected keypoints in both cameras
to the 3D space relative to the walker. Model predictions are low-pass filtered
to improve temporal consistency. A custom acquisition method was used to obtain
a dataset of 14 healthy subjects, which was used to train and evaluate the
proposed framework offline before deployment on the real walker equipment.
Overall keypoint detection errors of 3.73 pixels for the 2D-Stage and 44.05 mm
for the 3D-Stage were reported, with an inference time of 26.6 ms when deployed
on the constrained hardware of the walker. We present a novel
approach to patient monitoring and data-driven human-in-the-loop control in the
context of smart walkers. It extracts a complete and compact body
representation in real time from inexpensive sensors, serving as a common
base for downstream metrics-extraction solutions and human-robot interaction
applications. Despite promising results, more data should be collected on users
with impairments to assess the framework's performance as a rehabilitation tool
in real-world scenarios.
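To make the two-stage design concrete, here is a minimal sketch of the
inference loop: a per-camera 2D detector, a lifting regressor that maps both
views' keypoints into the walker frame, and an exponential moving average
standing in for the unspecified low-pass filter. All names, network sizes, and
the 17-keypoint layout are illustrative assumptions, not the authors'
implementation.

```python
# Minimal sketch of the two-stage pipeline described above (all names,
# sizes, and the keypoint count are assumptions, not the authors' code).
import torch
import torch.nn as nn

NUM_KP = 17  # assumed keypoint count

class Detector2D(nn.Module):
    """Stand-in for the 2D-Stage: locates body keypoints in an image frame."""
    def forward(self, frame):
        # A real detector (e.g., a heatmap CNN) would return per-keypoint
        # (u, v) pixel coordinates; zeros keep the sketch self-contained.
        return torch.zeros(frame.shape[0], NUM_KP, 2)

class Lifter3D(nn.Module):
    """Stand-in for the 3D-Stage: lifts both views' 2D keypoints to the
    3D space relative to the walker."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * 2 * NUM_KP, 256), nn.ReLU(),
            nn.Linear(256, 3 * NUM_KP),
        )
    def forward(self, kp_cam0, kp_cam1):
        x = torch.cat([kp_cam0.flatten(1), kp_cam1.flatten(1)], dim=1)
        return self.mlp(x).view(-1, NUM_KP, 3)

class EMAFilter:
    """Exponential moving average as a stand-in low-pass filter."""
    def __init__(self, alpha=0.3):
        self.alpha, self.state = alpha, None
    def __call__(self, x):
        self.state = x if self.state is None else (
            self.alpha * x + (1 - self.alpha) * self.state)
        return self.state

detector, lifter, smooth = Detector2D(), Lifter3D(), EMAFilter()
frame0 = frame1 = torch.zeros(1, 3, 480, 640)  # dummy RGB frames, two views
kp3d = smooth(lifter(detector(frame0), detector(frame1)).detach())
```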
Related papers
- CameraHMR: Aligning People with Perspective [54.05758012879385]
We address the challenge of accurate 3D human pose and shape estimation from monocular images.
Existing training datasets containing real images with pseudo ground truth (pGT) use SMPLify to fit SMPL to sparse 2D joint locations.
We make two contributions that improve pGT accuracy.
arXiv Detail & Related papers (2024-11-12T19:12:12Z)
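To give a flavor of the SMPLify-style fitting mentioned in the entry above,
the sketch below minimizes 2D reprojection error over a single free variable
(global translation); the real method also fits pose and shape under learned
priors, and every name and shape here is an assumption.

```python
# Hypothetical sketch of fitting body-model joints to sparse 2D joints by
# minimizing reprojection error, in the spirit of SMPLify. Only a global
# translation is optimized here; the real method also fits pose and shape.
import torch

def fit_translation(joints_3d, target_2d, K, iters=200, lr=0.01):
    """joints_3d: (J, 3) model joints; target_2d: (J, 2); K: 3x3 intrinsics."""
    t = torch.tensor([0.0, 0.0, 3.0], requires_grad=True)  # start in front of the camera
    opt = torch.optim.Adam([t], lr=lr)
    for _ in range(iters):
        cam = joints_3d + t                     # joints in camera coordinates
        proj = cam @ K.T                        # pinhole projection
        proj = proj[:, :2] / proj[:, 2:3]       # perspective divide
        loss = ((proj - target_2d) ** 2).sum()  # 2D reprojection error
        opt.zero_grad(); loss.backward(); opt.step()
    return t.detach()

K = torch.tensor([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
joints = torch.randn(17, 3) * 0.3               # toy body joints near the origin
target = torch.rand(17, 2) * torch.tensor([640.0, 480.0])
t_hat = fit_translation(joints, target, K)
```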
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
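As a rough illustration of the evidential idea in the entry above (not the
authors' BEV formulation), a Dirichlet-based head converts raw outputs into
per-class evidence, and a closed-form "vacuity" score flags detections with
little supporting evidence; the sizes below are assumptions.

```python
# Hypothetical sketch of Dirichlet-based evidential uncertainty for a
# classification head; integrating it into a BEV 3D detector is omitted.
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """logits: (N, K) raw outputs for N detections over K classes."""
    evidence = F.relu(logits)              # non-negative per-class evidence
    alpha = evidence + 1.0                 # Dirichlet concentration parameters
    strength = alpha.sum(dim=1)            # total evidence S per detection
    prob = alpha / strength.unsqueeze(1)   # expected class probabilities
    vacuity = logits.shape[1] / strength   # high when evidence is scarce
    return prob, vacuity

prob, u = evidential_uncertainty(torch.randn(4, 10))  # 4 boxes, 10 classes
```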
- Occlusion-Aware 3D Motion Interpretation for Abnormal Behavior Detection [10.782354892545651]
We present OAD2D, which detects motion abnormalities by reconstructing the 3D coordinates of mesh vertices and human joints from monocular videos.
We reformulate abnormal posture estimation by coupling it with a Motion to Text (M2T) model, in which a VQVAE is employed to quantize motion features.
Our approach demonstrates the robustness of abnormal behavior detection against severe and self-occlusions, as it reconstructs human motion trajectories in global coordinates.
arXiv Detail & Related papers (2024-07-23T18:41:16Z)
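The VQVAE quantization step mentioned in the entry above reduces to a
nearest-neighbour lookup in a learned codebook; here is a minimal sketch, with
the codebook size and feature dimension chosen arbitrarily.

```python
# Hypothetical sketch of vector quantization as used in a VQ-VAE bottleneck;
# codebook size and feature dimension are illustrative assumptions.
import torch

def vector_quantize(z, codebook):
    """Map each feature vector in z to its nearest codebook entry.
    z: (N, D) continuous motion features; codebook: (K, D) learned vectors."""
    # Pairwise squared distances between features and codebook entries.
    d = (z.pow(2).sum(1, keepdim=True)
         - 2 * z @ codebook.t()
         + codebook.pow(2).sum(1))
    idx = d.argmin(dim=1)          # discrete token per feature
    return codebook[idx], idx      # quantized features and token ids

codebook = torch.randn(512, 64)    # K=512 tokens, D=64 dims (assumed)
z = torch.randn(10, 64)            # e.g., per-frame motion features
z_q, tokens = vector_quantize(z, codebook)
```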
- UPose3D: Uncertainty-Aware 3D Human Pose Estimation with Cross-View and Temporal Cues [55.69339788566899]
UPose3D is a novel approach for multi-view 3D human pose estimation.
It improves robustness and flexibility without requiring direct 3D annotations.
arXiv Detail & Related papers (2024-04-23T00:18:00Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Deep learning-based approaches for human motion decoding in smart walkers for rehabilitation [3.8791511769387634]
Smart walkers should be able to decode human motion and needs, as early as possible.
Current walkers decode motion intention using information of wearable or embedded sensors.
A contactless approach is proposed, addressing human motion decoding as an early action recognition/detection problem.
arXiv Detail & Related papers (2023-01-13T14:29:44Z)
- A Flexible-Frame-Rate Vision-Aided Inertial Object Tracking System for Mobile Devices [3.4836209951879957]
We propose a flexible-frame-rate object pose estimation and tracking system for mobile devices.
Inertial measurement unit (IMU) pose propagation is performed on the client side for high speed tracking, and RGB image-based 3D pose estimation is performed on the server side.
Our system supports flexible frame rates up to 120 FPS and guarantees high precision and real-time tracking on low-end devices.
arXiv Detail & Related papers (2022-10-22T15:26:50Z)
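Client-side IMU pose propagation, as described in the entry above, can be
approximated by Euler-integrating the accelerometer and gyroscope between
server updates; the sketch below ignores bias estimation and noise handling,
and all shapes and values are assumptions.

```python
# Hypothetical sketch of one IMU propagation step between camera updates;
# a real tracker would also estimate sensor biases and fuse server poses.
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def propagate(p, v, q, acc, gyro, dt):
    """p, v: world-frame position/velocity; q: body-to-world quaternion."""
    g = np.array([0.0, 0.0, -9.81])
    a_world = quat_to_rot(q) @ acc + g       # body acceleration, world frame
    p = p + v * dt + 0.5 * a_world * dt**2   # integrate position
    v = v + a_world * dt                     # integrate velocity
    w, x, y, z = q                           # q_dot = 0.5 * q * (0, gyro)
    ox, oy, oz = gyro
    dq = 0.5 * np.array([-x*ox - y*oy - z*oz,
                          w*ox + y*oz - z*oy,
                          w*oy - x*oz + z*ox,
                          w*oz + x*oy - y*ox])
    q = q + dq * dt
    return p, v, q / np.linalg.norm(q)       # renormalize the quaternion
```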
- Robot Self-Calibration Using Actuated 3D Sensors [0.0]
This paper treats robot calibration as an offline SLAM problem, where scanning poses are linked to a fixed point in space by a moving kinematic chain.
As such, the presented framework allows robot calibration using nothing but an arbitrary eye-in-hand depth sensor.
A detailed evaluation of the system is shown on a real robot with various attached 3D sensors.
arXiv Detail & Related papers (2022-06-07T16:35:08Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Real-Time Multi-View 3D Human Pose Estimation using Semantic Feedback to Smart Edge Sensors [28.502280038100167]
2D joint detection for each camera view is performed locally on a dedicated embedded inference processor.
3D poses are recovered from 2D joints on a central backend, based on triangulation and a body model.
The whole pipeline is capable of real-time operation.
arXiv Detail & Related papers (2021-06-28T14:00:00Z)
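The backend triangulation mentioned in the entry above has a classic
closed-form core: the direct linear transform (DLT). A minimal sketch follows;
the paper's version additionally constrains the result with a body model.

```python
# Hypothetical sketch of multi-view triangulation via the direct linear
# transform (DLT); the paper's backend also applies a body model on top.
import numpy as np

def triangulate(points_2d, projections):
    """Recover a 3D point from its 2D observations in several views.
    points_2d: list of (u, v); projections: list of 3x4 camera matrices."""
    A = []
    for (u, v), P in zip(points_2d, projections):
        # Each view contributes two linear constraints on X = (x, y, z, 1).
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    # Least-squares solution: right singular vector of the smallest value.
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]

P0 = np.hstack([np.eye(3), np.zeros((3, 1))])        # reference camera
P1 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])  # second, shifted view
print(triangulate([(0.0, 0.0), (-0.2, 0.0)], [P0, P1]))  # ~ [0, 0, 5]
```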
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our method is validated on complex quadruped robot dynamics, and the approach can be applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
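One simple way to fold predicted location uncertainty into MPC collision
constraints, in the spirit of the entry above (though not necessarily the
authors' formulation), is to inflate each obstacle's keep-out radius by a
multiple of its predicted position standard deviation:

```python
# Hypothetical sketch: tighten a collision constraint by inflating the
# obstacle radius with the RNN-predicted position covariance (assumed 2x2).
import numpy as np

def inflated_radius(base_radius, cov, k=2.0):
    """Add k standard deviations along the worst-case direction."""
    sigma_max = np.sqrt(np.linalg.eigvalsh(cov).max())
    return base_radius + k * sigma_max

cov = np.array([[0.04, 0.01], [0.01, 0.09]])  # example predicted covariance
print(inflated_radius(0.5, cov))              # keep-out radius in meters
```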
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.