A Single RGB Camera Based Gait Analysis with a Mobile Tele-Robot for
Healthcare
- URL: http://arxiv.org/abs/2002.04700v4
- Date: Sun, 15 Mar 2020 03:27:52 GMT
- Title: A Single RGB Camera Based Gait Analysis with a Mobile Tele-Robot for
Healthcare
- Authors: Ziyang Wang
- Abstract summary: This work focuses on the analysis of gait, which is widely adopted for joint correction and for assessing lower-limb and spinal problems.
On the hardware side, we design a novel marker-less gait analysis device using a low-cost RGB camera mounted on a mobile tele-robot.
- Score: 9.992387025633805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With growing awareness of quality of life, there is an increasing need
for health monitoring devices running robust algorithms in home environments.
Health monitoring technologies enable real-time analysis of users' health
status, offering long-term healthcare support and reducing hospitalization
time. The purpose of this work is twofold. On the software side, we focus on the
analysis of gait, which is widely adopted for joint correction and for assessing
lower-limb and spinal problems. On the hardware side, we design a novel marker-less
gait analysis device using a low-cost RGB camera mounted on a mobile
tele-robot. As gait analysis with a single camera is much more challenging than
in previous works using multiple cameras, an RGB-D camera, or wearable sensors,
we propose using vision-based human pose estimation approaches. More
specifically, based on the output of two state-of-the-art human pose estimation
models (Openpose and VNect), we devise measurements for four bespoke gait
parameters: inversion/eversion, dorsiflexion/plantarflexion, ankle and foot
progression angles. We thereby classify walking patterns into normal,
supination, pronation, and limp. We also illustrate how to run the proposed
machine learning models in low-resource environments such as a single
entry-level CPU. Experiments show that our single RGB camera method achieves
competitive performance compared to state-of-the-art methods based on depth
cameras or multi-camera motion capture systems, at a lower hardware cost.
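The paper derives joint angles such as the foot progression angle from pose-estimation keypoints. As a minimal sketch of the idea (not the authors' implementation: the keypoint names, 2D simplification, and walking-direction vector are assumptions for illustration), one can compute the signed angle between the heel-to-toe axis and the walking direction:

```python
import math

def foot_progression_angle(heel, toe, walk_dir=(0.0, 1.0)):
    """Signed angle (degrees) between the foot axis (heel -> toe) and the
    walking direction; positive = toe-out, negative = toe-in.
    Keypoints are (x, y) pairs, e.g. taken from a pose-estimation skeleton."""
    fx, fy = toe[0] - heel[0], toe[1] - heel[1]
    wx, wy = walk_dir
    # Signed angle from the 2D cross product and dot product of the two vectors.
    cross = fx * wy - fy * wx
    dot = fx * wx + fy * wy
    return math.degrees(math.atan2(cross, dot))

# A foot aligned with the walking direction has zero progression angle;
# a foot rotated 45 degrees outward yields +45.
print(foot_progression_angle((0.0, 0.0), (0.0, 1.0)))  # 0.0
print(foot_progression_angle((0.0, 0.0), (1.0, 1.0)))  # ~45.0
```

Thresholding such angles against clinical norms is one plausible way the classification into normal, supination, pronation, and limp could be driven.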
Related papers
- Multi-Camera Hand-Eye Calibration for Human-Robot Collaboration in Industrial Robotic Workcells [3.76054468268713]
In industrial scenarios, effective human-robot collaboration relies on multi-camera systems to robustly monitor human operators.
We introduce an innovative and robust multi-camera hand-eye calibration method, designed to optimize each camera's pose relative to both the robot's base and to each other camera.
We demonstrate the superior performance of our method through comprehensive experiments employing the METRIC dataset and real-world data collected in industrial scenarios.
arXiv Detail & Related papers (2024-06-17T10:23:30Z) - Real-time, accurate, and open source upper-limb musculoskeletal analysis using a single RGBD camera [0.14999444543328289]
Biomechanical biofeedback may enhance rehabilitation and provide clinicians with more objective task evaluation.
Our open-source approach offers a user-friendly solution for high-fidelity upper-limb kinematics using a single low-cost RGBD camera.
arXiv Detail & Related papers (2024-06-14T13:20:05Z) - VICAN: Very Efficient Calibration Algorithm for Large Camera Networks [49.17165360280794]
We introduce a novel methodology that extends Pose Graph Optimization techniques.
We consider the bipartite graph encompassing cameras, object poses evolving dynamically, and camera-object relative transformations at each time step.
Our framework retains compatibility with traditional PGO solvers, but its efficacy benefits from a custom-tailored optimization scheme.
arXiv Detail & Related papers (2024-03-25T17:47:03Z) - XAI-based gait analysis of patients walking with Knee-Ankle-Foot
orthosis using video cameras [1.8749305679160366]
This paper presents a novel system for gait analysis robust to camera movements and providing explanations for its output.
The proposed system employs super-resolution and pose estimation during pre-processing.
It then identifies seven features: stride length, step length and duration of single support for the orthotic and non-orthotic leg, cadence, and speed.
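Two of the listed features, cadence and speed, follow directly from detected step events. A minimal sketch, assuming hypothetical heel-strike timestamps and per-step lengths (the detection of these events is outside this fragment):

```python
def cadence_and_speed(strike_times_s, step_lengths_m):
    """Cadence (steps/min) and mean walking speed (m/s) computed from
    consecutive heel-strike timestamps and the length of each step."""
    if len(strike_times_s) < 2:
        raise ValueError("need at least two heel strikes")
    duration = strike_times_s[-1] - strike_times_s[0]
    steps = len(strike_times_s) - 1
    cadence = 60.0 * steps / duration     # steps per minute
    speed = sum(step_lengths_m) / duration  # metres per second
    return cadence, speed

# Two 0.7 m steps in one second: 120 steps/min at 1.4 m/s.
print(cadence_and_speed([0.0, 0.5, 1.0], [0.7, 0.7]))
```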
arXiv Detail & Related papers (2024-02-25T19:05:10Z) - EventTransAct: A video transformer-based framework for Event-camera
based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB video.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
In order to better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z) - Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose
Estimation of Surgical Instruments [66.74633676595889]
We present a multi-camera capture setup consisting of static and head-mounted cameras.
Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre.
Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments.
arXiv Detail & Related papers (2023-05-05T13:42:19Z) - Spectral Sensitivity Estimation Without a Camera [6.599344783327053]
A number of problems in computer vision and related fields would be mitigated if camera spectral sensitivities were known.
We propose a framework for spectral sensitivity estimation that does not require any hardware, but also does not require physical access to the camera itself.
We provide our code and predicted sensitivities for 1,000+ cameras, and discuss which tasks can become trivial when camera responses are available.
arXiv Detail & Related papers (2023-04-23T06:18:07Z) - RGB2Hands: Real-Time Tracking of 3D Hand Interactions from Monocular RGB
Video [76.86512780916827]
We present the first real-time method for motion capture of skeletal pose and 3D surface geometry of hands from a single RGB camera.
In order to address the inherent depth ambiguities in RGB data, we propose a novel multi-task CNN.
We experimentally verify the individual components of our RGB two-hand tracking and 3D reconstruction pipeline.
arXiv Detail & Related papers (2021-06-22T12:53:56Z) - Multi-view Human Pose and Shape Estimation Using Learnable Volumetric
Aggregation [0.0]
We propose a learnable aggregation approach to reconstruct 3D human body pose and shape from calibrated multi-view images.
Compared to previous approaches, our framework shows higher accuracy and greater promise for real-time prediction, given its cost efficiency.
arXiv Detail & Related papers (2020-11-26T18:33:35Z) - Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z) - Active Perception with A Monocular Camera for Multiscopic Vision [50.370074098619185]
We design a multiscopic vision system that utilizes a low-cost monocular RGB camera to acquire accurate depth estimation for robotic applications.
Unlike multi-view stereo with images captured at unconstrained camera poses, the proposed system actively controls a robot arm with a mounted camera to capture a sequence of images in horizontally or vertically aligned positions with the same parallax.
arXiv Detail & Related papers (2020-01-22T08:46:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.