XAI-based gait analysis of patients walking with Knee-Ankle-Foot
orthosis using video cameras
- URL: http://arxiv.org/abs/2402.16175v1
- Date: Sun, 25 Feb 2024 19:05:10 GMT
- Title: XAI-based gait analysis of patients walking with Knee-Ankle-Foot
orthosis using video cameras
- Authors: Arnav Mishra, Aditi Shetkar, Ganesh M. Bapat, Rajdeep Ojha, Tanmay
Tulsidas Verlekar
- Abstract summary: This paper presents a novel system for gait analysis robust to camera movements and providing explanations for its output.
The proposed system employs super-resolution and pose estimation during pre-processing.
It then identifies seven features: Stride Length, Step Length and Duration of single support of the orthotic and non-orthotic leg, Cadence, and Speed.
- Score: 1.8749305679160366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent technological advancements in artificial intelligence and computer
vision have enabled gait analysis on portable devices such as cell phones.
However, most state-of-the-art vision-based systems still impose numerous
constraints for capturing a patient's video, such as using a static camera and
maintaining a specific distance from it. While these constraints are manageable
under professional observation, they pose challenges in home settings. Another
issue with most vision-based systems is their output, typically a
classification label and confidence value, whose reliability is often
questioned by medical professionals. This paper addresses these challenges by
presenting a novel system for gait analysis robust to camera movements and
providing explanations for its output. The study utilizes a dataset comprising
videos of subjects wearing two types of Knee Ankle Foot Orthosis (KAFO), namely
"Locked Knee" and "Semi-flexion," for mobility, along with metadata and ground
truth for explanations. The ground truth highlights the statistical
significance of seven features captured using motion capture systems to
differentiate between the two gaits. To address camera movement challenges, the
proposed system employs super-resolution and pose estimation during
pre-processing. It then identifies the seven features (Stride Length, Step
Length and Duration of single support of the orthotic and non-orthotic leg,
Cadence, and Speed) using the skeletal output of pose estimation. These
features train a multi-layer perceptron, with its output explained by
highlighting the features' contribution to classification. While most
state-of-the-art systems struggle with processing the video or training on the
proposed dataset, our system achieves an average accuracy of 94%. The model's
explainability is validated using ground truth and can be considered reliable.
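The pipeline the abstract describes (pose-estimation keypoints → spatiotemporal gait features → classifier → per-feature attribution) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the heel-strike rule (forward velocity crossing to zero), the synthetic ankle trajectory, and the linear attribution stand-in for the MLP explanation are all assumptions, and only three of the seven features are computed.

```python
import numpy as np

def gait_features(ankle_x, fps):
    """Estimate stride length, cadence, and speed from one ankle's
    horizontal trajectory (metres), sampled at `fps` frames per second.

    A heel strike is approximated as the frame where the ankle's forward
    velocity drops from positive to <= 0 (the foot stops advancing).
    This event rule is an illustrative assumption, not the paper's method.
    """
    v = np.diff(ankle_x)
    # indices where velocity transitions from positive to non-positive
    strikes = np.where((v[:-1] > 0) & (v[1:] <= 0))[0] + 1
    stride_len = float(np.mean(np.diff(ankle_x[strikes])))   # m per stride
    stride_t = float(np.mean(np.diff(strikes))) / fps        # s per stride
    return {
        "stride_length": stride_len,
        "cadence": 2 * 60.0 / stride_t,  # two steps per stride, in steps/min
        "speed": stride_len / stride_t,  # m/s
    }

def linear_contributions(weights, features):
    """Per-feature contribution to a linear score: w_i * x_i.
    A toy stand-in for attributing an MLP's output to its input features."""
    return {k: weights[k] * features[k] for k in features}

# Synthetic gait: five 1-second strides at 30 fps, each with a 0.5 s
# stance (foot planted) and a 0.5 s swing advancing the foot by 1 m.
fps = 30
x, p = [], 0.0
for _ in range(5):
    x += [p] * 15                                   # stance phase
    x += list(p + np.linspace(1 / 15, 1.0, 15))     # swing phase
    p += 1.0
ankle_x = np.array(x)

feats = gait_features(ankle_x, fps)
# ≈ {'stride_length': 1.0, 'cadence': 120.0, 'speed': 1.0}
```

In the paper the classifier is a multi-layer perceptron and the explanation highlights each feature's contribution to the "Locked Knee" vs. "Semi-flexion" decision; the `linear_contributions` helper above only mimics that idea for a linear score.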
Related papers
- Learning Physics From Video: Unsupervised Physical Parameter Estimation for Continuous Dynamical Systems [49.11170948406405]
State-of-the-art in automatic parameter estimation from video is addressed by training supervised deep networks on large datasets.
We propose a method to estimate the physical parameters of any known, continuous governing equation from single videos.
arXiv Detail & Related papers (2024-10-02T09:44:54Z)
- Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower
Body Motion Estimation Using Smart Textile [2.2008680042670123]
We present a multimodal dataset with benchmarks collected using a novel pair of Intelligent Knee Sleeves for human pose estimation.
Our system utilizes synchronized datasets that comprise time-series data from the Knee Sleeves and the corresponding ground truth labels from the visualized motion capture camera system.
We employ these to generate 3D human models solely based on the wearable data of individuals performing different activities.
arXiv Detail & Related papers (2023-10-02T00:34:21Z)
- Pose2Gait: Extracting Gait Features from Monocular Video of Individuals
with Dementia [3.2739089842471136]
Video-based ambient monitoring of gait for older adults with dementia has the potential to detect negative changes in health.
Computer vision-based pose tracking models can process video data automatically and extract joint locations.
These models are not optimized for gait analysis on older adults or clinical populations.
arXiv Detail & Related papers (2023-08-22T14:59:17Z)
- Learning to Exploit Temporal Structure for Biomedical Vision-Language
Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z)
- Automated Mobility Context Detection with Inertial Signals [7.71058263701836]
The primary goal of this paper is the investigation of context detection for remote monitoring of daily motor functions.
We aim to understand whether inertial signals sampled with wearable accelerometers, provide reliable information to classify gait-related activities as either indoor or outdoor.
arXiv Detail & Related papers (2022-05-16T09:34:43Z)
- Federated Remote Physiological Measurement with Imperfect Data [10.989271258156883]
Growing need for technology that supports remote healthcare is being highlighted by an aging population and the COVID-19 pandemic.
In health-related machine learning applications the ability to learn predictive models without data leaving a private device is attractive.
Camera-based remote physiological sensing facilitates scalable and low-cost measurement.
arXiv Detail & Related papers (2022-03-11T05:26:46Z)
- Learning Dynamics via Graph Neural Networks for Human Pose Estimation
and Tracking [98.91894395941766]
We propose a novel online approach to learning the pose dynamics, which are independent of pose detections in the current frame.
Specifically, we derive this prediction of dynamics through a graph neural network(GNN) that explicitly accounts for both spatial-temporal and visual information.
Experiments on PoseTrack 2017 and PoseTrack 2018 datasets demonstrate that the proposed method achieves results superior to the state of the art on both human pose estimation and tracking tasks.
arXiv Detail & Related papers (2021-06-07T16:36:50Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for
Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- A Single RGB Camera Based Gait Analysis with a Mobile Tele-Robot for
Healthcare [9.992387025633805]
This work focuses on the analysis of gait, which is widely adopted for joint correction and assessing any lower limb or spinal problem.
On the hardware side, we design a novel marker-less gait analysis device using a low-cost RGB camera mounted on a mobile tele-robot.
arXiv Detail & Related papers (2020-02-11T21:42:22Z)
- End-to-End Models for the Analysis of System 1 and System 2 Interactions
based on Eye-Tracking Data [99.00520068425759]
We propose a computational method, within a modified visual version of the well-known Stroop test, for the identification of different tasks and potential conflicts events.
A statistical analysis shows that the selected variables can characterize the variation of attentive load within different scenarios.
We show that Machine Learning techniques allow to distinguish between different tasks with a good classification accuracy.
arXiv Detail & Related papers (2020-02-03T17:46:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.