OPPH: A Vision-Based Operator for Measuring Body Movements for Personal Healthcare
- URL: http://arxiv.org/abs/2408.09409v1
- Date: Sun, 18 Aug 2024 08:52:22 GMT
- Title: OPPH: A Vision-Based Operator for Measuring Body Movements for Personal Healthcare
- Authors: Chen Long-fei, Subramanian Ramamoorthy, Robert B. Fisher
- Abstract summary: Vision-based motion estimation methods show promise in accurately and unobtrusively estimating human body motion for healthcare purposes.
However, these methods are not specifically designed for healthcare purposes and face challenges in real-world applications.
We propose the OPPH operator to enhance current vision-based motion estimation methods.
- Score: 19.468689776476104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision-based motion estimation methods show promise in accurately and unobtrusively estimating human body motion for healthcare purposes. However, these methods are not specifically designed for healthcare purposes and face challenges in real-world applications. Human pose estimation methods often lack the accuracy needed for detecting fine-grained, subtle body movements, while optical flow-based methods struggle with poor lighting conditions and unseen real-world data. These issues result in human body motion estimation errors, particularly during critical medical situations where the body is motionless, such as during unconsciousness. To address these challenges and improve the accuracy of human body motion estimation for healthcare purposes, we propose the OPPH operator designed to enhance current vision-based motion estimation methods. This operator, which considers human body movement and noise properties, functions as a multi-stage filter. Results tested on two real-world and one synthetic human motion dataset demonstrate that the operator effectively removes real-world noise, significantly enhances the detection of motionless states, maintains the accuracy of estimating active body movements, and maintains long-term body movement trends. This method could be beneficial for analyzing both critical medical events and chronic medical conditions.
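The abstract describes OPPH as a multi-stage filter that accounts for human movement and noise properties, suppressing real-world noise so that motionless states (e.g., unconsciousness) are not masked by spurious motion. A minimal sketch of that idea, assuming a 1-D motion-magnitude signal; the function name, smoothing window, and noise-floor threshold below are illustrative stand-ins, not the paper's actual operator:

```python
import numpy as np

def multi_stage_motion_filter(motion, window=5, noise_floor=0.02):
    """Hypothetical multi-stage filter for a 1-D motion-magnitude signal.

    Stage 1: moving-average smoothing to suppress frame-level jitter.
    Stage 2: noise-floor gating, so sub-threshold residual noise is
    reported as a motionless state (zero) rather than spurious motion,
    while supra-threshold active movement passes through unchanged.
    """
    motion = np.asarray(motion, dtype=float)
    # Stage 1: temporal smoothing with a simple box filter.
    kernel = np.ones(window) / window
    smoothed = np.convolve(motion, kernel, mode="same")
    # Stage 2: zero out values below the assumed noise floor.
    return np.where(smoothed < noise_floor, 0.0, smoothed)
```

On this sketch, a signal of pure low-level sensor noise is gated to all zeros (a detected motionless state), while sustained large movements are preserved, mirroring the trade-off the abstract claims.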
Related papers
- COIN: Control-Inpainting Diffusion Prior for Human and Camera Motion Estimation [98.05046790227561]
COIN is a control-inpainting motion diffusion prior that enables fine-grained control to disentangle human and camera motions.
COIN outperforms the state-of-the-art methods in terms of global human motion estimation and camera motion estimation.
arXiv Detail & Related papers (2024-08-29T10:36:29Z)
- Aligning Human Motion Generation with Human Perceptions [51.831338643012444]
We propose a data-driven approach to bridge the gap by introducing a large-scale human perceptual evaluation dataset, MotionPercept, and a human motion critic model, MotionCritic.
Our critic model offers a more accurate metric for assessing motion quality and could be readily integrated into the motion generation pipeline.
arXiv Detail & Related papers (2024-07-02T14:01:59Z)
- SISMIK for brain MRI: Deep-learning-based motion estimation and model-based motion correction in k-space [0.0]
We propose a retrospective method for motion estimation and correction for 2D Spin-Echo scans of the brain.
The method leverages the power of deep neural networks to estimate motion parameters in k-space.
It uses a model-based approach to restore degraded images and avoid "hallucinations".
arXiv Detail & Related papers (2023-12-20T17:38:56Z)
- Deep state-space modeling for explainable representation, analysis, and generation of professional human poses [0.0]
This paper introduces three novel methods for creating explainable representations of human movement.
The trained models are used for the full-body dexterity analysis of expert professionals.
arXiv Detail & Related papers (2023-04-13T08:13:10Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- Imposing Temporal Consistency on Deep Monocular Body Shape and Pose Estimation [67.23327074124855]
This paper presents an elegant solution for the integration of temporal constraints in the fitting process.
We derive parameters of a sequence of body models, representing shape and motion of a person, including jaw poses, facial expressions, and finger poses.
Our approach enables the derivation of realistic 3D body models from image sequences, including facial expression and articulated hands.
arXiv Detail & Related papers (2022-02-07T11:11:55Z)
- Unsupervised Landmark Detection Based Spatiotemporal Motion Estimation for 4D Dynamic Medical Images [16.759486905827433]
We provide a novel motion estimation framework of Dense-Sparse-Dense (DSD), which comprises two stages.
In the first stage, we process the raw dense image to extract sparse landmarks to represent the target organ anatomical topology.
In the second stage, we derive the sparse motion displacement from the extracted sparse landmarks of two images of different time points.
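The two DSD stages described above can be sketched schematically: a dense frame is reduced to sparse landmarks, and displacements are then taken between corresponding landmarks at two time points. The brightest-pixel "landmark detector" below is purely a placeholder for the paper's learned detection, and correspondence between the two landmark sets is assumed given:

```python
import numpy as np

def extract_landmarks(frame, k=8):
    """Stage 1 (placeholder): take the k highest-intensity pixels as
    'landmarks'. The paper uses unsupervised learned landmark detection;
    this stand-in only illustrates the dense-to-sparse step."""
    flat = np.argsort(frame.ravel())[-k:]
    rows, cols = np.unravel_index(flat, frame.shape)
    return np.stack([rows, cols], axis=1).astype(float)

def sparse_motion(landmarks_t0, landmarks_t1):
    """Stage 2: per-landmark displacement between two time points,
    assuming row i of each array is the same anatomical landmark."""
    return np.asarray(landmarks_t1, float) - np.asarray(landmarks_t0, float)
```

A dense motion field would then be interpolated from these sparse displacements, completing the dense-sparse-dense pipeline the summary names.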
arXiv Detail & Related papers (2021-09-30T02:06:02Z)
- From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
The prompt recognition of movement carefulness from observing the partner's action will allow robots to adapt their actions on the object to show the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z)
- A Spatio-temporal Attention-based Model for Infant Movement Assessment from Videos [44.71923220732036]
We develop a new method for fidgety movement assessment using human poses extracted from short clips.
Human poses capture only relevant motion profiles of joints and limbs and are free from irrelevant appearance artifacts.
Our experiments show that the proposed method achieves the ROC-AUC score of 81.87%, significantly outperforming existing competing methods with better interpretability.
arXiv Detail & Related papers (2021-05-20T14:31:54Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.