Predict joint angle of body parts based on sequence pattern recognition
- URL: http://arxiv.org/abs/2405.17369v1
- Date: Mon, 27 May 2024 17:24:11 GMT
- Title: Predict joint angle of body parts based on sequence pattern recognition
- Authors: Amin Ahmadi Kasani, Hedieh Sajedi
- Abstract summary: Ergonomists use ergonomic risk assessments based on visual observation of the workplace.
Some parts of the workers' bodies may not be in the camera's field of view.
It is difficult to predict the position of body parts when they are not visible in the image.
- Score: 5.371337604556311
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The way body parts are positioned and moved in the workplace can cause pain and physical harm. Therefore, ergonomists perform ergonomic risk assessments based on visual observation of the workplace, or review pictures and videos taken there. Sometimes the workers in the photos are not captured under ideal conditions: some parts of their bodies may be outside the camera's field of view, obscured by objects, or self-occluded, which is the main problem in 2D human posture recognition. It is difficult to predict the position of body parts when they are not visible in the image, and geometric mathematical methods are not entirely suitable for this purpose. Therefore, we created a dataset of artificial images of a 3D human model, specifically for painful postures, together with real human photos from different viewpoints. Each image was captured with a predefined joint angle for each 3D model or human subject. We created a variety of images, including images in which some body parts are not visible. Because the joint angles were defined beforehand, we could study these cases by converting the input images into a sequence of joint connections between predefined body parts and extracting the desired joint angle with a convolutional neural network. In the end, we obtained a root mean square error (RMSE) of 12.89 and a mean absolute error (MAE) of 4.7 on the test dataset.
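The paper does not include code, but the pipeline it describes (a fixed-length sequence of joint connections fed to a convolutional network that regresses a single joint angle, evaluated with RMSE and MAE) can be sketched as follows. This is a minimal illustration in PyTorch under stated assumptions: the input is taken to be 2D joint-connection coordinates, and the class name JointAngleCNN, the layer sizes, and the synthetic data are all hypothetical, not the authors' architecture.

```python
import torch
import torch.nn as nn

class JointAngleCNN(nn.Module):
    """Hypothetical 1D CNN: sequence of joint connections -> one joint angle."""
    def __init__(self, in_channels: int = 2, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the joint sequence
            nn.Flatten(),
            nn.Linear(hidden, 1),     # regress a scalar angle in degrees
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, sequence_length)
        return self.net(x).squeeze(-1)

# Synthetic stand-in for the dataset: 8 samples, 2D coordinates,
# 16 joint connections per sample (all values are placeholders).
x = torch.randn(8, 2, 16)
y_true = torch.rand(8) * 180.0  # ground-truth angles in degrees

model = JointAngleCNN()
y_pred = model(x)

# The two metrics reported in the paper:
rmse = torch.sqrt(torch.mean((y_pred - y_true) ** 2))
mae = torch.mean(torch.abs(y_pred - y_true))
print(f"RMSE: {rmse.item():.2f}, MAE: {mae.item():.2f}")
```

Pooling over the sequence dimension keeps the sketch independent of the exact number of joint connections per sample; the paper's actual input encoding and network depth are not specified here.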
Related papers
- Implicit 3D Human Mesh Recovery using Consistency with Pose and Shape from Unseen-view [23.30144908939099]
From an image of a person, we can easily infer the natural 3D pose and shape of the person even if ambiguity exists.
We propose "Implicit 3D Human Mesh Recovery (ImpHMR)" that can implicitly imagine a person in 3D space at the feature-level.
arXiv Detail & Related papers (2023-06-30T13:37:24Z)
- Zolly: Zoom Focal Length Correctly for Perspective-Distorted Human Mesh Reconstruction [66.10717041384625]
Zolly is the first 3DHMR method focusing on perspective-distorted images.
We propose a new camera model and a novel 2D representation, termed distortion image, which describes the 2D dense distortion scale of the human body.
We extend two real-world datasets tailored for this task, both containing perspective-distorted human images.
arXiv Detail & Related papers (2023-03-24T04:22:41Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- Learning Visibility for Robust Dense Human Body Estimation [78.37389398573882]
Estimating 3D human pose and shape from 2D images is a crucial yet challenging task.
We learn dense human body estimation that is robust to partial observations.
We obtain pseudo ground-truths of visibility labels from dense UV correspondences and train a neural network to predict visibility along with 3D coordinates.
arXiv Detail & Related papers (2022-08-23T00:01:05Z)
- Single-view 3D Body and Cloth Reconstruction under Complex Poses [37.86174829271747]
We extend existing implicit function-based models to deal with images of humans with arbitrary poses and self-occluded limbs.
We learn an implicit function that maps the input image to a 3D body shape with a low level of detail.
We then learn a displacement map, conditioned on the smoothed surface, which encodes the high-frequency details of the clothes and body.
arXiv Detail & Related papers (2022-05-09T07:34:06Z)
- COAP: Compositional Articulated Occupancy of People [28.234772596912162]
We present a novel neural implicit representation for articulated human bodies.
We employ a part-aware encoder-decoder architecture to learn neural articulated occupancy.
Our method largely outperforms existing solutions in terms of both efficiency and accuracy.
arXiv Detail & Related papers (2022-04-13T06:02:20Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.