Deep state-space modeling for explainable representation, analysis, and
generation of professional human poses
- URL: http://arxiv.org/abs/2304.14502v2
- Date: Wed, 24 May 2023 09:39:48 GMT
- Authors: Brenda Elizabeth Olivas-Padilla, Alina Glushkova, and Sotiris
Manitsaris
- Abstract summary: This paper introduces three novel methods for creating explainable representations of human movement.
The trained models are used for the full-body dexterity analysis of expert professionals.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The analysis of human movements has been extensively studied due to its wide
variety of practical applications, such as human-robot interaction, human
learning applications, or clinical diagnosis. Nevertheless, the
state-of-the-art still faces scientific challenges when modeling human
movements. First, new models must account for the stochasticity of human
movement and the physical structure of the human body in order to accurately
predict the evolution of full-body motion descriptors over time. Second, when
deep learning algorithms are used, their explainability with respect to body
posture predictions must be improved, since they lack comprehensible
representations of human movement. This paper addresses these challenges by
introducing three novel methods for creating explainable representations of
human movement. In this study, human body movement is formulated as a
state-space model adhering to the structure of the Gesture Operational Model
(GOM), whose parameters are estimated through the application of deep learning
and statistical algorithms. The trained models are used for the full-body
dexterity analysis of expert professionals, in which dynamic associations
between body joints are identified, and for artificially generating
professional movements.
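The state-space formulation above can be illustrated with a minimal sketch: each motion descriptor evolves from its own history and from associated joints, plus a stochastic term. This is a toy linear-Gaussian example, not the paper's Gesture Operational Model; the matrices `A`, `B`, the number of descriptors, and the noise scale are hypothetical placeholders, not estimated parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_descriptors = 4  # hypothetical number of tracked motion descriptors
A = np.eye(n_descriptors) * 0.9  # self-transition (own-history) weights
# Off-diagonal cross-joint associations, illustrating dynamic associations
# between body joints; values here are arbitrary.
B = 0.05 * (np.ones((n_descriptors, n_descriptors)) - np.eye(n_descriptors))

def step(x, noise_scale=0.01):
    """One state transition: x_next = (A + B) @ x + w, with Gaussian noise w
    standing in for the stochasticity of human movement."""
    w = rng.normal(scale=noise_scale, size=x.shape)
    return (A + B) @ x + w

# Roll out a short trajectory from an initial pose vector; a generative use
# of the model would sample such rollouts to produce artificial movements.
x = np.ones(n_descriptors)
trajectory = [x]
for _ in range(10):
    x = step(x)
    trajectory.append(x)

trajectory = np.stack(trajectory)
print(trajectory.shape)  # (11, 4)
```

In the paper, the analogous transition parameters are estimated with deep learning and statistical algorithms rather than fixed by hand, and inspecting the learned cross-joint weights is what makes the representation explainable.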
Related papers
- Persistent-Transient Duality: A Multi-mechanism Approach for Modeling
Human-Object Interaction [58.67761673662716]
Humans are highly adaptable, swiftly switching between different modes to handle different tasks, situations and contexts.
In Human-Object Interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale consistent plan for the whole activity and (2) the small-scale child interactive actions that start and end along the timeline.
This work proposes to model two concurrent mechanisms that jointly control human motion.
arXiv Detail & Related papers (2023-07-24T12:21:33Z)
- Motion Capture Benchmark of Real Industrial Tasks and Traditional Crafts
for Human Movement Analysis [0.0]
This paper presents seven datasets recorded using inertial-based motion capture.
The datasets contain professional gestures performed in situ, under real conditions, by industrial operators and skilled craftsmen.
arXiv Detail & Related papers (2023-04-03T10:29:24Z)
- Adversarial Attention for Human Motion Synthesis [3.9378507882929563]
We present a novel method for controllable human motion synthesis by applying attention-based probabilistic deep adversarial models with end-to-end training.
We show that we can generate synthetic human motion over both short- and long-time horizons through the use of adversarial attention.
arXiv Detail & Related papers (2022-04-25T16:12:42Z)
- H4D: Human 4D Modeling by Learning Neural Compositional Representation [75.34798886466311]
This work presents a novel framework that can effectively learn a compact and compositional representation for dynamic humans.
A simple yet effective linear motion model is proposed to provide a rough and regularized motion estimation.
Experiments demonstrate that our method is not only effective in recovering dynamic humans with accurate motion and detailed geometry, but also amenable to various 4D human-related tasks.
arXiv Detail & Related papers (2022-03-02T17:10:49Z)
- Imposing Temporal Consistency on Deep Monocular Body Shape and Pose
Estimation [67.23327074124855]
This paper presents an elegant solution for the integration of temporal constraints in the fitting process.
We derive parameters of a sequence of body models, representing shape and motion of a person, including jaw poses, facial expressions, and finger poses.
Our approach enables the derivation of realistic 3D body models from image sequences, including facial expression and articulated hands.
arXiv Detail & Related papers (2022-02-07T11:11:55Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human
Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- Learning Local Recurrent Models for Human Mesh Recovery [50.85467243778406]
We present a new method for video mesh recovery that divides the human mesh into several local parts following the standard skeletal model.
We then model the dynamics of each local part with separate recurrent models, with each model conditioned appropriately based on the known kinematic structure of the human body.
This results in a structure-informed local recurrent learning architecture that can be trained in an end-to-end fashion with available annotations.
arXiv Detail & Related papers (2021-07-27T14:30:33Z)
- Improving Human Motion Prediction Through Continual Learning [2.720960618356385]
Human motion prediction is an essential component for enabling closer human-robot collaboration.
This task is compounded by the variability of human motion, both at a skeletal level due to the varying sizes of humans and at a motion level due to individual movement idiosyncrasies.
We propose a modular sequence learning approach that allows end-to-end training while also having the flexibility of being fine-tuned.
arXiv Detail & Related papers (2021-07-01T15:34:41Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose
Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Modelling Human Kinetics and Kinematics during Walking using
Reinforcement Learning [0.0]
We develop an automated method to generate 3D human walking motion in simulation which is comparable to real-world human motion.
We show that the method generalizes well across human-subjects with different kinematic structure and gait-characteristics.
arXiv Detail & Related papers (2021-03-15T04:01:20Z)
- How Do We Move: Modeling Human Movement with System Dynamics [34.13127840909941]
We learn the human movement with Generative Adversarial Imitation Learning.
We are the first to learn to model the state transition of moving agents with system dynamics.
arXiv Detail & Related papers (2020-03-01T23:43:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.