Using joint angles based on the international biomechanical standards for human action recognition and related tasks
- URL: http://arxiv.org/abs/2406.17443v1
- Date: Tue, 25 Jun 2024 10:23:58 GMT
- Title: Using joint angles based on the international biomechanical standards for human action recognition and related tasks
- Authors: Kevin Schlegel, Lei Jiang, Hao Ni
- Abstract summary: We show how to convert keypoint data into joint angles that uniquely describe a pose.
We experimentally demonstrate that the joint angle representation of keypoint data is suitable for machine learning applications.
The use of joint angles as a human-meaningful representation of kinematic data is particularly promising for applications where interpretability and dialogue with human experts are important.
- Score: 7.789894769085375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Keypoint data has received a considerable amount of attention in machine learning for tasks like action detection and recognition. However, human experts in movement, such as doctors, physiotherapists, sports scientists and coaches, use a notion of joint angles standardised by the International Society of Biomechanics to precisely and efficiently communicate static body poses and movements. In this paper, we introduce the basic biomechanical notions and show how they can be used to convert common keypoint data into joint angles that uniquely describe the given pose and have various desirable mathematical properties, such as independence of both the camera viewpoint and the person performing the action. We experimentally demonstrate that the joint angle representation of keypoint data is suitable for machine learning applications and can in some cases bring an immediate performance gain. The use of joint angles as a human-meaningful representation of kinematic data is particularly promising for applications where interpretability and dialogue with human experts are important, such as many sports and medical applications. To facilitate further research in this direction, we will release a Python package to convert keypoint data into joint angles as outlined in this paper.
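The promised Python package had not been released at the time of this listing, but the geometric core of the idea is easy to illustrate. Below is a minimal sketch, assuming a generic 3D keypoint layout; note that the ISB standard defines full joint coordinate systems, so a single flexion angle like this is only the simplest instance of the angles the paper describes.

```python
import numpy as np

def flexion_angle(a, b, c):
    """Angle (in degrees) at joint b formed by 3D keypoints a-b-c.

    Because it depends only on vectors between keypoints, the angle
    is invariant to camera viewpoint and to the subject's position in
    the scene -- the independence properties highlighted in the abstract.
    """
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical example: elbow flexion from shoulder, elbow, wrist keypoints.
shoulder, elbow, wrist = [0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.3, 0.25, 0.0]
print(flexion_angle(shoulder, elbow, wrist))  # 90.0
```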
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models in both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- GEARS: Local Geometry-aware Hand-object Interaction Synthesis [38.75942505771009]
We introduce a novel joint-centered sensor designed to reason about local object geometry near potential interaction regions.
As an important step towards mitigating the learning complexity, we transform the points from the global frame to the template hand frame and use a shared module to process the sensor features of each individual joint.
This is followed by a perceptual-temporal transformer network aimed at capturing correlation among the joints in different dimensions.
arXiv Detail & Related papers (2024-04-02T09:18:52Z)
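The global-to-local transform mentioned in the GEARS summary above can be sketched generically. This is not the authors' code; the frame construction and names are assumptions, but the rigid change of coordinates is standard:

```python
import numpy as np

def to_local_frame(points, origin, rotation):
    """Express global 3D points in a local frame.

    origin:   3-vector, position of the local frame in global coordinates.
    rotation: 3x3 matrix whose columns are the local axes in global
              coordinates, so p_local = R^T (p_global - origin).
    """
    return (np.asarray(points) - np.asarray(origin)) @ rotation

# Toy frame: translated to (1, 0, 0) and rotated 90 degrees about z.
# After the transform, features no longer depend on where the hand
# sits in the scene, which is what lets a shared per-joint module work.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
pts = np.array([[2.0, 0.0, 0.0]])
print(to_local_frame(pts, origin=[1.0, 0.0, 0.0], rotation=R))
```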
- Disentangled Interaction Representation for One-Stage Human-Object Interaction Detection [70.96299509159981]
Human-Object Interaction (HOI) detection is a core task for human-centric image understanding.
Recent one-stage methods adopt a transformer decoder to collect image-wide cues that are useful for interaction prediction.
Traditional two-stage methods benefit significantly from their ability to compose interaction features in a disentangled and explainable manner.
arXiv Detail & Related papers (2023-12-04T08:02:59Z)
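The disentangled composition credited to two-stage methods above can be made concrete with a toy sketch: human, object and spatial cues are encoded separately and combined explicitly, so each factor stays inspectable. All dimensions and the linear head are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Separately encoded cues (dimensions assumed for illustration).
human_feat = rng.standard_normal(64)    # human appearance
object_feat = rng.standard_normal(64)   # object appearance
spatial_feat = rng.standard_normal(16)  # relative spatial layout

# Two-stage style composition: the factors stay separate until this
# point, so the contribution of each can be probed or explained.
interaction_feat = np.concatenate([human_feat, object_feat, spatial_feat])
W = rng.standard_normal((10, interaction_feat.size))  # 10 verb classes, assumed
scores = W @ interaction_feat
print(scores.argmax())  # predicted interaction class
```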
- How Object Information Improves Skeleton-based Human Action Recognition in Assembly Tasks [12.349172146831506]
We present a novel approach of integrating object information into skeleton-based action recognition.
We enhance two state-of-the-art methods by treating object centers as additional skeleton joints.
Our research sheds light on the benefits of combining skeleton joints with object information for human action recognition in assembly tasks.
arXiv Detail & Related papers (2023-06-09T12:18:14Z)
- Multi-Channel Time-Series Person and Soft-Biometric Identification [65.83256210066787]
This work investigates person and soft-biometrics identification from recordings of humans performing different activities using deep architectures.
We evaluate the method on four datasets of multi-channel time-series human activity recognition (HAR).
Soft-biometric-based attribute representation shows promising results and emphasizes the necessity of larger datasets.
arXiv Detail & Related papers (2023-04-04T07:24:51Z)
- Full-Body Articulated Human-Object Interaction [61.01135739641217]
CHAIRS is a large-scale motion-captured f-AHOI dataset consisting of 16.2 hours of versatile interactions.
CHAIRS provides 3D meshes of both humans and articulated objects during the entire interactive process.
By learning the geometrical relationships in HOI, we devise the first model that leverages human pose estimation.
arXiv Detail & Related papers (2022-12-20T19:50:54Z)
- Skeleton-Based Mutually Assisted Interacted Object Localization and Human Action Recognition [111.87412719773889]
We propose a joint learning framework for "interacted object localization" and "human action recognition" based on skeleton data.
Our method achieves the best or competitive performance with the state-of-the-art methods for human action recognition.
arXiv Detail & Related papers (2021-10-28T10:09:34Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible or invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
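The per-frame visibility indicator in the TRiPOD summary above suggests a masked training loss: joints flagged invisible at a frame simply do not contribute. A minimal sketch under that assumption (shapes and names are illustrative, not the paper's code):

```python
import numpy as np

def masked_l2_loss(pred, target, visible):
    """Mean L2 error over (frame, joint) pairs flagged as visible.

    pred, target: (frames, joints, 3) arrays of 3D joint positions.
    visible:      (frames, joints) boolean indicator, e.g. learned
                  per frame as described in the summary above.
    """
    err = np.linalg.norm(pred - target, axis=-1)  # (frames, joints)
    return err[visible].mean()

pred = np.zeros((4, 14, 3))
target = np.ones((4, 14, 3))
visible = np.ones((4, 14), dtype=bool)
visible[0, :7] = False  # half the joints occluded in the first frame
print(masked_l2_loss(pred, target, visible))  # ~1.732, i.e. sqrt(3)
```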
- Human Interaction Recognition Framework based on Interacting Body Part Attention [24.913372626903648]
We propose a novel framework that simultaneously considers both implicit and explicit representations of human interactions.
The proposed method captures the subtle difference between different interactions using interacting body part attention.
We validate the effectiveness of the proposed method using four widely used public datasets.
arXiv Detail & Related papers (2021-01-22T06:52:42Z)
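"Interacting body part attention" can be illustrated generically as a softmax weighting over per-part features, so the parts that drive an interaction dominate the pooled representation. A toy sketch; the body partition, feature sizes and scoring vector are assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
parts = ["head", "hands", "torso", "legs"]  # assumed partition
part_feats = rng.standard_normal((4, 32))   # one feature vector per part
w = rng.standard_normal(32)                 # stand-in for a learned scorer

# Score each part, normalise, and pool: parts most relevant to the
# interaction (e.g. hands in "shaking hands") get the largest weights,
# which is also what makes the mechanism inspectable.
attn = softmax(part_feats @ w)
pooled = attn @ part_feats
for part, weight in zip(parts, attn):
    print(f"{part}: {weight:.2f}")
```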
- Simultaneous Learning from Human Pose and Object Cues for Real-Time Activity Recognition [11.290467061493189]
We propose a novel approach to real-time human activity recognition, through simultaneously learning from observations of both human poses and objects involved in the human activity.
Our method outperforms previous methods and obtains real-time performance for human activity recognition with a processing speed of 104 Hz.
arXiv Detail & Related papers (2020-03-26T22:04:37Z)
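Simultaneous learning from pose and object cues, as in the last entry, is often realised as an early fusion of the two streams before classification. A minimal sketch under that assumption (feature sizes, class count and the linear classifier are illustrative, not the authors' 104 Hz pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

pose_feat = rng.standard_normal(50)    # e.g. flattened joint coordinates
object_feat = rng.standard_normal(20)  # e.g. embedded object detections

# Early fusion: one classifier sees both cue streams at once, so
# activities told apart mainly by the object ("drinking" vs. merely
# raising a hand) stay separable even when the poses look alike.
x = np.concatenate([pose_feat, object_feat])
W = rng.standard_normal((8, x.size))   # 8 activity classes, assumed
logits = W @ x
logits -= logits.max()                 # numerically stable softmax
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.argmax(), probs.max())
```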
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.