Dance Style Recognition Using Laban Movement Analysis
- URL: http://arxiv.org/abs/2504.21166v1
- Date: Tue, 29 Apr 2025 20:35:01 GMT
- Title: Dance Style Recognition Using Laban Movement Analysis
- Authors: Muhammad Turab, Philippe Colantoni, Damien Muselet, Alain Tremeau
- Abstract summary: This study focuses on dance style recognition using features extracted with Laban Movement Analysis (LMA). We introduce a novel pipeline that combines 3D pose estimation, 3D human mesh reconstruction, and floor-aware body modeling to effectively extract LMA features. Our proposed method achieves a classification accuracy of up to 99.18%, which shows that the addition of temporal context significantly improves dance style recognition performance.
- Score: 0.562479170374811
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growing interest in automated movement analysis has presented new challenges in the recognition of complex human activities, including dance. This study focuses on dance style recognition using features extracted with Laban Movement Analysis (LMA). Previous studies of dance style recognition often focus on cross-frame movement analysis, which limits the ability to capture temporal context and dynamic transitions between movements. This gap highlights the need for a method that can add temporal context to LMA features. To this end, we introduce a novel pipeline that combines 3D pose estimation, 3D human mesh reconstruction, and floor-aware body modeling to effectively extract LMA features. To address the temporal limitation, we propose a sliding window approach that captures the evolution of movement across time in the features. These features are then used to train various machine learning classifiers, and explainable AI methods are applied to evaluate the contribution of each feature to classification performance. Our proposed method achieves a classification accuracy of up to 99.18%, which shows that the addition of temporal context significantly improves dance style recognition performance.
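The paper does not include its implementation here; as a rough illustration of the sliding-window idea described in the abstract, the sketch below aggregates per-frame LMA feature vectors into per-window statistics. The window length, stride, and choice of mean/standard-deviation statistics are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def sliding_window_features(frame_features, window, stride):
    """Aggregate per-frame LMA features over sliding windows.

    frame_features: (T, D) array, one D-dimensional LMA feature vector per frame.
    Returns (num_windows, 2*D): per-window mean and std, a simple way to
    encode how features evolve across time instead of per-frame snapshots.
    """
    T, D = frame_features.shape
    out = []
    for start in range(0, T - window + 1, stride):
        chunk = frame_features[start:start + window]
        out.append(np.concatenate([chunk.mean(axis=0), chunk.std(axis=0)]))
    return np.stack(out)

# 120 frames of hypothetical 10-dimensional LMA features
feats = np.random.default_rng(0).normal(size=(120, 10))
windows = sliding_window_features(feats, window=30, stride=15)
print(windows.shape)  # (7, 20)
```

Each window's feature vector would then be one training sample for the downstream classifiers.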
Related papers
- Dance Style Classification using Laban-Inspired and Frequency-Domain Motion Features [0.13048920509133805]
We present a framework for classifying dance styles based on pose estimates extracted from videos. These features capture local joint dynamics such as velocity, acceleration, and angular movement of the upper body. To further encode rhythmic and periodic aspects of movement, we integrate Fast Fourier Transform features that characterize movement patterns in the frequency domain.
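The Fast Fourier Transform features mentioned above can be sketched minimally as follows (not the authors' code; the joint signal, number of bins, and DC removal are illustrative assumptions):

```python
import numpy as np

def fft_motion_features(joint_signal, n_bins=8):
    """Magnitudes of the first n_bins non-DC frequency components of a
    1-D joint trajectory: a compact descriptor of movement rhythm."""
    spectrum = np.abs(np.fft.rfft(joint_signal - joint_signal.mean()))
    return spectrum[1:1 + n_bins]

# Hypothetical wrist height oscillating with a 20-frame period over 100 frames
t = np.arange(100)
signal = np.sin(2 * np.pi * t / 20)
feats = fft_motion_features(signal)
print(np.argmax(feats))  # index 4, i.e. frequency bin 5 (5 cycles per clip)
```

A strongly periodic movement concentrates energy in a few bins, while free-form movement spreads it out, which is what makes such features useful for style discrimination.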
arXiv Detail & Related papers (2025-11-25T16:33:45Z) - Emotion Recognition in Contemporary Dance Performances Using Laban Movement Analysis [0.562479170374811]
Our approach extracts expressive characteristics from 3D keypoint data of professional dancers performing contemporary dance under various emotional states. We train multiple classifiers, including Random Forests and Support Vector Machines. Overall, our study improves emotion recognition in contemporary dance and offers promising applications in performance analysis, dance training, and human-computer interaction, with accuracy of up to 96.85%.
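As a generic sketch of the training setup this abstract describes (not the authors' code; the feature dimensionality, the four-class label set, and the cross-validation scheme are assumptions for illustration), using scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # hypothetical extracted feature vectors
y = rng.integers(0, 4, size=200)  # hypothetical emotion labels (4 classes)

# Cross-validated accuracy for each classifier family the abstract mentions
all_scores = {}
for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
            SVC(kernel="rbf")):
    scores = cross_val_score(clf, X, y, cv=5)
    all_scores[type(clf).__name__] = scores.mean()
    print(type(clf).__name__, round(all_scores[type(clf).__name__], 3))
```

On random features as here the scores hover near chance; the reported 96.85% would come from the real extracted features.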
arXiv Detail & Related papers (2025-04-29T20:17:27Z) - Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - Learning Scene Flow With Skeleton Guidance For 3D Action Recognition [1.5954459915735735]
This work demonstrates the use of 3D flow sequences by a deep spatio-temporal model for 3D action recognition.
An extended deep skeleton is also introduced to learn the most discriminant action motion dynamics.
A late fusion scheme is adopted between the two models for learning the high level cross-modal correlations.
arXiv Detail & Related papers (2023-06-23T04:14:25Z) - Gait Recognition in the Wild with Multi-hop Temporal Switch [81.35245014397759]
Gait recognition in the wild is a more practical problem that has attracted the attention of the multimedia and computer vision communities.
This paper presents a novel multi-hop temporal switch method to achieve effective temporal modeling of gait patterns in real-world scenes.
arXiv Detail & Related papers (2022-09-01T10:46:09Z) - Motion Gait: Gait Recognition via Motion Excitation [5.559482051571756]
We propose a Motion Excitation Module (MEM) to guide spatio-temporal features to focus on human parts with large dynamic changes.
MEM learns the difference information between frames and intervals so as to obtain a representation of temporal motion changes.
We also present the Fine Feature Extractor (EFF), which independently learns from the spatio-temporal representations of the horizontal body parts of individuals.
arXiv Detail & Related papers (2022-06-22T13:47:14Z) - Efficient Modelling Across Time of Human Actions and Interactions [92.39082696657874]
We argue that current fixed-sized spatio-temporal kernels in 3D convolutional neural networks (CNNs) can be improved to better deal with temporal variations in the input.
We study how we can better handle variations between classes of actions by enhancing their feature differences over different layers of the architecture.
The proposed approaches are evaluated on several benchmark action recognition datasets and show competitive results.
arXiv Detail & Related papers (2021-10-05T15:39:11Z) - Multi-level Motion Attention for Human Motion Prediction [132.29963836262394]
We study the use of different types of attention, computed at joint, body part, and full pose levels.
Our experiments on Human3.6M, AMASS and 3DPW validate the benefits of our approach for both periodical and non-periodical actions.
arXiv Detail & Related papers (2021-06-17T08:08:11Z) - Multi-Temporal Convolutions for Human Action Recognition in Videos [83.43682368129072]
We present a novel spatio-temporal convolution block that is capable of extracting features at multiple temporal resolutions.
The proposed blocks are lightweight and can be integrated into any 3D-CNN architecture.
arXiv Detail & Related papers (2020-11-08T10:40:26Z) - Learn to cycle: Time-consistent feature discovery for action recognition [83.43682368129072]
Generalizing over temporal variations is a prerequisite for effective action recognition in videos.
We introduce Squeeze and Recursion Temporal Gates (SRTG), an approach that favors temporal activations with potential variations.
We show consistent improvement when using SRTG blocks, with only a minimal increase in the number of GFLOPs.
arXiv Detail & Related papers (2020-06-15T09:36:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.