Emotion Recognition in Contemporary Dance Performances Using Laban Movement Analysis
- URL: http://arxiv.org/abs/2504.21154v1
- Date: Tue, 29 Apr 2025 20:17:27 GMT
- Title: Emotion Recognition in Contemporary Dance Performances Using Laban Movement Analysis
- Authors: Muhammad Turab, Philippe Colantoni, Damien Muselet, Alain Tremeau
- Abstract summary: Our approach extracts expressive characteristics from 3D keypoint data of professional dancers performing contemporary dance under various emotional states. We train multiple classifiers, including Random Forests and Support Vector Machines. Overall, our study improves emotion recognition in contemporary dance and offers promising applications in performance analysis, dance training, and human-computer interaction, with a best accuracy of 96.85%.
- Score: 0.562479170374811
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel framework for emotion recognition in contemporary dance by improving existing Laban Movement Analysis (LMA) feature descriptors and introducing robust, novel descriptors that capture both quantitative and qualitative aspects of movement. Our approach extracts expressive characteristics from 3D keypoint data of professional dancers performing contemporary dance under various emotional states, and trains multiple classifiers, including Random Forests and Support Vector Machines. Additionally, we provide an in-depth explanation of the features and their impact on model predictions using explainable machine learning methods. Overall, our study improves emotion recognition in contemporary dance and offers promising applications in performance analysis, dance training, and human-computer interaction, with a best accuracy of 96.85%.
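The pipeline the abstract describes (3D keypoints → LMA-style descriptors → classifier) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `lma_descriptors` helper, the joint layout, and the specific descriptor choices (mean speed, spatial extent, jerkiness) are hypothetical stand-ins for the paper's Effort/Shape-inspired features.

```python
# Illustrative sketch (not the authors' code): computing a few LMA-inspired
# descriptors from a 3D keypoint sequence. Joint count, frame rate, and
# descriptor choices are assumptions for the example.
import numpy as np

def lma_descriptors(keypoints, fps=30.0):
    """keypoints: array of shape (frames, joints, 3)."""
    vel = np.diff(keypoints, axis=0) * fps    # per-joint velocity
    acc = np.diff(vel, axis=0) * fps          # per-joint acceleration
    speed = np.linalg.norm(vel, axis=-1)      # (frames-1, joints)

    # "Effort"-like cue: how vigorous the movement is on average.
    mean_speed = speed.mean()
    # "Shape"-like cue: average spatial extent of the body's bounding box.
    extent = (keypoints.max(axis=1) - keypoints.min(axis=1)).mean()
    # Jerkiness: average magnitude of acceleration change over time.
    jerk = np.linalg.norm(np.diff(acc, axis=0), axis=-1).mean() * fps
    return np.array([mean_speed, extent, jerk])

# Synthetic 2-second clip: 60 frames, 17 joints, smooth sinusoidal motion.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 60)[:, None, None]
clip = np.sin(2 * np.pi * t) * rng.uniform(0.1, 1.0, size=(1, 17, 3))
features = lma_descriptors(clip)
print(features.shape)  # one fixed-length vector per clip
```

Each clip is thereby reduced to a fixed-length vector, which is what lets standard classifiers such as Random Forests or SVMs be trained on the result.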
Related papers
- AfroBeats Dance Movement Analysis Using Computer Vision: A Proof-of-Concept Framework Combining YOLO and Segment Anything Model [0.0]
We propose a proof-of-concept framework that integrates YOLOv8 and YOLOv11 for dancer detection with the Segment Anything Model (SAM) for precise segmentation. Our approach identifies dancers within video frames, counts discrete dance steps, calculates spatial coverage patterns, and measures rhythm consistency across performance sequences.
arXiv Detail & Related papers (2025-12-03T07:06:06Z)
- Dance Style Classification using Laban-Inspired and Frequency-Domain Motion Features [0.13048920509133805]
We present a framework for classifying dance styles based on pose estimates extracted from videos. The extracted features capture local joint dynamics such as velocity, acceleration, and angular movement of the upper body. To further encode rhythmic and periodic aspects of movement, we integrate Fast Fourier Transform features that characterize movement patterns in the frequency domain.
arXiv Detail & Related papers (2025-11-25T16:33:45Z)
- Reimagining Dance: Real-time Music Co-creation between Dancers and AI [5.708964539699851]
We present a system that enables dancers to dynamically shape musical environments through their movements. Our multi-modal architecture creates a coherent musical composition by intelligently combining pre-recorded musical clips in response to dance movements.
arXiv Detail & Related papers (2025-06-13T17:56:53Z)
- Dance Style Recognition Using Laban Movement Analysis [0.562479170374811]
This study focuses on dance style recognition using features extracted with Laban Movement Analysis. We introduce a novel pipeline which combines 3D pose estimation, 3D human mesh reconstruction, and floor-aware body modeling to effectively extract LMA features. Our proposed method achieves a best classification accuracy of 99.18%, which shows that the addition of temporal context significantly improves dance style recognition performance.
arXiv Detail & Related papers (2025-04-29T20:35:01Z)
- Align Your Rhythm: Generating Highly Aligned Dance Poses with Gating-Enhanced Rhythm-Aware Feature Representation [22.729568599120846]
We propose Danceba, a novel framework that leverages a gating mechanism to enhance rhythm-aware feature representation. It introduces Phase-Based Rhythm Extraction (PRE) to precisely extract rhythmic information from musical phase data, Temporal-Gated Causal Attention (TGCA) to focus on global rhythmic features, and a Parallel Mamba Motion Modeling (PMMM) architecture to separately model upper- and lower-body motions.
arXiv Detail & Related papers (2025-03-21T17:42:50Z)
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's and the follower's movements.
arXiv Detail & Related papers (2024-03-27T17:57:02Z)
- Component attention network for multimodal dance improvisation recognition [4.706373333495905]
This paper explores the application and performance of multimodal fusion methods for human motion recognition in the context of dance improvisation.
We propose an attention-based model, component attention network (CANet), for multimodal fusion on three levels: 1) feature fusion with CANet, 2) model fusion with CANet and graph convolutional network (GCN), and 3) late fusion with a voting strategy.
arXiv Detail & Related papers (2023-08-24T15:04:30Z)
- TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration [75.37311932218773]
We propose a novel task for generating 3D dance movements that simultaneously incorporate both text and music modalities.
Our approach can generate realistic and coherent dance movements conditioned on both text and music while maintaining comparable performance with the two single modalities.
arXiv Detail & Related papers (2023-04-05T12:58:33Z)
- EDGE: Editable Dance Generation From Music [15.658163494375533]
We introduce Editable Dance GEneration (EDGE), a state-of-the-art method for editable dance generation.
It is capable of creating realistic, physically-plausible dances while remaining faithful to the input music.
arXiv Detail & Related papers (2022-11-19T10:41:38Z)
- KP-RNN: A Deep Learning Pipeline for Human Motion Prediction and Synthesis of Performance Art [0.0]
We offer a new approach for predicting human motion, KP-RNN, a neural network which can integrate easily with existing image processing and generation pipelines.
We utilize a new human motion dataset of performance art, Take The Lead, as well as the motion generation pipeline, the Everybody Dance Now system, to demonstrate the effectiveness of KP-RNN's motion predictions.
arXiv Detail & Related papers (2022-10-09T22:46:55Z)
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z)
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory [92.81383016482813]
We propose a novel music-to-dance framework, Bailando, for driving 3D characters to dance following a piece of music.
We introduce an actor-critic Generative Pre-trained Transformer (GPT) that composes units to a fluent dance coherent to the music.
Our proposed framework achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-03-24T13:06:43Z)
- Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographs from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music.
arXiv Detail & Related papers (2021-12-03T09:37:26Z)
- Learning to dance: A graph convolutional adversarial network to generate realistic dance motions from audio [7.612064511889756]
Moving naturally to music, i.e., dancing, is one of the more complex motions that humans often perform effortlessly.
In this paper, we design a novel method based on graph convolutional networks to tackle the problem of automatic dance generation from audio information.
Our method uses an adversarial learning scheme conditioned on the input music audios to create natural motions preserving the key movements of different music styles.
arXiv Detail & Related papers (2020-11-25T19:53:53Z)
- Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion data set is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.