Dance Style Classification using Laban-Inspired and Frequency-Domain Motion Features
- URL: http://arxiv.org/abs/2511.20469v1
- Date: Tue, 25 Nov 2025 16:33:45 GMT
- Title: Dance Style Classification using Laban-Inspired and Frequency-Domain Motion Features
- Authors: Ben Hamscher, Arnold Brosch, Nicolas Binninger, Maksymilian Jan Dejna, Kira Maag
- Abstract summary: We present a framework for classifying dance styles based on pose estimates extracted from videos. These features capture local joint dynamics such as velocity, acceleration, and angular movement of the upper body. To further encode rhythmic and periodic aspects of movement, we integrate Fast Fourier Transform features that characterize movement patterns in the frequency domain.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dance is an essential component of human culture and serves as a tool for conveying emotions and telling stories. Identifying and distinguishing dance genres based on motion data is a complex problem in human activity recognition, as many styles share similar poses, gestures, and temporal motion patterns. This work presents a lightweight framework for classifying dance styles that determines motion characteristics based on pose estimates extracted from videos. We propose temporal-spatial descriptors inspired by Laban Movement Analysis. These features capture local joint dynamics such as velocity, acceleration, and angular movement of the upper body, enabling a structured representation of spatial coordination. To further encode rhythmic and periodic aspects of movement, we integrate Fast Fourier Transform features that characterize movement patterns in the frequency domain. The proposed approach achieves robust classification of different dance styles with low computational effort, as complex model architectures are not required, and shows that interpretable motion representations can effectively capture stylistic nuances.
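The abstract's pipeline (finite-difference joint dynamics inspired by Laban Movement Analysis, plus FFT features for rhythm) can be sketched in a few lines of numpy. This is a minimal illustration under assumptions of our own, not the paper's implementation: the function name `motion_features`, the pose array layout `(frames, joints, 2)`, and the choice of summary statistics are hypothetical.

```python
import numpy as np

def motion_features(poses, fps=30.0, n_freq_bins=8):
    """Hypothetical sketch of Laban-inspired and frequency-domain descriptors.

    poses: array of shape (T, J, 2) -- T frames of 2D estimates for J
           upper-body joints (e.g. from an off-the-shelf pose estimator).
    Returns a single 1D feature vector for the whole clip.
    """
    # Local joint dynamics: velocity and acceleration via finite differences.
    vel = np.diff(poses, axis=0) * fps                  # (T-1, J, 2)
    acc = np.diff(vel, axis=0) * fps                    # (T-2, J, 2)
    speed = np.linalg.norm(vel, axis=-1)                # (T-1, J)
    acc_mag = np.linalg.norm(acc, axis=-1)              # (T-2, J)

    # Angular movement: change in each joint's direction of travel per frame.
    ang = np.arctan2(vel[..., 1], vel[..., 0])          # (T-1, J)
    ang_vel = np.abs(np.diff(np.unwrap(ang, axis=0), axis=0))  # (T-2, J)

    # Temporal summary statistics give a structured per-joint representation.
    stats = [f(x, axis=0) for x in (speed, acc_mag, ang_vel)
             for f in (np.mean, np.std)]

    # Frequency domain: FFT magnitudes of the (mean-removed) joint speed
    # signals capture rhythmic and periodic aspects of the movement.
    spec = np.abs(np.fft.rfft(speed - speed.mean(axis=0), axis=0))
    fft_feat = spec[1:1 + n_freq_bins]                  # skip the DC bin

    return np.concatenate([s.ravel() for s in stats] + [fft_feat.ravel()])
```

A feature vector like this could then be fed to any lightweight classifier (the paper's point being that no complex model architecture is required once the motion representation is interpretable).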
Related papers
- MimicParts: Part-aware Style Injection for Speech-Driven 3D Motion Generation [30.215940521087642]
MimicParts is a novel framework designed to enhance stylized motion generation through part-aware style injection and a part-aware denoising network. It divides the body into different regions to encode localized motion styles, enabling the model to capture fine-grained regional differences. Our method outperforms existing methods, producing natural and expressive 3D human motion sequences.
arXiv Detail & Related papers (2025-10-15T06:53:15Z)
- PAMD: Plausibility-Aware Motion Diffusion Model for Long Dance Generation [51.2555550979386]
Plausibility-Aware Motion Diffusion (PAMD) is a framework for generating dances that are both musically aligned and physically realistic. To provide more effective guidance during generation, we incorporate Prior Motion Guidance (PMG). Experiments show that PAMD significantly improves musical alignment and enhances the physical plausibility of generated motions.
arXiv Detail & Related papers (2025-05-26T14:44:09Z)
- Dance Style Recognition Using Laban Movement Analysis [0.562479170374811]
This study focuses on dance style recognition using features extracted with Laban Movement Analysis. We introduce a novel pipeline that combines 3D pose estimation, 3D human mesh reconstruction, and floor-aware body modeling to effectively extract LMA features. Our proposed method achieves a classification accuracy of up to 99.18%, showing that the addition of temporal context significantly improves dance style recognition performance.
arXiv Detail & Related papers (2025-04-29T20:35:01Z)
- Align Your Rhythm: Generating Highly Aligned Dance Poses with Gating-Enhanced Rhythm-Aware Feature Representation [22.729568599120846]
We propose Danceba, a novel framework that leverages a gating mechanism to enhance rhythm-aware feature representation. It comprises Phase-Based Rhythm Extraction (PRE) to precisely extract rhythmic information from musical phase data, Temporal-Gated Causal Attention (TGCA) to focus on global rhythmic features, and a Parallel Mamba Motion Modeling (PMMM) architecture to separately model upper- and lower-body motions.
arXiv Detail & Related papers (2025-03-21T17:42:50Z)
- Towards Synthesized and Editable Motion In-Betweening Through Part-Wise Phase Representation [29.62788252114547]
Styled motion in-betweening is crucial for computer animation and gaming. We propose a novel framework that models motion styles at the body-part level. Our approach enables more nuanced and expressive animations.
arXiv Detail & Related papers (2025-03-11T08:44:27Z)
- InterDance: Reactive 3D Dance Generation with Realistic Duet Interactions [67.37790144477503]
We propose InterDance, a large-scale duet dance dataset that significantly enhances motion quality, data scale, and the variety of dance genres. We introduce a diffusion-based framework with an interaction refinement guidance strategy to optimize the realism of interactions progressively.
arXiv Detail & Related papers (2024-12-22T11:53:51Z)
- MotionCrafter: One-Shot Motion Customization of Diffusion Models [66.44642854791807]
We introduce MotionCrafter, a one-shot instance-guided motion customization method.
MotionCrafter employs a parallel spatial-temporal architecture that injects the reference motion into the temporal component of the base model.
During training, a frozen base model provides appearance normalization, effectively separating appearance from motion.
arXiv Detail & Related papers (2023-12-08T16:31:04Z)
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z)
- Rhythm is a Dancer: Music-Driven Motion Synthesis with Global Structure [47.09425316677689]
We present a music-driven motion synthesis framework that generates long-term sequences of human motions synchronized with the input beats.
Our framework enables generation of diverse motions that are controlled by the content of the music, and not only by the beat.
arXiv Detail & Related papers (2021-11-23T21:26:31Z)
- Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis [56.550999933048075]
We propose a video based synthesis method that tackles challenges and demonstrates high quality results for in-the-wild videos.
We introduce a novel motion signature that is used to modulate the generator weights to capture dynamic appearance changes.
We evaluate our method on a set of challenging videos and show that our approach achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-11-10T20:18:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.