Ballroom Dance Movement Recognition Using a Smart Watch
- URL: http://arxiv.org/abs/2008.10122v2
- Date: Fri, 4 Sep 2020 05:25:56 GMT
- Title: Ballroom Dance Movement Recognition Using a Smart Watch
- Authors: Varun Badrinath Krishna
- Abstract summary: We present a whole body movement detection study using a single smart watch in the context of ballroom dancing.
Deep learning representations are used to classify well-defined sequences of movements.
The classification accuracy of 85.95% was improved to 92.31% by modeling a dance as a first-order Markov chain of figures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inertial Measurement Unit (IMU) sensors are being increasingly used to detect
human gestures and movements. Using a single IMU sensor, whole body movement
recognition remains a hard problem because movements may not be adequately
captured by the sensor. In this paper, we present a whole body movement
detection study using a single smart watch in the context of ballroom dancing.
Deep learning representations are used to classify well-defined sequences of
movements, called "figures". Those representations are found to outperform
ensembles of random forests and hidden Markov models. The classification
accuracy of 85.95% was improved to 92.31% by modeling a dance as a
first-order Markov chain of figures and correcting estimates of the immediately
preceding figure.
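The Markov-chain correction can be illustrated with a small sketch. This is not the authors' code: the transition matrix, figure labels, and the Viterbi-style decode below are illustrative assumptions about how per-figure classifier scores might be combined with first-order transition statistics between figures.

```python
import numpy as np

def decode_figures(emission_probs, transition, prior):
    """Viterbi-style decode: choose the figure sequence that maximizes
    per-segment classifier scores combined with first-order transition
    probabilities between figures (a sketch of the smoothing idea)."""
    T, K = emission_probs.shape
    log_e = np.log(emission_probs + 1e-12)
    log_t = np.log(transition + 1e-12)
    score = np.log(prior + 1e-12) + log_e[0]   # best log-score ending in each figure
    back = np.zeros((T, K), dtype=int)         # backpointers for path recovery
    for t in range(1, T):
        cand = score[:, None] + log_t          # K x K: previous figure -> current figure
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_e[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: the classifier weakly mislabels the middle segment,
# but strong self-transitions correct it.
emission = np.array([[0.9, 0.1], [0.45, 0.55], [0.9, 0.1]])
transition = np.array([[0.9, 0.1], [0.1, 0.9]])
prior = np.array([0.5, 0.5])
print(decode_figures(emission, transition, prior))  # → [0, 0, 0]
```

Taking the per-segment argmax alone would flip the middle figure; the transition prior overrides that low-confidence estimate, which mirrors the paper's reported accuracy gain from modeling figure order.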
Related papers
- Skeleton2Stage: Reward-Guided Fine-Tuning for Physically Plausible Dance Generation [49.50118203284611]
Motions that look plausible as joint trajectories often exhibit body self-penetration and Foot-Ground Contact (FGC) anomalies when visualized with a human body mesh.
We address this skeleton-to-mesh gap by deriving physics-based rewards from the body mesh.
Our method can significantly improve the physical plausibility of generated motions, yielding more realistic and aesthetically pleasing dances.
arXiv Detail & Related papers (2026-02-14T13:48:13Z)
- A Comparative Study of EMG- and IMU-based Gesture Recognition at the Wrist and Forearm [3.990794855710089]
IMU signals contain sufficient information to serve as the sole input for static gesture recognition.
Tendon-induced micro-movement captured by IMUs is a major contributor to static gesture recognition.
arXiv Detail & Related papers (2025-12-08T19:36:10Z)
- AfroBeats Dance Movement Analysis Using Computer Vision: A Proof-of-Concept Framework Combining YOLO and Segment Anything Model [0.0]
We propose a proof-of-concept framework that integrates YOLOv8 and v11 for dancer detection with the Segment Anything Model (SAM) for precise segmentation.
Our approach identifies dancers within video frames, counts discrete dance steps, calculates spatial coverage patterns, and measures rhythm consistency across performance sequences.
arXiv Detail & Related papers (2025-12-03T07:06:06Z)
- BaroPoser: Real-time Human Motion Tracking from IMUs and Barometers in Everyday Devices [12.374794959250828]
We present BaroPoser, the first method that combines IMU and barometric data recorded by a smartphone and a smartwatch to estimate human pose and global translation in real time.
By leveraging barometric readings, we estimate sensor height changes, which provide valuable cues for both improving the accuracy of human pose estimation and predicting global translation on non-flat terrain.
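The height cue a barometer provides can be sketched with the standard ISA barometric formula. BaroPoser itself learns its mapping from data, so this closed-form conversion is only an illustrative assumption about how pressure relates to sensor height.

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Standard ISA barometric formula: pressure (hPa) to altitude (m)."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def height_change(p_prev_hpa, p_curr_hpa):
    """Relative sensor height change between two consecutive readings;
    the choice of reference pressure cancels to first order."""
    return pressure_to_altitude(p_curr_hpa) - pressure_to_altitude(p_prev_hpa)

# Near sea level, a drop of ~0.12 hPa corresponds to roughly +1 m of height.
print(round(height_change(1013.25, 1013.13), 2))
```

The relative form matters: absolute barometric altitude drifts with weather, but short-horizon pressure differences still track device height changes, which is the cue the paper exploits.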
arXiv Detail & Related papers (2025-08-05T10:46:59Z)
- Dance Style Recognition Using Laban Movement Analysis [0.562479170374811]
This study focuses on dance style recognition using features extracted with Laban Movement Analysis (LMA).
We introduce a novel pipeline that combines 3D pose estimation, 3D human mesh reconstruction, and floor-aware body modeling to effectively extract LMA features.
Our proposed method achieves a best classification accuracy of 99.18%, showing that the addition of temporal context significantly improves dance style recognition performance.
arXiv Detail & Related papers (2025-04-29T20:35:01Z)
- Emotion Recognition in Contemporary Dance Performances Using Laban Movement Analysis [0.562479170374811]
Our approach extracts expressive characteristics from 3D keypoints data of professional dancers performing contemporary dance under various emotional states.
We train multiple classifiers, including Random Forests and Support Vector Machines.
Overall, our study improves emotion recognition in contemporary dance and offers promising applications in performance analysis, dance training, and human-computer interaction, with a best accuracy of 96.85%.
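The classifier stage can be sketched minimally as below. The synthetic feature matrix is a stand-in for the LMA-derived expressive features (the feature extraction itself is the paper's contribution and is not reproduced here); labels, dimensions, and hyperparameters are all illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in for per-clip LMA feature vectors (e.g. effort
# and shape statistics); real features come from 3D keypoint data.
X = rng.normal(size=(120, 12))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy two-emotion labels

for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
            SVC(kernel="rbf")):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, round(scores.mean(), 3))
```

Cross-validated accuracy is the natural metric here since emotion-labeled dance clips are scarce; both classifier families in the paper slot into this loop unchanged.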
arXiv Detail & Related papers (2025-04-29T20:17:27Z)
- H-MoRe: Learning Human-centric Motion Representation for Action Analysis [27.966744383799835]
H-MoRe is a novel pipeline for learning precise human-centric motion representation.
Unlike previous methods, H-MoRe learns directly from real-world scenarios in a self-supervised manner.
H-MoRe offers refined insights into human motion, which can be integrated seamlessly into action-related applications.
arXiv Detail & Related papers (2025-04-14T19:56:52Z)
- Align Your Rhythm: Generating Highly Aligned Dance Poses with Gating-Enhanced Rhythm-Aware Feature Representation [22.729568599120846]
We propose Danceba, a novel framework that leverages a gating mechanism to enhance rhythm-aware feature representation.
It introduces Phase-Based Rhythm Extraction (PRE) to precisely extract rhythmic information from musical phase data,
Temporal-Gated Causal Attention (TGCA) to focus on global rhythmic features,
and a Parallel Mamba Motion Modeling (PMMM) architecture to separately model upper- and lower-body motions.
arXiv Detail & Related papers (2025-03-21T17:42:50Z)
- No Identity, no problem: Motion through detection for people tracking [48.708733485434394]
We propose exploiting motion cues while providing supervision only for the detections.
Our algorithm predicts detection heatmaps at two different times, along with a 2D motion estimate between the two images.
We show that our approach delivers state-of-the-art results for single- and multi-view multi-target tracking on the MOT17 and WILDTRACK datasets.
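The two-time heatmap idea can be sketched as follows. A single integer shift stands in for the dense 2D motion estimate the method actually predicts, and the function name is illustrative, not from the paper's code.

```python
import numpy as np

def warp_heatmap(heatmap, flow):
    """Shift a detection heatmap by an integer 2D motion estimate
    (dy, dx), linking detections at time t to predictions at t+1.
    np.roll wraps at the borders; a real implementation would pad
    or sample a dense per-pixel flow field instead."""
    dy, dx = flow
    return np.roll(np.roll(heatmap, dy, axis=0), dx, axis=1)

# Toy example: a detection peak at (2, 3) moving by (dy=1, dx=2)
h_t0 = np.zeros((8, 8))
h_t0[2, 3] = 1.0
h_t1_pred = warp_heatmap(h_t0, (1, 2))
print(np.unravel_index(h_t1_pred.argmax(), h_t1_pred.shape))  # → (3, 5)
```

Comparing the warped prediction against the actual heatmap at the second time gives a training signal for motion without any identity labels, which is the point of the detection-only supervision.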
arXiv Detail & Related papers (2024-11-25T15:13:17Z)
- Flow Snapshot Neurons in Action: Deep Neural Networks Generalize to Biological Motion Perception [6.359236783105098]
Biological motion perception (BMP) refers to humans' ability to perceive and recognize the actions of living beings solely from their motion patterns.
We propose the Motion Perceiver (MP), which relies on patch-level optical flows from video clips as inputs.
MP outperforms all existing AI models with a maximum improvement of 29% in top-1 action recognition accuracy.
arXiv Detail & Related papers (2024-05-26T09:11:46Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Utilizing Task-Generic Motion Prior to Recover Full-Body Motion from Very Sparse Signals [3.8079353598215757]
We propose a method that utilizes information from a neural motion prior to improve the accuracy of the user's reconstructed motions.
This is based on the premise that the ultimate goal of pose reconstruction is to reconstruct the motion, which is a series of poses.
arXiv Detail & Related papers (2023-08-30T08:21:52Z)
- Priority-Centric Human Motion Generation in Discrete Latent Space [59.401128190423535]
We introduce a Priority-Centric Motion Discrete Diffusion Model (M2DM) for text-to-motion generation.
M2DM incorporates a global self-attention mechanism and a regularization term to counteract code collapse.
We also present a motion discrete diffusion model that employs an innovative noise schedule, determined by the significance of each motion token.
arXiv Detail & Related papers (2023-08-28T10:40:16Z)
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z)
- Motion Gait: Gait Recognition via Motion Excitation [5.559482051571756]
We propose a Motion Excitation Module (MEM) to guide spatial-temporal features to focus on human parts with large dynamic changes.
MEM learns the difference information between frames and intervals, so as to obtain a representation of temporal motion changes.
We also present the Fine Feature Extractor (FFE), which independently learns spatial-temporal representations for each horizontal body part of individuals.
arXiv Detail & Related papers (2022-06-22T13:47:14Z)
- Transformer Inertial Poser: Attention-based Real-time Human Motion Reconstruction from Sparse IMUs [79.72586714047199]
We propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real-time.
Our method achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
arXiv Detail & Related papers (2022-03-29T16:24:52Z)
- Unsupervised 3D Pose Estimation for Hierarchical Dance Video Recognition [13.289339907084424]
We propose a Hierarchical Dance Video Recognition framework (HDVR).
HDVR estimates 2D pose sequences, tracks dancers, and then simultaneously estimates corresponding 3D poses and 3D-to-2D imaging parameters.
From the estimated 3D pose sequence, HDVR extracts body part movements and, from them, the dance genre.
arXiv Detail & Related papers (2021-09-19T16:59:37Z)
- Learning to dance: A graph convolutional adversarial network to generate realistic dance motions from audio [7.612064511889756]
Learning to move naturally from music, i.e., to dance, is one of the more complex motions humans often perform effortlessly.
In this paper, we design a novel method based on graph convolutional networks to tackle the problem of automatic dance generation from audio information.
Our method uses an adversarial learning scheme conditioned on the input music audio to create natural motions preserving the key movements of different music styles.
arXiv Detail & Related papers (2020-11-25T19:53:53Z)
- Fusing Motion Patterns and Key Visual Information for Semantic Event Recognition in Basketball Videos [87.29451470527353]
We propose a scheme to fuse global and local motion patterns (MPs) and key visual information (KVI) for semantic event recognition in basketball videos.
An algorithm is proposed to estimate the global motions from the mixed motions based on the intrinsic property of camera adjustments.
A two-stream 3D CNN framework is utilized for group activity recognition over the separated global and local motion patterns.
arXiv Detail & Related papers (2020-07-13T10:15:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.