Personalized Motion Guidance Framework for Athlete-Centric Coaching
- URL: http://arxiv.org/abs/2510.10496v1
- Date: Sun, 12 Oct 2025 08:21:19 GMT
- Title: Personalized Motion Guidance Framework for Athlete-Centric Coaching
- Authors: Ryota Takamido, Chiharu Suzuki, Hiroki Nakamoto,
- Abstract summary: This study developed a Personalized Motion Guidance Framework (PMGF) to enhance athletic performance. PMGF generates individualized motion-refinement guides using generative artificial intelligence techniques.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A critical challenge in contemporary sports science lies in bridging the gap between group-level insights derived from controlled hypothesis-driven experiments and the real-world need for personalized coaching tailored to individual athletes' unique movement patterns. This study developed a Personalized Motion Guidance Framework (PMGF) to enhance athletic performance by generating individualized motion-refinement guides using generative artificial intelligence techniques. PMGF leverages a variational autoencoder to encode motion sequences into athlete-specific latent representations, which can then be directly manipulated to generate meaningful guidance motions. Two manipulation strategies were explored: (1) smooth interpolation between the learner's motion and a target (e.g., expert) motion to facilitate observational learning, and (2) shifting the motion pattern in an optimal direction in the latent space using a local optimization technique. The results of the validation experiment with data from 51 baseball pitchers revealed that (1) PMGF successfully generated smooth transitions in motion patterns between individuals across all 1,275 pitcher pairs, and (2) the features significantly altered through PMGF manipulations reflected known performance-enhancing characteristics, such as increased stride length and knee extension associated with higher ball velocity, indicating that PMGF induces biomechanically plausible improvements. We propose a future extension called general-PMGF to enhance the applicability of this framework. This extension incorporates bodily, environmental, and task constraints into the generation process, aiming to provide more realistic and versatile guidance across diverse sports contexts.
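The two manipulation strategies described in the abstract can be sketched in code. This is a minimal illustrative sketch only: the `encode`/`decode` functions below are hypothetical placeholders standing in for the trained variational autoencoder, and `score_fn` stands in for a performance predictor (e.g., of ball velocity); none of these names come from the paper.

```python
import numpy as np

def encode(motion: np.ndarray) -> np.ndarray:
    # Placeholder: a real model maps a motion sequence (frames x features)
    # to its athlete-specific latent code.
    return motion.mean(axis=0)

def decode(z: np.ndarray, length: int) -> np.ndarray:
    # Placeholder: a real model reconstructs a motion sequence from z.
    return np.tile(z, (length, 1))

def interpolate_guides(learner: np.ndarray, expert: np.ndarray, steps: int = 5):
    """Strategy 1: smooth latent-space interpolation between a learner's
    motion and a target (expert) motion, yielding intermediate guides."""
    z_l, z_e = encode(learner), encode(expert)
    guides = []
    for alpha in np.linspace(0.0, 1.0, steps):
        z = (1.0 - alpha) * z_l + alpha * z_e  # linear blend in latent space
        guides.append(decode(z, length=learner.shape[0]))
    return guides

def shift_toward_better(motion, score_fn, step=0.1, n_samples=16, seed=None):
    """Strategy 2: local, gradient-free search that nudges the latent code
    in a direction that improves a predicted performance score."""
    rng = np.random.default_rng(seed)
    z = encode(motion)
    best_z, best_s = z, score_fn(z)
    for _ in range(n_samples):
        cand = z + step * rng.standard_normal(z.shape)
        s = score_fn(cand)
        if s > best_s:  # keep the candidate only if it scores higher
            best_z, best_s = cand, s
    return decode(best_z, length=motion.shape[0])
```

Interpolation guarantees the endpoints match the learner's and expert's latent codes, so the intermediate guides trace a smooth progression between the two movement patterns.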
Related papers
- A Machine Learning-Based Multimodal Framework for Wearable Sensor-Based Archery Action Recognition and Stress Estimation [21.9818193435855]
Motion analysis systems are often expensive and intrusive, limiting their use in natural training environments. We propose a machine learning-based framework that integrates wearable sensor data for simultaneous action recognition and stress estimation.
arXiv Detail & Related papers (2025-11-18T02:16:33Z) - Learning golf swing signatures from a single wrist-worn inertial sensor [0.0]
We build a data-driven framework for personalized golf swing analysis from a single wrist-worn sensor. We learn a compositional, discrete vocabulary of motion primitives that facilitates the detection and visualization of technical flaws. Our system accurately estimates full-body kinematics and swing events from wrist data, delivering lab-grade motion analysis on-course.
arXiv Detail & Related papers (2025-06-20T22:57:59Z) - GENMO: A GENeralist Model for Human MOtion [64.16188966024542]
We present GENMO, a unified Generalist Model for Human Motion that bridges motion estimation and generation in a single framework. Our key insight is to reformulate motion estimation as constrained motion generation, where the output motion must precisely satisfy observed conditioning signals. Our novel architecture handles variable-length motions and mixed multimodal conditions (text, audio, video) at different time intervals, offering flexible control.
arXiv Detail & Related papers (2025-05-02T17:59:55Z) - Spatial-Temporal Graph Diffusion Policy with Kinematic Modeling for Bimanual Robotic Manipulation [88.83749146867665]
Existing approaches learn a policy to predict a distant next-best end-effector pose. They then compute the corresponding joint rotation angles for motion using inverse kinematics. We propose Kinematics enhanced Spatial-TemporAl gRaph diffuser.
arXiv Detail & Related papers (2025-03-13T17:48:35Z) - Biomechanics-Guided Residual Approach to Generalizable Human Motion Generation and Estimation [21.750804738752105]
We propose BioVAE, a biomechanics-aware framework with three core innovations. We show that BioVAE achieves state-of-the-art performance on multiple benchmarks.
arXiv Detail & Related papers (2025-03-08T10:22:36Z) - MotionGPT-2: A General-Purpose Motion-Language Model for Motion Generation and Understanding [76.30210465222218]
MotionGPT-2 is a unified Large Motion-Language Model (LMLM).
It supports multimodal control conditions through pre-trained Large Language Models (LLMs)
It is highly adaptable to the challenging 3D holistic motion generation task.
arXiv Detail & Related papers (2024-10-29T05:25:34Z) - Counterfactual Explanation-Based Badminton Motion Guidance Generation Using Wearable Sensors [7.439909114662477]
This study proposes a framework for enhancing the stroke quality of badminton players by generating personalized motion guides.
These guides are based on counterfactual algorithms and aim to reduce the performance gap between novice and expert players.
Our approach provides joint-level guidance through visualizable data to assist players in improving their movements without requiring expert knowledge.
arXiv Detail & Related papers (2024-05-20T05:48:20Z) - Spectral Motion Alignment for Video Motion Transfer using Diffusion Models [54.32923808964701]
Spectral Motion Alignment (SMA) is a framework that refines and aligns motion vectors using Fourier and wavelet transforms. SMA learns motion patterns by incorporating frequency-domain regularization, facilitating the learning of whole-frame global motion dynamics. Extensive experiments demonstrate SMA's efficacy in improving motion transfer while maintaining computational efficiency and compatibility across various video customization frameworks.
arXiv Detail & Related papers (2024-03-22T14:47:18Z) - PhysFormer++: Facial Video-based Physiological Measurement with SlowFast Temporal Difference Transformer [76.40106756572644]
Recent deep learning approaches focus on mining subtle clues using convolutional neural networks with limited temporal receptive fields.
In this paper, we propose two end-to-end video transformers, PhysFormer and PhysFormer++, to adaptively aggregate both local and global features for rPPG representation enhancement.
Comprehensive experiments are performed on four benchmark datasets to show our superior performance on both intra-dataset and cross-dataset testing.
arXiv Detail & Related papers (2023-02-07T15:56:03Z) - AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.