Talking Tennis: Language Feedback from 3D Biomechanical Action Recognition
- URL: http://arxiv.org/abs/2510.03921v1
- Date: Sat, 04 Oct 2025 19:55:30 GMT
- Title: Talking Tennis: Language Feedback from 3D Biomechanical Action Recognition
- Authors: Arushi Dashore, Aryan Anumala, Emily Hui, Olivia Yang,
- Abstract summary: This research project develops a novel framework that extracts key biomechanical features from motion data. These features are analyzed for relationships influencing stroke effectiveness and injury risk, forming the basis for feedback generation. The experimental setup evaluates this framework on classification performance and interpretability, bridging the gap between explainable AI and sports biomechanics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated tennis stroke analysis has advanced significantly with the integration of biomechanical motion cues alongside deep learning techniques, enhancing stroke classification accuracy and player performance evaluation. Despite these advancements, existing systems often fail to connect biomechanical insights with actionable language feedback that is both accessible and meaningful to players and coaches. This research project addresses this gap by developing a novel framework that extracts key biomechanical features (such as joint angles, limb velocities, and kinetic chain patterns) from motion data using Convolutional Neural Network Long Short-Term Memory (CNN-LSTM)-based models. These features are analyzed for relationships influencing stroke effectiveness and injury risk, forming the basis for feedback generation using large language models (LLMs). Leveraging the THETIS dataset and feature extraction techniques, our approach aims to produce feedback that is technically accurate, biomechanically grounded, and actionable for end-users. The experimental setup evaluates this framework on classification performance and interpretability, bridging the gap between explainable AI and sports biomechanics.
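To illustrate the kind of biomechanical features the abstract describes (joint angles and limb velocities from motion data), the sketch below computes both from 3D keypoint trajectories. The keypoint layout, frame rate, and toy coordinates are illustrative assumptions, not details from the paper.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for the elbow angle."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def limb_speed(positions, fps):
    """Per-frame speed (units/s) of one keypoint over a (T, 3) trajectory."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps

# Toy example: a right-angle elbow, and a wrist moving 0.1 units per frame.
shoulder = np.array([0.0, 1.0, 0.0])
elbow = np.array([0.0, 0.0, 0.0])
wrist = np.array([1.0, 0.0, 0.0])
print(joint_angle(shoulder, elbow, wrist))  # 90.0

wrist_track = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0]])
print(limb_speed(wrist_track, fps=30))  # ~[3.0, 3.0]
```

Per-frame feature vectors like these are what a CNN-LSTM pipeline would consume as its time-series input.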
Related papers
- Dynamic Scoring with Enhanced Semantics for Training-Free Human-Object Interaction Detection [51.52749744031413]
Human-Object Interaction (HOI) detection aims to identify humans and objects within images and interpret their interactions. Existing HOI methods rely heavily on large datasets with manual annotations to learn interactions from visual cues. We propose a novel training-free HOI detection framework for Dynamic Scoring with enhanced semantics.
arXiv Detail & Related papers (2025-07-23T12:30:19Z)
- Boosting Automatic Exercise Evaluation Through Musculoskeletal Simulation-Based IMU Data Augmentation [0.0]
We present a novel data augmentation method that generates realistic IMU data using musculoskeletal simulations integrated with systematic modifications of movement trajectories. Our approach ensures biomechanical plausibility and allows for automatic, reliable labeling by combining inverse parameters with a knowledge-based evaluation strategy. Our findings underline the practicality and efficacy of this augmentation method in overcoming common challenges faced by deep learning applications in physiotherapeutic exercise evaluation.
arXiv Detail & Related papers (2025-05-30T09:53:37Z)
- KinTwin: Imitation Learning with Torque and Muscle Driven Biomechanical Models Enables Precise Replication of Able-Bodied and Impaired Movement from Markerless Motion Capture [2.44755919161855]
High-quality movement analysis could greatly benefit movement science and rehabilitation. We show the potential for using imitation learning to enable high-quality movement analysis in clinical practice.
arXiv Detail & Related papers (2025-05-19T17:58:03Z)
- Knowledge-Based Deep Learning for Time-Efficient Inverse Dynamics [5.78355428732981]
We propose a knowledge-based deep learning framework for time-efficient inverse dynamic analysis. The BiGRU neural network is selected as the backbone of our model due to its proficient handling of time-series data. The experimental results have shown that the selected BiGRU architecture outperforms other neural network models when trained using our specifically designed loss function.
arXiv Detail & Related papers (2024-12-06T20:12:52Z)
- MS-MANO: Enabling Hand Pose Tracking with Biomechanical Constraints [50.61346764110482]
We integrate a musculoskeletal system with a learnable parametric hand model, MANO, to create MS-MANO.
This model emulates the dynamics of muscles and tendons to drive the skeletal system, imposing physiologically realistic constraints on the resulting torque trajectories.
We also propose a simulation-in-the-loop pose refinement framework, BioPR, that refines the initial estimated pose through a multi-layer perceptron network.
arXiv Detail & Related papers (2024-04-16T02:18:18Z)
- Leveraging Digital Perceptual Technologies for Remote Perception and Analysis of Human Biomechanical Processes: A Contactless Approach for Workload and Joint Force Assessment [4.96669107440958]
This study presents an innovative computer vision framework designed to analyze human movements in industrial settings.
The framework allows for comprehensive scrutiny of human motion, providing valuable insights into kinematic patterns and kinetic data.
arXiv Detail & Related papers (2024-04-02T02:12:00Z)
- A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics [4.878073267556235]
Muscle force and joint kinematics estimation from surface electromyography (sEMG) are essential for real-time biomechanical analysis.
Recent advances in deep neural networks (DNNs) have shown the potential to improve biomechanical analysis in a fully automated and reproducible manner.
This paper presents a novel physics-informed low-shot learning method for sEMG-based estimation of muscle force and joint kinematics.
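Physics-informed training of this kind typically augments a data-fit loss with a penalty on violations of a known physical relation. The sketch below is a generic illustration, not the paper's actual formulation: the rigid-body relation τ = Iα and the λ weight are assumptions chosen for simplicity.

```python
import numpy as np

def physics_informed_loss(pred_torque, true_torque, pred_accel, inertia, lam=0.1):
    """Data-fit MSE plus a penalty when predicted torque disagrees
    with the rigid-body relation tau = I * alpha."""
    data_term = np.mean((pred_torque - true_torque) ** 2)
    physics_term = np.mean((pred_torque - inertia * pred_accel) ** 2)
    return data_term + lam * physics_term

# Toy check: predictions that satisfy tau = I * alpha incur no physics penalty.
inertia = 2.0
accel = np.array([1.0, 2.0])
consistent = inertia * accel          # [2.0, 4.0]
truth = np.array([2.0, 4.0])
print(physics_informed_loss(consistent, truth, accel, inertia))  # 0.0
```

The physics term acts as a regularizer, which is what makes low-shot training viable when labeled sEMG data are scarce.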
arXiv Detail & Related papers (2023-07-08T23:01:12Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) explores the interaction between humans and robots.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Measuring and modeling the motor system with machine learning [117.44028458220427]
Machine learning promises a revolution in how data for understanding the motor system are collected, measured, and analyzed.
We discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems.
arXiv Detail & Related papers (2021-03-22T12:42:16Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by exploiting temporal cues in videos and inherent correlations across modalities for gesture recognition.
Results show that our approach recovers performance with substantial gains, up to 12.91% in accuracy (ACC) and 20.16% in F1 score, without using any annotations on the real robot.
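The ACC and F1 figures quoted above follow standard definitions; the sketch below computes accuracy and macro-F1 (the unweighted mean of per-class F1 scores) for a toy set of gesture labels. The labels and predictions are illustrative, not data from the paper.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy gesture predictions (labels are illustrative).
y_true = ["grasp", "cut", "grasp", "push", "cut", "push"]
y_pred = ["grasp", "cut", "cut", "push", "cut", "grasp"]
print(accuracy(y_true, y_pred))  # ~0.667
print(macro_f1(y_true, y_pred))  # ~0.656
```

Macro averaging weights every gesture class equally, which matters when class frequencies are imbalanced.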
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.