Counterfactual Explanation-Based Badminton Motion Guidance Generation Using Wearable Sensors
- URL: http://arxiv.org/abs/2405.11802v1
- Date: Mon, 20 May 2024 05:48:20 GMT
- Title: Counterfactual Explanation-Based Badminton Motion Guidance Generation Using Wearable Sensors
- Authors: Minwoo Seong, Gwangbin Kim, Yumin Kang, Junhyuk Jang, Joseph DelPreto, SeungJun Kim
- Abstract summary: This study proposes a framework for enhancing the stroke quality of badminton players by generating personalized motion guides.
These guides are based on counterfactual algorithms and aim to reduce the performance gap between novice and expert players.
Our approach provides joint-level guidance through visualizable data to assist players in improving their movements without requiring expert knowledge.
- Score: 7.439909114662477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes a framework for enhancing the stroke quality of badminton players by generating personalized motion guides, utilizing a multimodal wearable dataset. These guides are based on counterfactual algorithms and aim to reduce the performance gap between novice and expert players. Our approach provides joint-level guidance through visualizable data to assist players in improving their movements without requiring expert knowledge. The method was evaluated against a traditional algorithm using metrics to assess validity, proximity, and plausibility, including arithmetic measures and motion-specific evaluation metrics. Our evaluation demonstrates that the proposed framework can generate motions that maintain the essence of original movements while enhancing stroke quality, providing closer guidance than direct expert motion replication. The results highlight the potential of our approach for creating personalized sports motion guides by generating counterfactual motion guidance for arbitrary input motion samples of badminton strokes.
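The abstract describes generating counterfactual motion guides that flip a stroke's predicted skill class while staying close to the original movement (the validity and proximity criteria mentioned in the evaluation). The paper's actual algorithm, features, and classifier are not given here, so the following is only a minimal sketch of the generic idea on a toy linear "novice vs. expert" classifier over hypothetical sensor features: gradient search for a nearby input that the classifier labels "expert", with an explicit proximity penalty.

```python
import numpy as np

# Toy linear skill classifier over 3 hypothetical sensor features:
# sigmoid(w @ x + b) > 0.5 is read as "expert". The weights, the
# feature meaning, and the search procedure are all illustrative
# assumptions, not the paper's method.
w = np.array([1.0, -2.0, 0.5])
b = -0.2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expert_prob(x):
    """Probability the classifier assigns to the 'expert' class."""
    return sigmoid(w @ x + b)

def counterfactual(x, lam=0.1, lr=0.05, steps=500):
    """Gradient descent on  -log p_expert(x') + lam * ||x' - x||^2 :
    push x' toward the expert class (validity) while penalizing
    distance from the original motion (proximity)."""
    xp = x.copy()
    for _ in range(steps):
        p = expert_prob(xp)
        # Gradient of the objective w.r.t. xp (logistic model).
        grad = -(1.0 - p) * w + 2.0 * lam * (xp - x)
        xp -= lr * grad
    return xp

x_novice = np.array([-1.0, 1.0, 0.0])  # classified as novice
x_cf = counterfactual(x_novice)        # nearby point classified as expert
print(expert_prob(x_novice), expert_prob(x_cf))
```

The difference `x_cf - x_novice` is the kind of signal the paper turns into joint-level guidance: a direction of change, per feature, that the original motion should move toward. The trade-off parameter `lam` controls how far the guide is allowed to drift from the player's own movement.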
Related papers
- MotionRL: Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning [99.09906827676748]
We introduce MotionRL, the first approach to utilize Multi-Reward Reinforcement Learning (RL) for optimizing text-to-motion generation tasks.
Our novel approach uses reinforcement learning to fine-tune the motion generator based on human preferences, guided by prior knowledge from a human perception model.
In addition, MotionRL introduces a novel multi-objective optimization strategy to approximate optimality between text adherence, motion quality, and human preferences.
arXiv Detail & Related papers (2024-10-09T03:27:14Z) - A review on vision-based motion estimation [18.979649159405962]
Compared to contact sensors-based motion measurement, vision-based motion measurement has advantages of low cost and high efficiency.
This paper provides a review on existing motion measurement methods.
To address this issue, we developed a Gaussian kernel-based motion measurement method.
arXiv Detail & Related papers (2024-07-19T17:28:49Z) - Aligning Human Motion Generation with Human Perceptions [51.831338643012444]
We propose a data-driven approach to bridge the gap by introducing a large-scale human perceptual evaluation dataset, MotionPercept, and a human motion critic model, MotionCritic.
Our critic model offers a more accurate metric for assessing motion quality and could be readily integrated into the motion generation pipeline.
arXiv Detail & Related papers (2024-07-02T14:01:59Z) - Learning Generalizable Human Motion Generator with Reinforcement Learning [95.62084727984808]
Text-driven human motion generation is one of the vital tasks in computer-aided content creation.
Existing methods often overfit specific motion expressions in the training data, hindering their ability to generalize.
We present InstructMotion, which incorporates the trial-and-error paradigm of reinforcement learning for generalizable human motion generation.
arXiv Detail & Related papers (2024-05-24T13:29:12Z) - AI coach for badminton [0.0]
This study dissects video footage of badminton matches to extract insights into player kinetics and biomechanics.
The research aims to derive predictive models that can suggest improvements in stance, technique, and muscle orientation.
These recommendations are designed to mitigate erroneous techniques, reduce the risk of joint fatigue, and enhance overall performance.
arXiv Detail & Related papers (2024-03-13T20:51:21Z) - A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z) - Contact-Aware Retargeting of Skinned Motion [49.71236739408685]
This paper introduces a motion estimation method that preserves self-contacts and prevents interpenetration.
The method identifies self-contacts and ground contacts in the input motion, and optimizes the motion to apply to the output skeleton.
In experiments, our results quantitatively outperform previous methods and we conduct a user study where our retargeted motions are rated as higher-quality than those produced by recent works.
arXiv Detail & Related papers (2021-09-15T17:05:02Z) - Motion Pyramid Networks for Accurate and Efficient Cardiac Motion Estimation [51.72616167073565]
We propose Motion Pyramid Networks, a novel deep learning-based approach for accurate and efficient cardiac motion estimation.
We predict and fuse a pyramid of motion fields from multiple scales of feature representations to generate a more refined motion field.
We then use a novel cyclic teacher-student training strategy to make the inference end-to-end and further improve the tracking performance.
arXiv Detail & Related papers (2020-06-28T21:03:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.