Eye Movement Feature Classification for Soccer Goalkeeper Expertise
Identification in Virtual Reality
- URL: http://arxiv.org/abs/2009.11676v2
- Date: Wed, 6 Jan 2021 17:22:41 GMT
- Title: Eye Movement Feature Classification for Soccer Goalkeeper Expertise
Identification in Virtual Reality
- Authors: Benedikt Hosp, Florian Schultz, Oliver Höner, Enkelejda Kasneci
- Abstract summary: This study shows promising results for the objective classification of goalkeepers' expertise based on their gaze behaviour.
It provides valuable insight to inform the design of training systems that enhance the perceptual skills of athletes.
- Score: 8.356765961526955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The latest research in expertise assessment of soccer players has affirmed
the importance of perceptual skills (especially for decision making) by
focusing either on high experimental control or on a realistic presentation. To
assess the perceptual skills of athletes in an optimized manner, we captured
omnidirectional in-field scenes and showed these to 12 expert, 10 intermediate
and 13 novice soccer goalkeepers on virtual reality glasses. All scenes were
shown from the same natural goalkeeper perspective and ended after the return
pass to the goalkeeper. Based on their gaze behaviour, we classified their
expertise with common machine learning techniques. This pilot study shows
promising results for the objective classification of goalkeepers' expertise
based on their gaze behaviour and provides valuable insight to inform the
design of training systems that enhance the perceptual skills of athletes.
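The abstract does not specify which gaze features or classifiers were used. As a minimal sketch only, the snippet below feeds hypothetical per-trial gaze statistics into a standard scikit-learn pipeline to separate expert, intermediate, and novice groups; the feature names, the SVM choice, and the synthetic data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of gaze-based expertise classification (illustrative only).
# Assumptions not taken from the paper: the feature set (fixation/saccade
# statistics per trial), the SVM classifier, and the cross-validation setup.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-trial gaze features: mean fixation duration, fixation count,
# mean saccade amplitude, saccade rate, gaze entropy (synthetic placeholders).
rng = np.random.default_rng(0)
X = rng.normal(size=(350, 5))            # 350 trials x 5 gaze features
y = rng.integers(0, 3, size=350)         # 0 = novice, 1 = intermediate, 2 = expert

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```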
Related papers
- Scope Meets Screen: Lessons Learned in Designing Composite Visualizations for Marksmanship Training Across Skill Levels [3.345437353879255]
We present a shooting visualization system and evaluate its perceived effectiveness for both novice and expert shooters. The insights gained from this design study point to the broader value of integrating first-person video with visual analytics for coaching.
arXiv Detail & Related papers (2025-07-01T00:16:41Z) - ExpertAF: Expert Actionable Feedback from Video [81.46431188306397]
We introduce a novel method to generate actionable feedback from video of a person doing a physical activity.
Our method takes a video demonstration and its accompanying 3D body pose and generates expert commentary.
Our method is able to reason across multi-modal input combinations to output full-spectrum, actionable coaching.
arXiv Detail & Related papers (2024-08-01T16:13:07Z) - Deep Understanding of Soccer Match Videos [20.783415560412003]
Soccer is one of the most popular sports worldwide, with live broadcasts frequently available for major matches.
Our system can detect key objects such as soccer balls, players and referees.
It also tracks the movements of players and the ball, recognizes player numbers, classifies scenes, and identifies highlights such as goal kicks.
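As a rough illustration of the detection step only, the sketch below runs an off-the-shelf COCO-pretrained detector on a single frame to pick out people and the ball; the paper's actual models, as well as the tracking, number recognition, scene classification, and highlight detection, are not reproduced, and `frame.jpg` is a hypothetical input.

```python
# Illustrative detection-only sketch using a generic pretrained detector,
# not the system described in the paper.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

COCO_PERSON, COCO_SPORTS_BALL = 1, 37    # COCO category ids

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = to_tensor(Image.open("frame.jpg").convert("RGB"))   # hypothetical frame
with torch.no_grad():
    out = model([frame])[0]

wanted = torch.tensor([COCO_PERSON, COCO_SPORTS_BALL])
keep = (out["scores"] > 0.7) & torch.isin(out["labels"], wanted)
for box, label in zip(out["boxes"][keep], out["labels"][keep]):
    kind = "player/referee" if label == COCO_PERSON else "ball"
    print(kind, [round(v) for v in box.tolist()])
```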
arXiv Detail & Related papers (2024-07-11T05:54:13Z) - An efficient machine learning approach for extracting eSports players' distinguishing features and classifying their skill levels using symbolic transfer entropy and consensus nested cross validation [0.0]
Sensor data combined with machine learning have already proved effective in classifying eSports players.
We propose an efficient method to find these features and then use them to classify players' skill levels.
The classification results demonstrate a significant improvement by achieving 90.1% accuracy.
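The summary mentions consensus nested cross-validation; the sketch below shows plain nested cross-validation with scikit-learn as a point of reference. The symbolic-transfer-entropy features and the consensus variant from the paper are not reproduced, and the feature matrix is a synthetic placeholder.

```python
# Plain nested cross-validation sketch (the paper's consensus variant and
# symbolic-transfer-entropy features are not implemented here).
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 20))           # placeholder sensor-derived features
y = rng.integers(0, 2, size=120)         # e.g. 0 = novice player, 1 = pro player

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

# Inner loop tunes hyperparameters; outer loop estimates skill-level accuracy.
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=inner)
scores = cross_val_score(search, X, y, cv=outer)
print(f"Nested-CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```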
arXiv Detail & Related papers (2024-05-08T15:22:12Z) - What Makes Pre-Trained Visual Representations Successful for Robust
Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z) - What do we learn from a large-scale study of pre-trained visual representations in sim and real environments? [48.75469525877328]
We present a large empirical investigation on the use of pre-trained visual representations (PVRs) for training downstream policies that execute real-world tasks.
We arrive at three insights, including: 1) the performance trends of PVRs in simulation are generally indicative of their trends in the real world, and 2) the use of PVRs enables a first-of-its-kind result with indoor ImageNav.
arXiv Detail & Related papers (2023-10-03T17:27:10Z) - What drives a goalkeeper's decisions? [0.0]
We develop a model to predict which movements would be most effective for shot-stopping.
We compare it to the real-life behavior of goalkeepers.
We develop a tool to analyse goalkeepers' behavior in real-life soccer games.
arXiv Detail & Related papers (2022-11-01T10:37:44Z) - Is it worth the effort? Understanding and contextualizing physical
metrics in soccer [1.2205797997133396]
This framework gives a deep insight into the link between physical and technical-tactical aspects of soccer.
It allows associating physical performance with value generation thanks to a top-down approach.
arXiv Detail & Related papers (2022-04-05T16:14:40Z) - Vision-Based Manipulators Need to Also See from Their Hands [58.398637422321976]
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations.
We find that a hand-centric (eye-in-hand) perspective affords reduced observability, but it consistently improves training efficiency and out-of-distribution generalization.
arXiv Detail & Related papers (2022-03-15T18:46:18Z) - Masked Visual Pre-training for Motor Control [118.18189211080225]
Self-supervised visual pre-training from real-world images is effective for learning motor control tasks from pixels.
We freeze the visual encoder and train neural network controllers on top with reinforcement learning.
This is the first self-supervised model to exploit real-world images at scale for motor control.
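As a loose sketch of the frozen-encoder idea (a pretrained visual backbone kept fixed while only a small controller is trained), the snippet below uses a generic torchvision ResNet rather than the paper's masked autoencoder, and the controller head is an illustrative MLP; the reinforcement-learning loop itself is omitted.

```python
# Frozen visual encoder + trainable controller head (illustrative only;
# the paper's masked autoencoder and RL training loop are not shown).
import torch
import torch.nn as nn
import torchvision

encoder = torchvision.models.resnet18(weights="DEFAULT")
encoder.fc = nn.Identity()               # expose 512-d features
for p in encoder.parameters():
    p.requires_grad = False              # freeze the visual representation
encoder.eval()

controller = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 7))

obs = torch.rand(1, 3, 224, 224)         # dummy camera observation
with torch.no_grad():
    features = encoder(obs)
action = controller(features)            # only the controller would be trained
print(action.shape)
```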
arXiv Detail & Related papers (2022-03-11T18:58:10Z) - Learning from the Pros: Extracting Professional Goalkeeper Technique
from Broadcast Footage [3.4386226615580107]
We train an unsupervised machine learning model using 3D body pose data extracted from broadcast footage to learn professional goalkeeper technique.
Then, an "expected saves" model is developed, from which we can identify the optimal goalkeeper technique in different match contexts.
arXiv Detail & Related papers (2022-02-22T18:17:30Z) - Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual
Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.