Wide Open Gazes: Quantifying Visual Exploratory Behavior in Soccer with Pose Enhanced Positional Data
- URL: http://arxiv.org/abs/2602.18519v1
- Date: Thu, 19 Feb 2026 20:17:23 GMT
- Title: Wide Open Gazes: Quantifying Visual Exploratory Behavior in Soccer with Pose Enhanced Positional Data
- Authors: Joris Bekkers
- Abstract summary: Traditional approaches to measuring visual exploratory behavior in soccer rely on counting visual exploratory actions (VEAs) based on rapid head movements exceeding 125°/s. This research introduces a formulaic continuous vision layer to quantify players' visual perception from pose-enhanced tracking. We demonstrate that aggregated visual metrics are predictive of controlled pitch value gained at the end of dribbling actions, using 32 games of synchronized pose-enhanced tracking data and on-ball event data from the 2024 Copa America.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional approaches to measuring visual exploratory behavior in soccer rely on counting visual exploratory actions (VEAs) based on rapid head movements exceeding 125°/s, but these methods suffer from player position bias (i.e., a focus on central midfielders), annotation challenges, and binary measurement constraints (i.e., a player is scanning, or not); they lack the power to predict relevant short-term in-game future success and are incompatible with fundamental soccer analytics models such as pitch control. This research introduces a novel formulaic continuous stochastic vision layer to quantify players' visual perception from pose-enhanced spatiotemporal tracking. Our probabilistic field-of-view and occlusion models incorporate head and shoulder rotation angles to create speed-dependent vision maps for individual players in a two-dimensional top-down plane. We combine these vision maps with pitch control and pitch value surfaces to analyze the awaiting phase (when a player is awaiting the arrival of a pass from a teammate) and the subsequent on-ball phase. We demonstrate that aggregated visual metrics - such as the percentage of defended area observed while awaiting a pass - are predictive of controlled pitch value gained at the end of dribbling actions, using 32 games of synchronized pose-enhanced tracking data and on-ball event data from the 2024 Copa America. This methodology works regardless of player position, eliminates manual annotation requirements, and provides continuous measurements that seamlessly integrate into existing soccer analytics frameworks. To further support integration with existing soccer analytics frameworks, we open-source the tools required to make these calculations.
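The abstract describes a probabilistic field-of-view model that blends head and shoulder rotation into a gaze direction and narrows the effective field of view with player speed. The paper open-sources its actual tools; the sketch below is only a minimal illustration of that general idea. The function names, the 0.7/0.3 head-shoulder weighting, the 180°→60° field-of-view range, the 8 m/s speed cap, and the cosine falloff are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def gaze_direction(head_deg, shoulder_deg, w_head=0.7):
    """Blend head and shoulder orientation into one gaze angle.

    The 0.7/0.3 weighting is an assumption for illustration only.
    Averaging is done on the unit circle to handle wrap-around at +/-180 deg.
    """
    angles = np.radians([head_deg, shoulder_deg])
    weights = np.array([w_head, 1.0 - w_head])
    x = np.sum(weights * np.cos(angles))
    y = np.sum(weights * np.sin(angles))
    return np.degrees(np.arctan2(y, x))

def vision_map(player_xy, gaze_deg, speed_ms, grid_x, grid_y,
               fov_slow_deg=180.0, fov_fast_deg=60.0, speed_cap_ms=8.0):
    """Toy speed-dependent field-of-view probability surface (no occlusion).

    Faster players are assumed to have a narrower effective field of view;
    visibility falls off as a cosine from the gaze direction to the cone edge.
    """
    frac = min(speed_ms, speed_cap_ms) / speed_cap_ms
    fov = fov_slow_deg - (fov_slow_deg - fov_fast_deg) * frac

    xx, yy = np.meshgrid(grid_x, grid_y)
    bearing = np.degrees(np.arctan2(yy - player_xy[1], xx - player_xy[0]))
    # Smallest signed angular difference between gaze and each grid cell.
    diff = np.abs((bearing - gaze_deg + 180.0) % 360.0 - 180.0)
    half = fov / 2.0
    # Cosine falloff inside the half-FOV cone, zero probability outside it.
    return np.where(diff <= half, np.cos(np.radians(diff * 90.0 / half)), 0.0)
```

For example, a stationary player gazing along the +x axis sees a cell at a 45° bearing, while the same player sprinting at the speed cap (narrowed cone) does not. The paper's full model additionally applies occlusion by other players and combines the resulting surface with pitch control and pitch value.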
Related papers
- SoccerMaster: A Vision Foundation Model for Soccer Understanding [50.88251190999469]
Soccer understanding has recently garnered growing research interest due to its domain-specific complexity and unique challenges. This work aims to propose a unified model to handle diverse soccer visual understanding tasks, ranging from fine-grained perception to semantic reasoning. We present SoccerMaster, the first soccer-specific vision foundation model that unifies diverse understanding tasks within a single framework.
arXiv Detail & Related papers (2025-12-11T18:03:30Z) - SoccerNet 2025 Challenges Results [205.71032061537747]
SoccerNet 2025 Challenges mark the fifth annual edition of the SoccerNet open effort, dedicated to advancing computer vision research in football video understanding. This year's challenges span four vision-based tasks: Team Ball Action Spotting, Monocular Depth Estimation, Multi-View Foul Recognition, and Game State Reconstruction. The report presents the results of each challenge, highlights the top-performing solutions, and provides insights into the progress made by the community.
arXiv Detail & Related papers (2025-08-26T16:37:07Z) - Action Anticipation from SoccerNet Football Video Broadcasts [84.87912817065506]
We introduce the task of action anticipation for football broadcast videos. We predict future actions in unobserved future frames within a five- or ten-second anticipation window. Our work will enable applications in automated broadcasting, tactical analysis, and player decision-making.
arXiv Detail & Related papers (2025-04-16T12:24:33Z) - Game State and Spatio-temporal Action Detection in Soccer using Graph Neural Networks and 3D Convolutional Networks [1.4249472316161877]
Soccer analytics rely on two data sources: the player positions on the pitch and the sequences of events they perform. We propose a spatio-temporal action detection approach that combines visual and game state analytics via Graph Neural Networks trained end-to-end with state-of-the-art 3D CNNs.
arXiv Detail & Related papers (2025-02-21T13:41:38Z) - Passing Heatmap Prediction Based on Transformer Model and Tracking Data [0.0]
This research presents a novel deep-learning network architecture capable of predicting the potential end location of passes.
After analysing more than 28,000 pass events, robust predictions are achieved with more than 0.7 Top-1 accuracy.
Based on these predictions, a better understanding of pitch control and pass options can be reached to measure players' off-ball movement contribution to defensive performance.
arXiv Detail & Related papers (2023-09-04T11:14:22Z) - Estimation of control area in badminton doubles with pose information from top and back view drone videos [11.679451300997016]
We present the first annotated drone dataset from top and back views in badminton doubles.
We propose a framework to estimate the control area probability map, which can be used to evaluate teamwork performance.
arXiv Detail & Related papers (2023-05-07T11:18:39Z) - A Graph-Based Method for Soccer Action Spotting Using Unsupervised Player Classification [75.93186954061943]
Action spotting involves understanding the dynamics of the game, the complexity of events, and the variation of video sequences.
In this work, we focus on the former by (a) identifying and representing the players, referees, and goalkeepers as nodes in a graph, and by (b) modeling their temporal interactions as sequences of graphs.
For the player identification task, our method obtains an overall performance of 57.83% average-mAP by combining it with other modalities.
arXiv Detail & Related papers (2022-11-22T15:23:53Z) - SoccerNet-Tracking: Multiple Object Tracking Dataset and Benchmark in Soccer Videos [62.686484228479095]
We propose a novel dataset for multiple object tracking composed of 200 sequences of 30s each.
The dataset is fully annotated with bounding boxes and tracklet IDs.
Our analysis shows that multiple player, referee and ball tracking in soccer videos is far from being solved.
arXiv Detail & Related papers (2022-04-14T12:22:12Z) - Learning Dynamics via Graph Neural Networks for Human Pose Estimation and Tracking [98.91894395941766]
We propose a novel online approach to learning the pose dynamics, which is independent of pose detections in the current frame.
Specifically, we derive this prediction of dynamics through a graph neural network (GNN) that explicitly accounts for both spatio-temporal and visual information.
Experiments on PoseTrack 2017 and PoseTrack 2018 datasets demonstrate that the proposed method achieves results superior to the state of the art on both human pose estimation and tracking tasks.
arXiv Detail & Related papers (2021-06-07T16:36:50Z) - Using Player's Body-Orientation to Model Pass Feasibility in Soccer [7.205450793637325]
Given a monocular video of a soccer match, this paper presents a computational model to estimate the most feasible pass at any given time.
The method leverages offensive players' orientation (plus their location) and the opponents' spatial configuration to compute the feasibility of pass events among players of the same team.
Results show that, by including orientation as a feasibility measure, a robust computational model can be built, reaching more than 0.7 Top-3 accuracy.
arXiv Detail & Related papers (2020-04-15T17:09:51Z)