Continuous football player tracking from discrete broadcast data
- URL: http://arxiv.org/abs/2311.14642v1
- Date: Fri, 24 Nov 2023 18:16:28 GMT
- Title: Continuous football player tracking from discrete broadcast data
- Authors: Matthew J. Penn, Christl A. Donnelly, and Samir Bhatt
- Abstract summary: We present a method that can estimate continuous full-pitch tracking data from discrete data made from broadcast footage.
Such data could be collected by clubs or players at a similar cost to event data, which is widely available down to semi-professional level.
- Score: 0.6144680854063939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Player tracking data remains out of reach for many professional football
teams as their video feeds are not sufficiently high quality for computer
vision technologies to be used. To help bridge this gap, we present a method
that can estimate continuous full-pitch tracking data from discrete data made
from broadcast footage. Such data could be collected by clubs or players at a
similar cost to event data, which is widely available down to semi-professional
level. We test our method using open-source tracking data, and include a
version that can be applied to a large set of over 200 games with such discrete
data.
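As a rough illustration of the problem setting (not the authors' method, which the paper describes in full), discrete broadcast observations of a single player can be resampled onto a regular time grid to approximate continuous tracking data. Here linear interpolation with NumPy stands in for the paper's estimation procedure, and all timestamps and pitch coordinates are invented:

```python
import numpy as np

# Hypothetical discrete observations of one player's pitch position,
# e.g. recorded whenever the player appears in broadcast event data
# (times in seconds, coordinates in metres).
t_obs = np.array([0.0, 4.0, 9.0, 15.0])
x_obs = np.array([10.0, 18.0, 30.0, 42.0])
y_obs = np.array([34.0, 36.0, 40.0, 38.0])

# Resample onto a regular 10 Hz grid to approximate a continuous track.
t_grid = np.arange(0.0, 15.0 + 1e-9, 0.1)
x_grid = np.interp(t_grid, t_obs, x_obs)
y_grid = np.interp(t_grid, t_obs, y_obs)
```

In practice a smoother model (e.g. splines or a motion model) would be preferable to piecewise-linear interpolation, since players do not move at piecewise-constant velocity.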
Related papers
- A Computer Vision Framework for Multi-Class Detection and Tracking in Soccer Broadcast Footage [0.0]
This paper examines whether such data can instead be extracted directly from standard broadcast footage using a single-camera computer vision pipeline. The project develops an end-to-end system that combines a YOLO object detector with the ByteTrack tracking algorithm to identify and track players, referees, goalkeepers, and the ball throughout a match. Experimental results show that the pipeline achieves high performance in detecting and tracking players and officials, with strong precision, recall, and mAP50 scores, while ball detection remains the primary challenge.
arXiv Detail & Related papers (2026-02-17T21:44:09Z) - Temporal Unlearnable Examples: Preventing Personal Video Data from Unauthorized Exploitation by Object Tracking [90.81846867441993]
This paper presents the first investigation on preventing personal video data from unauthorized exploitation by deep trackers. We propose a novel generative framework for generating Temporal Unlearnable Examples (TUEs). Our approach achieves state-of-the-art performance in video data-privacy protection, with strong transferability across VOT models, datasets, and temporal matching tasks.
arXiv Detail & Related papers (2025-07-10T07:11:33Z) - Simulating Tracking Data to Advance Sports Analytics Research [4.811183825795439]
We present a method to collect and utilize simulated soccer tracking data from the Google Research Football environment.
We provide processes that extract high-level features and events from the simulated data.
We address the scarcity of publicly available tracking data, providing support for research at the intersection of artificial intelligence and sports analytics.
arXiv Detail & Related papers (2025-03-25T16:18:23Z) - Video Individual Counting for Moving Drones [51.429771128144964]
Video Individual Counting (VIC) has received increasing attention recently due to its importance in intelligent video surveillance.
Previous crowd counting datasets are captured with fixed or rarely moving cameras with relatively sparse individuals.
We propose a density-map-based VIC method built on the MovingDroneCrowd dataset.
arXiv Detail & Related papers (2025-03-12T07:09:33Z) - BlinkTrack: Feature Tracking over 100 FPS via Events and Images [50.98675227695814]
We propose a novel framework, BlinkTrack, which integrates event data with RGB images for high-frequency feature tracking.
Our method extends the traditional Kalman filter into a learning-based framework, utilizing differentiable Kalman filters in both event and image branches.
Experimental results indicate that BlinkTrack significantly outperforms existing event-based methods.
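For context, the learned filters described above generalise the classical Kalman filter. The sketch below is a standard constant-velocity Kalman filter in NumPy, not the paper's differentiable version; the process and measurement noise parameters are illustrative choices, and the input is synthetic:

```python
import numpy as np

def kalman_track_1d(zs, dt=1.0, q=1e-3, r=0.5):
    """Constant-velocity Kalman filter over noisy 1-D position
    measurements zs; returns filtered positions. Illustrative
    baseline only, with fixed (non-learned) noise parameters."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    x = np.array([zs[0], 0.0])              # state: [position, velocity]
    P = np.eye(2)
    out = []
    for z in zs:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
truth = np.linspace(0.0, 10.0, 200)          # feature moving at constant speed
noisy = truth + rng.normal(0.0, 0.5, 200)
smoothed = kalman_track_1d(noisy)
```

A learning-based variant, as in the paper, would replace the fixed F, Q, and R with quantities predicted by a network and backpropagate through the filter recursion.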
arXiv Detail & Related papers (2024-09-26T15:54:18Z) - AIM 2024 Challenge on Video Saliency Prediction: Methods and Results [105.09572982350532]
This paper reviews the Challenge on Video Saliency Prediction at AIM 2024.
The goal of the participants was to develop a method for predicting accurate saliency maps for the provided set of video sequences.
arXiv Detail & Related papers (2024-09-23T08:59:22Z) - Event Detection in Football using Graph Convolutional Networks [0.0]
We show how to model the players and the ball in each frame of the video sequence as a graph.
We present the results for graph convolutional layers and pooling methods that can be used to model the temporal context present around each action.
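A minimal sketch of the frame-as-graph idea, assuming a distance-threshold adjacency and mean-aggregation propagation (the paper's actual graph construction and convolutional layers may differ); all positions are invented:

```python
import numpy as np

# Hypothetical frame: (x, y) pitch positions for three players and the ball.
pos = np.array([
    [10.0, 30.0],   # player 0
    [14.0, 32.0],   # player 1
    [40.0, 20.0],   # player 2 (far from the others)
    [12.0, 28.0],   # ball
])

# Build the frame graph: connect nodes closer than a distance threshold.
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
A = ((d < 10.0) & (d > 0)).astype(float)    # adjacency, no self-loops

# One GCN-style propagation step with self-loops and mean aggregation:
# each node's features become the average over its neighbourhood.
A_hat = A + np.eye(len(pos))
X = pos                                      # use raw coordinates as features
deg = A_hat.sum(axis=1, keepdims=True)
X_new = (A_hat @ X) / deg
```

Stacking such graphs over consecutive frames, with pooling across time, gives the temporal context the paper models around each action.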
arXiv Detail & Related papers (2023-01-24T14:52:54Z) - Large Scale Real-World Multi-Person Tracking [68.27438015329807]
This paper presents a new large-scale multi-person tracking dataset, PersonPath22.
It is over an order of magnitude larger than currently available high-quality multi-object tracking datasets such as MOT17, HiEve, and MOT20.
arXiv Detail & Related papers (2022-11-03T23:03:13Z) - SoccerNet-Tracking: Multiple Object Tracking Dataset and Benchmark in Soccer Videos [62.686484228479095]
We propose a novel dataset for multiple object tracking composed of 200 sequences of 30s each.
The dataset is fully annotated with bounding boxes and tracklet IDs.
Our analysis shows that multiple player, referee and ball tracking in soccer videos is far from being solved.
arXiv Detail & Related papers (2022-04-14T12:22:12Z) - Optical tracking in team sports [0.0]
We provide a basic understanding for quantitative data analysts about the process of creating the input data.
We discuss the preprocessing steps of tracking, the most common challenges in this domain, and the application of tracking data to sports teams.
arXiv Detail & Related papers (2022-04-08T15:51:35Z) - MOTSynth: How Can Synthetic Data Help Pedestrian Detection and Tracking? [36.094861549144426]
Deep learning methods for video pedestrian detection and tracking require large volumes of training data to achieve good performance.
We generate MOTSynth, a large, highly diverse synthetic dataset for object detection and tracking using a rendering game engine.
Our experiments show that MOTSynth can be used as a replacement for real data on tasks such as pedestrian detection, re-identification, segmentation, and tracking.
arXiv Detail & Related papers (2021-08-21T14:25:25Z) - SoccerNet-v2: A Dataset and Benchmarks for Holistic Understanding of Broadcast Soccer Videos [71.72665910128975]
SoccerNet-v2 is a novel large-scale corpus of manual annotations for the SoccerNet video dataset.
We release around 300k annotations within SoccerNet's 500 untrimmed broadcast soccer videos.
We extend current tasks in the realm of soccer to include action spotting and camera shot segmentation with boundary detection.
arXiv Detail & Related papers (2020-11-26T16:10:16Z) - TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object (TAO) dataset consists of 2,907 high-resolution videos, captured in diverse environments, which are half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.