Towards Active Learning for Action Spotting in Association Football Videos
- URL: http://arxiv.org/abs/2304.04220v1
- Date: Sun, 9 Apr 2023 11:50:41 GMT
- Title: Towards Active Learning for Action Spotting in Association Football Videos
- Authors: Silvio Giancola, Anthony Cioppa, Julia Georgieva, Johsan Billingham,
Andreas Serner, Kerry Peek, Bernard Ghanem, Marc Van Droogenbroeck
- Abstract summary: Analyzing football videos is challenging and requires identifying subtle and diverse spatio-temporal patterns.
Current algorithms face significant challenges when learning from limited annotated data.
We propose an active learning framework that selects the most informative video samples to be annotated next.
- Score: 59.84375958757395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Association football is a complex and dynamic sport, with numerous actions
occurring simultaneously in each game. Analyzing football videos is challenging
and requires identifying subtle and diverse spatio-temporal patterns. Despite
recent advances in computer vision, current algorithms still face significant
challenges when learning from limited annotated data, lowering their
performance in detecting these patterns. In this paper, we propose an active
learning framework that selects the most informative video samples to be
annotated next, thus drastically reducing the annotation effort and
accelerating the training of action spotting models to reach the highest
accuracy at a faster pace. Our approach leverages the notion of uncertainty
sampling to select the most challenging video clips to train on next, hastening
the learning process of the algorithm. We demonstrate that our proposed active
learning framework effectively reduces the required training data for accurate
action spotting in football videos. We achieve similar performances for action
spotting with NetVLAD++ on SoccerNet-v2, using only one-third of the dataset,
indicating significant capabilities for reducing annotation time and improving
data efficiency. We further validate our approach on two new datasets that
focus on temporally localizing actions of headers and passes, proving its
effectiveness across different action semantics in football. We believe our
active learning framework for action spotting would support further
applications of action spotting algorithms and accelerate annotation campaigns
in the sports domain.
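The abstract's key mechanism, uncertainty sampling, ranks unlabeled clips by how unsure the current model is about them and sends the most uncertain ones to annotators first. Below is a minimal Python sketch of such a selection step, assuming a PyTorch classifier and a loader yielding (index, clip) pairs; the helper names and the commented outer loop are illustrative assumptions, not the authors' implementation.

```python
import torch

def entropy(probs, eps=1e-12):
    # Shannon entropy of per-clip class distributions; higher = more uncertain.
    return -(probs * (probs + eps).log()).sum(dim=-1)

def select_most_uncertain(model, unlabeled_loader, budget, device="cpu"):
    """Rank unlabeled clips by predictive entropy and return the indices
    of the `budget` most uncertain ones (the next batch to annotate)."""
    model.eval()
    scores, indices = [], []
    with torch.no_grad():
        for idx, clips in unlabeled_loader:  # assumed to yield (index, tensor) pairs
            probs = torch.softmax(model(clips.to(device)), dim=-1)
            scores.append(entropy(probs).cpu())
            indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    top = scores.argsort(descending=True)[:budget]
    return indices[top].tolist()

# Sketch of the outer active learning loop (hypothetical helpers):
# labeled, unlabeled = initial_split(dataset)
# for round in range(num_rounds):
#     train(model, labeled)
#     picked = select_most_uncertain(model, loader(unlabeled), budget=500)
#     labeled, unlabeled = annotate_and_move(picked, labeled, unlabeled)
```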
Related papers
- Deep learning for action spotting in association football videos [64.10841325879996]
The SoccerNet initiative organizes yearly challenges, during which participants from all around the world compete to achieve state-of-the-art performances.
This paper traces the history of action spotting in sports, from the creation of the task back in 2018, to the role it plays today in research and the sports industry.
arXiv Detail & Related papers (2024-10-02T07:56:15Z)
- Semi-supervised Active Learning for Video Action Detection [8.110693267550346]
We develop a novel semi-supervised active learning approach which utilizes both labeled and unlabeled data.
We evaluate the proposed approach on three benchmark datasets: UCF-101-24, JHMDB-21, and Youtube-VOS.
arXiv Detail & Related papers (2023-12-12T11:13:17Z)
- Spotting Temporally Precise, Fine-Grained Events in Video [23.731838969934206]
We introduce the task of spotting temporally precise, fine-grained events in video.
Models must reason globally about the full temporal extent of actions and locally to identify subtle frame-to-frame appearance and motion differences.
We propose E2E-Spot, a compact, end-to-end model that performs well on the precise spotting task and can be trained quickly on a single GPU.
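As an illustration of the dense per-frame spotting setting that E2E-Spot addresses, here is a minimal PyTorch sketch; the linear stand-in "backbone", the layer sizes, and the class count are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PerFrameSpotter(nn.Module):
    """Dense per-frame event classifier: a per-frame feature projection
    followed by a bidirectional GRU for temporal context. All sizes are
    illustrative, not those of E2E-Spot."""
    def __init__(self, feat_dim=512, hidden=256, num_classes=17):
        super().__init__()
        self.backbone = nn.Linear(feat_dim, hidden)  # stand-in for a 2D CNN
        self.temporal = nn.GRU(hidden, hidden, batch_first=True,
                               bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes + 1)  # +1 for background

    def forward(self, frame_feats):              # (B, T, feat_dim)
        x = torch.relu(self.backbone(frame_feats))
        x, _ = self.temporal(x)
        return self.head(x)                      # (B, T, C+1) per-frame logits
```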
arXiv Detail & Related papers (2022-07-20T22:15:07Z)
- Self-Regulated Learning for Egocentric Video Activity Anticipation [147.9783215348252]
Self-Regulated Learning (SRL) consecutively regulates the intermediate representations to emphasize the novel information in the frame at the current time-stamp.
SRL sharply outperforms the existing state of the art in most cases on two egocentric video datasets and two third-person video datasets.
arXiv Detail & Related papers (2021-11-23T03:29:18Z)
- RSPNet: Relative Speed Perception for Unsupervised Video Representation Learning [100.76672109782815]
We study unsupervised video representation learning that seeks to learn both motion and appearance features from unlabeled video only.
It is difficult to construct a suitable self-supervised task that models both motion and appearance features well.
We propose a new way to perceive the playback speed and exploit the relative speed between two video clips as labels.
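A rough sketch of how relative-speed labels for such a pretext task could be generated, assuming pre-decoded frame tensors from a sufficiently long video; the speed set and sampling logic are illustrative assumptions, not the RSPNet implementation.

```python
import random
import torch

def sample_speed_pair(video, clip_len=16):
    """Sample two clips from the same video at different playback speeds
    and return a relative-speed label. `video` is a (T, C, H, W) tensor,
    assumed long enough for the fastest speed (illustrative only)."""
    s1, s2 = random.sample([1, 2, 4], 2)          # distinct playback speeds
    def clip_at_speed(speed):
        start = random.randint(0, video.shape[0] - clip_len * speed)
        return video[start : start + clip_len * speed : speed]
    label = torch.tensor(1.0 if s1 > s2 else 0.0)  # "is clip 1 the faster one?"
    return clip_at_speed(s1), clip_at_speed(s2), label
```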
arXiv Detail & Related papers (2020-10-27T16:42:50Z)
- Hybrid Dynamic-static Context-aware Attention Network for Action Assessment in Long Videos [96.45804577283563]
We present a novel hybrid dynAmic-static Context-aware attenTION NETwork (ACTION-NET) for action assessment in long videos.
We learn not only the video's dynamic information but also focus on the static postures of the detected athletes in specific frames.
We combine the features of the two streams to regress the final video score, supervised by ground-truth scores given by experts.
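A schematic sketch of the two-stream fusion and score regression described above, with made-up feature dimensions; this is not the ACTION-NET architecture, only the general pattern of fusing stream features and regressing against expert scores.

```python
import torch
import torch.nn as nn

class TwoStreamScoreRegressor(nn.Module):
    """Fuse dynamic (clip-level) and static (posture/frame-level) features
    and regress a quality score; feature dimensions are hypothetical."""
    def __init__(self, dyn_dim=1024, static_dim=512):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(dyn_dim + static_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, dyn_feat, static_feat):
        fused = torch.cat([dyn_feat, static_feat], dim=-1)
        return self.regressor(fused).squeeze(-1)  # predicted video score

# Training would minimize e.g. nn.MSELoss() against expert-given scores.
```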
arXiv Detail & Related papers (2020-08-13T15:51:42Z)
- Event detection in coarsely annotated sports videos via parallel multi receptive field 1D convolutions [14.30009544149561]
In problems such as sports video analytics, it is difficult to obtain accurate frame-level annotations and exact event durations.
We propose the task of event detection in coarsely annotated videos.
We introduce a multi-tower temporal convolutional network architecture for the proposed task.
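A minimal sketch of parallel 1D convolutions with different receptive fields over a temporal feature sequence; the kernel sizes and channel widths are assumptions, not the paper's multi-tower configuration.

```python
import torch
import torch.nn as nn

class MultiReceptiveField1D(nn.Module):
    """Parallel 1D convolution towers with different kernel sizes, applied
    to the same temporal features and concatenated along channels.
    Kernel sizes and widths are illustrative."""
    def __init__(self, in_ch=512, out_ch=128, kernel_sizes=(3, 7, 15, 31)):
        super().__init__()
        self.towers = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                         # x: (B, C, T)
        # Odd kernels with padding=k//2 preserve the temporal length T.
        return torch.cat([torch.relu(t(x)) for t in self.towers], dim=1)
```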
arXiv Detail & Related papers (2020-04-13T19:51:25Z)
- ZSTAD: Zero-Shot Temporal Activity Detection [107.63759089583382]
We propose a novel task setting called zero-shot temporal activity detection (ZSTAD), where activities that have never been seen in training can still be detected.
We design an end-to-end deep network based on R-C3D as the architecture for this solution.
Experiments on both the THUMOS14 and the Charades datasets show promising performance in terms of detecting unseen activities.
arXiv Detail & Related papers (2020-03-12T02:40:36Z)