A Graph-Based Method for Soccer Action Spotting Using Unsupervised
Player Classification
- URL: http://arxiv.org/abs/2211.12334v1
- Date: Tue, 22 Nov 2022 15:23:53 GMT
- Title: A Graph-Based Method for Soccer Action Spotting Using Unsupervised
Player Classification
- Authors: Alejandro Cartas and Coloma Ballester and Gloria Haro
- Abstract summary: Action spotting involves understanding the dynamics of the game, the complexity of events, and the variation of video sequences.
In this work, we focus on the former by (a) identifying and representing the players, referees, and goalkeepers as nodes in a graph, and by (b) modeling their temporal interactions as sequences of graphs.
For the action spotting task, our method obtains an overall performance of 57.83% average-mAP by combining it with other audiovisual modalities.
- Score: 75.93186954061943
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Action spotting in soccer videos is the task of identifying the specific time
when a certain key action of the game occurs. Lately, it has received
considerable attention, and powerful methods have been introduced. Action spotting
involves understanding the dynamics of the game, the complexity of events, and
the variation of video sequences. Most approaches have focused on the latter,
given that their models exploit the global visual features of the sequences. In
this work, we focus on the former by (a) identifying and representing the
players, referees, and goalkeepers as nodes in a graph, and by (b) modeling
their temporal interactions as sequences of graphs. For the player
identification, or player classification task, we obtain an accuracy of 97.72%
in our annotated benchmark. For the action spotting task, our method obtains an
overall performance of 57.83% average-mAP by combining it with other
audiovisual modalities. This performance surpasses similar graph-based methods
and is competitive with computationally heavier methods. Code and data are
available at https://github.com/IPCV/soccer_action_spotting.
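The method rests on two structural choices: each frame becomes a graph whose nodes are the detected players, goalkeepers, and referees, and a clip becomes a temporal sequence of such graphs. Below is a minimal sketch of that representation; the node features (normalized pitch position plus a one-hot role from the unsupervised classifier), the distance-based edge rule, and the max_dist threshold are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def build_frame_graph(positions, roles, max_dist=0.25):
    """Build one graph for a single frame.

    positions: (N, 2) array of normalized (x, y) locations on the pitch.
    roles: length-N list of integer role labels (e.g. 0 = goalkeeper,
           1 = player, 2 = referee) from the unsupervised classifier.
    Returns node features and a symmetric adjacency matrix connecting
    people closer than max_dist (a hypothetical rule, not the paper's).
    """
    n = len(positions)
    role_onehot = np.eye(3)[roles]                     # one-hot role encoding
    nodes = np.concatenate([positions, role_onehot], axis=1)
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    adj = ((dist < max_dist) & ~np.eye(n, dtype=bool)).astype(float)
    return nodes, adj

def build_sequence(frames):
    """A clip is represented as a temporal sequence of per-frame graphs."""
    return [build_frame_graph(p, r) for p, r in frames]

# Toy usage: two frames with three detected people each.
frames = [
    (np.array([[0.10, 0.50], [0.40, 0.50], [0.90, 0.50]]), [1, 1, 2]),
    (np.array([[0.12, 0.50], [0.42, 0.48], [0.90, 0.52]]), [1, 1, 2]),
]
graphs = build_sequence(frames)
print(graphs[0][0].shape, graphs[0][1].shape)  # (3, 5) (3, 3)
```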
Related papers
- Deep learning for action spotting in association football videos [64.10841325879996]
The SoccerNet initiative organizes yearly challenges, during which participants from all around the world compete to achieve state-of-the-art performances.
This paper traces the history of action spotting in sports, from the creation of the task back in 2018, to the role it plays today in research and the sports industry.
arXiv Detail & Related papers (2024-10-02T07:56:15Z)
- Towards Active Learning for Action Spotting in Association Football Videos [59.84375958757395]
Analyzing football videos is challenging and requires identifying subtle and diverse spatio-temporal patterns.
Current algorithms face significant challenges when learning from limited annotated data.
We propose an active learning framework that selects the most informative video samples to be annotated next.
arXiv Detail & Related papers (2023-04-09T11:50:41Z)
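As a gloss on the active-learning entry above, here is a minimal selection step using least-confidence uncertainty sampling; this scoring rule is a common stand-in, not necessarily the informativeness criterion the paper proposes.

```python
import numpy as np

def select_for_annotation(probs, k):
    """Pick the k most informative unlabeled clips by least-confidence
    uncertainty: clips whose top predicted class probability is lowest.

    probs: (num_clips, num_classes) predicted class probabilities.
    Returns the indices of the k clips to send to annotators next.
    """
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Toy usage: 5 unlabeled clips, 3 action classes.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.60, 0.30, 0.10],
                  [0.34, 0.33, 0.33],
                  [0.80, 0.10, 0.10]])
print(select_for_annotation(probs, k=2))  # [3 1]
```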
- Event Detection in Football using Graph Convolutional Networks [0.0]
We show how to model the players and the ball in each frame of the video sequence as a graph.
We present the results for graph convolutional layers and pooling methods that can be used to model the temporal context present around each action.
arXiv Detail & Related papers (2023-01-24T14:52:54Z)
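The graph-convolutional entry above pairs per-frame player/ball graphs with pooling to model temporal context. A generic sketch using the standard normalized propagation rule of Kipf and Welling; the paper's exact layers and pooling may differ.

```python
import numpy as np

def gcn_layer(nodes, adj, weight):
    """One graph-convolution step with self-loops and symmetric degree
    normalization, followed by a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ nodes @ weight, 0.0)

def pool_frames(frame_vectors):
    """Average per-frame graph embeddings over a temporal window; mean
    pooling is one simple choice for the context around an action."""
    return np.mean(frame_vectors, axis=0)

# Toy usage: 4 nodes (players + ball) with 5-dim features.
rng = np.random.default_rng(0)
nodes = rng.normal(size=(4, 5))
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
h = gcn_layer(nodes, adj, rng.normal(size=(5, 8)))
print(h.shape)  # (4, 8)
```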
- Spotting Temporally Precise, Fine-Grained Events in Video [23.731838969934206]
We introduce the task of spotting temporally precise, fine-grained events in video.
Models must reason globally about the full time-scale of actions and locally to identify subtle frame-to-frame appearance and motion differences.
We propose E2E-Spot, a compact, end-to-end model that performs well on the precise spotting task and can be trained quickly on a single GPU.
arXiv Detail & Related papers (2022-07-20T22:15:07Z)
- Temporally-Aware Feature Pooling for Action Spotting in Soccer Broadcasts [86.56462654572813]
We focus our analysis on action spotting in soccer broadcasts, which consists in temporally localizing the main actions in a soccer game.
We propose a novel feature pooling method based on NetVLAD, dubbed NetVLAD++, that embeds temporally-aware knowledge.
We train and evaluate our methodology on the recent large-scale dataset SoccerNet-v2, reaching 53.4% Average-mAP for action spotting.
arXiv Detail & Related papers (2021-04-14T11:09:03Z)
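The core idea of NetVLAD++ above is temporally-aware pooling: context before and after the action is pooled separately rather than mixed. A minimal sketch with mean pooling as a stand-in for the learned NetVLAD clustering; the window length and feature size are illustrative.

```python
import numpy as np

def temporally_aware_pool(features, t_action):
    """Pool per-frame features into one clip descriptor while keeping the
    context before and after the action separate (the NetVLAD++ idea,
    here with mean pooling standing in for learned VLAD clustering)."""
    past = features[:t_action].mean(axis=0)
    future = features[t_action:].mean(axis=0)
    return np.concatenate([past, future])

window = np.random.randn(120, 512)  # e.g. a 60 s window of per-frame features
descriptor = temporally_aware_pool(window, t_action=60)
print(descriptor.shape)  # (1024,)
```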
- RMS-Net: Regression and Masking for Soccer Event Spotting [52.742046866220484]
We devise a lightweight and modular network for action spotting, which can simultaneously predict the event label and its temporal offset.
When tested on the SoccerNet dataset and using standard features, our full proposal exceeds the current state of the art by 3 Average-mAP points.
arXiv Detail & Related papers (2021-02-15T16:04:18Z)
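RMS-Net, summarized above, jointly predicts an event label and its temporal offset. A hedged sketch of such a two-head output follows; the shapes and the sigmoid offset parametrization are illustrative, not the paper's exact design.

```python
import numpy as np

def event_and_offset_heads(clip_feat, w_cls, b_cls, w_reg, b_reg):
    """Predict class probabilities and a temporal offset from one pooled
    clip descriptor (an illustrative two-head design)."""
    logits = w_cls @ clip_feat + b_cls
    logits -= logits.max()                       # numerically stable softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    offset = 1.0 / (1.0 + np.exp(-(w_reg @ clip_feat + b_reg)))  # in (0, 1)
    return probs, float(offset)

rng = np.random.default_rng(0)
feat = rng.normal(size=16)
probs, offset = event_and_offset_heads(
    feat, rng.normal(size=(17, 16)), np.zeros(17),  # 17 SoccerNet classes
    rng.normal(size=16), 0.0)
print(probs.shape, round(offset, 3))  # (17,) and an offset in (0, 1)
```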
- Improved Soccer Action Spotting using both Audio and Video Streams [3.4376560669160394]
We propose a study on combining audio and video information at different stages of deep neural network architectures.
We used the SoccerNet benchmark dataset, which contains annotated events for 500 soccer game videos from the Big Five European leagues.
We observed an average absolute improvement of the mean Average Precision (mAP) metric of 7.43% for the action classification task and of 4.19% for the action spotting task.
arXiv Detail & Related papers (2020-11-09T09:12:44Z)
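One point in the design space the audio-video study above explores is late fusion: concatenating the two modality descriptors just before classification. A minimal sketch with illustrative dimensions; where to fuse is exactly the axis the paper varies.

```python
import numpy as np

def late_fusion_classify(video_feat, audio_feat, w, b):
    """Concatenate the two modality descriptors and classify the result;
    fusing at earlier layers is the alternative the study compares."""
    fused = np.concatenate([video_feat, audio_feat])
    logits = w @ fused + b
    logits -= logits.max()                       # stable softmax
    return np.exp(logits) / np.exp(logits).sum()

rng = np.random.default_rng(0)
video_feat, audio_feat = rng.normal(size=512), rng.normal(size=128)
w, b = rng.normal(size=(17, 640)), np.zeros(17)
print(late_fusion_classify(video_feat, audio_feat, w, b).shape)  # (17,)
```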
- Hybrid Dynamic-static Context-aware Attention Network for Action Assessment in Long Videos [96.45804577283563]
We present a novel hybrid dynAmic-static Context-aware attenTION NETwork (ACTION-NET) for action assessment in long videos.
We not only learn the dynamic video information but also focus on the static postures of the detected athletes in specific frames.
We combine the features of the two streams to regress the final video score, supervised by ground-truth scores given by experts.
arXiv Detail & Related papers (2020-08-13T15:51:42Z)