Event Detection in Football using Graph Convolutional Networks
- URL: http://arxiv.org/abs/2301.10052v1
- Date: Tue, 24 Jan 2023 14:52:54 GMT
- Title: Event Detection in Football using Graph Convolutional Networks
- Authors: Aditya Sangram Singh Rana
- Abstract summary: We show how to model the players and the ball in each frame of the video sequence as a graph.
We present the results for graph convolutional layers and pooling methods that can be used to model the temporal context present around each action.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The massive growth of data collection in sports has opened numerous avenues
for professional teams and media houses to gain insights from this data. The
data collected includes per frame player and ball trajectories, and event
annotations such as passes, fouls, cards, goals, etc. Graph Convolutional
Networks (GCNs) have recently been employed to process this highly unstructured
tracking data, which can otherwise be difficult to model because of a lack of
clarity on how to order players in a sequence and how to handle missing objects
of interest. In this thesis, we focus on the goal of automatic event detection
from football videos. We show how to model the players and the ball in each
frame of the video sequence as a graph, and present the results for graph
convolutional layers and pooling methods that can be used to model the temporal
context present around each action.
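For illustration, the sketch below builds a graph for one frame from player and ball coordinates, applies a single graph convolution, and mean-pools node and frame embeddings over a short temporal window. It is a minimal sketch in PyTorch, not the architecture reported in the thesis; the connection radius, node features, layer sizes, and pooling operators are assumptions.

```python
import torch
import torch.nn as nn

def frame_to_graph(positions, radius=10.0):
    """Build a graph for one frame from object coordinates.

    positions: (N, 2) tensor of pitch coordinates for the players and the ball
    visible in the frame (N varies when objects are missing).
    Returns node features X and a symmetrically normalized adjacency A_hat.
    """
    x = positions  # node features: raw coordinates (velocity, team id, etc. could be appended)
    dist = torch.cdist(positions, positions)
    adj = (dist < radius).float()                 # connect nearby objects; the diagonal gives self-loops
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    a_hat = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
    return x, a_hat

class GCNLayer(nn.Module):
    """One graph convolution: H = ReLU(A_hat X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return torch.relu(a_hat @ self.linear(x))

# Embed each frame in a temporal window around a candidate action,
# then pool over nodes (per frame) and over time (per clip).
gcn = GCNLayer(in_dim=2, out_dim=32)
window = [torch.rand(23, 2) * 100.0 for _ in range(16)]      # 16 frames: 22 players + ball
frame_embeddings = []
for positions in window:
    x, a_hat = frame_to_graph(positions)
    frame_embeddings.append(gcn(x, a_hat).mean(dim=0))       # node pooling per frame
clip_embedding = torch.stack(frame_embeddings).mean(dim=0)   # temporal pooling over the window
# clip_embedding would feed a classifier over event classes (pass, foul, goal, ...).
```

Because each frame is represented as a graph, no player ordering or padding for missing objects is needed, which is the motivation given in the abstract.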
Related papers
- Local-Global Information Interaction Debiasing for Dynamic Scene Graph Generation [51.92419880088668]
We propose DynSGG-MTL, a dynamic scene graph generation model based on multi-task learning that combines local interaction information with global human-action interaction information.
Long-term human actions supervise the model to generate multiple scene graphs that conform to the global constraints and keep it from failing to learn the tail predicates.
arXiv Detail & Related papers (2023-08-10T01:24:25Z)
- Towards Active Learning for Action Spotting in Association Football Videos [59.84375958757395]
Analyzing football videos is challenging and requires identifying subtle and diverse temporal patterns.
Current algorithms face significant challenges when learning from limited annotated data.
We propose an active learning framework that selects the most informative video samples to be annotated next.
arXiv Detail & Related papers (2023-04-09T11:50:41Z)
- Infusing Commonsense World Models with Graph Knowledge [89.27044249858332]
We study the setting of generating narratives in an open world text adventure game.
A graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions.
arXiv Detail & Related papers (2023-01-13T19:58:27Z)
- A Graph-Based Method for Soccer Action Spotting Using Unsupervised Player Classification [75.93186954061943]
Action spotting involves understanding the dynamics of the game, the complexity of events, and the variation of video sequences.
In this work, we focus on the former by (a) identifying and representing the players, referees, and goalkeepers as nodes in a graph, and by (b) modeling their temporal interactions as sequences of graphs.
For the player identification task, our method obtains an overall performance of 57.83% average-mAP by combining it with other modalities.
arXiv Detail & Related papers (2022-11-22T15:23:53Z)
- Graph Neural Networks to Predict Sports Outcomes [0.0]
We introduce a sport-agnostic graph-based representation of game states.
We then use our proposed graph representation as input to graph neural networks to predict sports outcomes.
arXiv Detail & Related papers (2022-07-28T14:45:02Z)
- SoccerNet-Tracking: Multiple Object Tracking Dataset and Benchmark in Soccer Videos [62.686484228479095]
We propose a novel dataset for multiple object tracking composed of 200 sequences of 30s each.
The dataset is fully annotated with bounding boxes and tracklet IDs.
Our analysis shows that multiple player, referee and ball tracking in soccer videos is far from being solved.
arXiv Detail & Related papers (2022-04-14T12:22:12Z)
- Automatic event detection in football using tracking data [0.0]
We propose a framework to automatically extract football events using tracking data, namely the coordinates of all players and the ball.
Our approach consists of two models; the first, a possession model, determines which player was in possession of the ball at each time, as well as the distinct player configurations in the time intervals where the ball is not in play (a minimal possession heuristic is sketched after this entry).
arXiv Detail & Related papers (2022-02-01T23:20:40Z)
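The summary above leaves the possession model unspecified; as an illustration only, not the paper's actual model, a nearest-player heuristic with a distance threshold can assign possession per frame of tracking data:

```python
import numpy as np

def possession(ball_xy, player_xy, max_dist=2.0):
    """Assign ball possession for one frame of tracking data.

    ball_xy: (2,) ball coordinates; player_xy: (N, 2) player coordinates.
    Returns the index of the possessing player, or None if nobody is within
    `max_dist` metres (ball in transit or out of play). Threshold is illustrative.
    """
    dists = np.linalg.norm(player_xy - ball_xy, axis=1)
    nearest = int(np.argmin(dists))
    return nearest if dists[nearest] <= max_dist else None

# Example frame: the ball sits closest to player index 3.
players = np.array([[10.0, 5.0], [40.0, 30.0], [52.5, 34.0], [52.3, 33.9]])
print(possession(np.array([52.2, 33.8]), players))  # -> 3
```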
- RMS-Net: Regression and Masking for Soccer Event Spotting [52.742046866220484]
We devise a lightweight and modular network for action spotting that simultaneously predicts the event label and its temporal offset (a minimal two-head sketch follows this entry).
When tested on the SoccerNet dataset and using standard features, our full proposal exceeds the current state of the art by 3 Average-mAP points.
arXiv Detail & Related papers (2021-02-15T16:04:18Z)
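As a minimal sketch of the joint prediction idea, assuming a shared clip encoder feeding a classification head and an offset-regression head (not RMS-Net's actual architecture; all dimensions are illustrative):

```python
import torch
import torch.nn as nn

class TwoHeadSpotter(nn.Module):
    """Shared backbone with a classification head (event label, including a
    background class) and a regression head (temporal offset of the event
    inside the clip). Feature and class dimensions are illustrative."""
    def __init__(self, feat_dim=512, num_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.cls_head = nn.Linear(256, num_classes)   # event label logits
        self.reg_head = nn.Linear(256, 1)             # normalized temporal offset

    def forward(self, clip_feat):                     # clip_feat: (batch, feat_dim)
        h = self.backbone(clip_feat)
        return self.cls_head(h), self.reg_head(h).squeeze(-1)

logits, offset = TwoHeadSpotter()(torch.rand(4, 512))
# Training could combine a cross-entropy loss on `logits` with a regression
# loss on `offset`, e.g. masked for clips that contain no event.
```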
- Automatic Pass Annotation from Soccer VideoStreams Based on Object Detection and LSTM [6.87782863484826]
PassNet is a method to recognize the most frequent events in soccer, i.e., passes, from video streams.
Our experiments show a significant improvement in the accuracy of pass detection.
PassNet is the first step towards an automated event annotation system.
arXiv Detail & Related papers (2020-07-13T16:14:41Z)
- Event detection in coarsely annotated sports videos via parallel multi receptive field 1D convolutions [14.30009544149561]
In problems such as sports video analytics, it is difficult to obtain accurate frame-level annotations and exact event durations.
We propose the task of event detection in coarsely annotated videos.
We introduce a multi-tower temporal convolutional network architecture for the proposed task; the parallel-tower idea is sketched below.
arXiv Detail & Related papers (2020-04-13T19:51:25Z)
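The parallel multi receptive field idea from the entry above can be sketched as a set of 1D convolution towers with different kernel sizes over a sequence of per-frame features; the kernel sizes, widths, and prediction head below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiReceptiveField1D(nn.Module):
    """Parallel 1D convolution towers with different kernel sizes.

    Each tower sees a different temporal receptive field over a sequence of
    per-frame features; tower outputs are concatenated and mapped to per-frame
    event scores. All sizes here are illustrative.
    """
    def __init__(self, in_dim, hidden=64, kernel_sizes=(3, 9, 27), num_classes=5):
        super().__init__()
        self.towers = nn.ModuleList([
            nn.Sequential(nn.Conv1d(in_dim, hidden, k, padding=k // 2), nn.ReLU())
            for k in kernel_sizes
        ])
        self.head = nn.Conv1d(hidden * len(kernel_sizes), num_classes, kernel_size=1)

    def forward(self, x):                  # x: (batch, in_dim, time)
        out = torch.cat([tower(x) for tower in self.towers], dim=1)
        return self.head(out)              # per-frame class scores: (batch, num_classes, time)

scores = MultiReceptiveField1D(in_dim=512)(torch.rand(2, 512, 120))
```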