Towards Universal Soccer Video Understanding
- URL: http://arxiv.org/abs/2412.01820v3
- Date: Mon, 24 Mar 2025 14:22:47 GMT
- Title: Towards Universal Soccer Video Understanding
- Authors: Jiayuan Rao, Haoning Wu, Hao Jiang, Ya Zhang, Yanfeng Wang, Weidi Xie
- Abstract summary: This paper aims to develop a comprehensive multi-modal framework for soccer understanding. We introduce SoccerReplay-1988, the largest multi-modal soccer dataset to date, featuring videos and detailed annotations from 1,988 complete matches. We present an advanced soccer-specific visual encoder, MatchVision, which leverages spatiotemporal information across soccer videos and excels in various downstream tasks.
- Score: 58.889409980618396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a globally celebrated sport, soccer has attracted widespread interest from fans all over the world. This paper aims to develop a comprehensive multi-modal framework for soccer video understanding. Specifically, we make the following contributions in this paper: (i) we introduce SoccerReplay-1988, the largest multi-modal soccer dataset to date, featuring videos and detailed annotations from 1,988 complete matches, with an automated annotation pipeline; (ii) we present an advanced soccer-specific visual encoder, MatchVision, which leverages spatiotemporal information across soccer videos and excels in various downstream tasks; (iii) we conduct extensive experiments and ablation studies on event classification, commentary generation, and multi-view foul recognition. MatchVision demonstrates state-of-the-art performance on all of them, substantially outperforming existing models, which highlights the superiority of our proposed data and model. We believe that this work will offer a standard paradigm for sports understanding research.
Related papers
- TimeSoccer: An End-to-End Multimodal Large Language Model for Soccer Commentary Generation [13.835968474349034]
TimeSoccer is the first end-to-end soccer MLLM for Single-anchor Dense Video Captioning (SDVC) in full-match soccer videos.
TimeSoccer jointly predicts timestamps and generates captions in a single pass, enabling global context modeling.
MoFA-Select is a training-free, motion-aware frame compression module that adaptively selects representative frames.
arXiv Detail & Related papers (2025-04-24T08:27:42Z) - SMGDiff: Soccer Motion Generation using diffusion probabilistic models [44.54275548434197]
Soccer is a globally renowned sport with significant applications in video games and VR/AR.
In this paper, we introduce SMGDiff, a novel two-stage framework for generating real-time and user-controllable soccer motions.
Our key idea is to integrate real-time character control with a powerful diffusion-based generative model, ensuring high-quality and diverse output motion.
arXiv Detail & Related papers (2024-11-25T09:25:53Z) - Deep learning for action spotting in association football videos [64.10841325879996]
The SoccerNet initiative organizes yearly challenges, during which participants from all around the world compete to achieve state-of-the-art performances.
This paper traces the history of action spotting in sports, from the creation of the task back in 2018, to the role it plays today in research and the sports industry.
arXiv Detail & Related papers (2024-10-02T07:56:15Z) - Deep Understanding of Soccer Match Videos [20.783415560412003]
Soccer is one of the most popular sports worldwide, with live broadcasts frequently available for major matches.
Our system can detect key objects such as soccer balls, players and referees.
It also tracks the movements of players and the ball, recognizes player numbers, classifies scenes, and identifies highlights such as goal kicks.
arXiv Detail & Related papers (2024-07-11T05:54:13Z) - MatchTime: Towards Automatic Soccer Game Commentary Generation [52.431010585268865]
We consider constructing an automatic soccer game commentary model to improve the audiences' viewing experience.
First, observing the prevalent video-text misalignment in existing datasets, we manually annotate timestamps for 49 matches.
Second, we propose a multi-modal temporal alignment pipeline to automatically correct and filter the existing dataset at scale.
Third, based on our curated dataset, we train an automatic commentary generation model, named MatchVoice.
arXiv Detail & Related papers (2024-06-26T17:57:25Z) - A Survey on Video Action Recognition in Sports: Datasets, Methods and
Applications [60.3327085463545]
We present a survey on video action recognition for sports analytics.
We introduce more than ten types of sports, including team sports, such as football, basketball, volleyball, hockey and individual sports, such as figure skating, gymnastics, table tennis, diving and badminton.
We develop a toolbox using PaddlePaddle, which supports football, basketball, table tennis and figure skating action recognition.
arXiv Detail & Related papers (2022-06-02T13:19:36Z) - A Multi-stage deep architecture for summary generation of soccer videos [11.41978608521222]
We propose a method to generate the summary of a soccer match exploiting both the audio and the event metadata.
The results show that our method can detect the actions of the match, identify which of these actions should belong to the summary and then propose multiple candidate summaries.
arXiv Detail & Related papers (2022-05-02T07:26:35Z) - SoccerNet-Tracking: Multiple Object Tracking Dataset and Benchmark in
Soccer Videos [62.686484228479095]
We propose a novel dataset for multiple object tracking composed of 200 sequences of 30s each.
The dataset is fully annotated with bounding boxes and tracklet IDs.
Our analysis shows that multiple player, referee and ball tracking in soccer videos is far from being solved.
arXiv Detail & Related papers (2022-04-14T12:22:12Z) - Feature Combination Meets Attention: Baidu Soccer Embeddings and
Transformer based Temporal Detection [3.7709686875144337]
We present a two-stage paradigm to detect what and when events happen in soccer broadcast videos.
Specifically, we fine-tune multiple action recognition models on soccer data to extract high-level semantic features.
This approach achieved state-of-the-art performance in both tasks, i.e., action spotting and replay grounding, in the SoccerNet-v2 Challenge.
arXiv Detail & Related papers (2021-06-28T08:00:21Z) - Temporally-Aware Feature Pooling for Action Spotting in Soccer
Broadcasts [86.56462654572813]
We focus our analysis on action spotting in soccer broadcasts, which consists of temporally localizing the main actions in a soccer game.
We propose a novel feature pooling method based on NetVLAD, dubbed NetVLAD++, that embeds temporally-aware knowledge.
We train and evaluate our methodology on the recent large-scale dataset SoccerNet-v2, reaching 53.4% Average-mAP for action spotting.
arXiv Detail & Related papers (2021-04-14T11:09:03Z) - SoccerNet-v2: A Dataset and Benchmarks for Holistic Understanding of
Broadcast Soccer Videos [71.72665910128975]
SoccerNet-v2 is a novel large-scale corpus of manual annotations for the SoccerNet video dataset.
We release around 300k annotations within SoccerNet's 500 untrimmed broadcast soccer videos.
We extend current tasks in the realm of soccer to include action spotting and camera shot segmentation with boundary detection.
arXiv Detail & Related papers (2020-11-26T16:10:16Z)