A New Action Recognition Framework for Video Highlights Summarization in
Sporting Events
- URL: http://arxiv.org/abs/2012.00253v1
- Date: Tue, 1 Dec 2020 04:14:40 GMT
- Title: A New Action Recognition Framework for Video Highlights Summarization in
Sporting Events
- Authors: Cheng Yan, Xin Li, Guoqiang Li
- Abstract summary: We present a framework to automatically clip sports video streams using a three-level prediction algorithm built on two classical open-source structures, i.e., YOLO-v3 and OpenPose.
It is found that, with a modest amount of sports video training data, our methodology can clip sports activity highlights accurately.
- Score: 9.870478438166288
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To date, machine learning for human action recognition in video has been
widely implemented in sports activities. Although some studies have been
successful in the past, precision is still the most significant concern. In
this study, we present a high-accuracy framework to automatically clip the
sports video stream by using a three-level prediction algorithm based on two
classical open-source structures, i.e., YOLO-v3 and OpenPose. It is found that,
by using a modest amount of sports video training data, our methodology can
clip sports activity highlights accurately. Compared with previous systems, our
methodology shows advantages in accuracy. This study may serve as a new
clipping system that extends the potential applications of video summarization
in the sports field, as well as facilitating the development of match analysis
systems.
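The abstract does not spell out what the three prediction levels are; the sketch below is a minimal, hypothetical reading in which level one detects players (as YOLO-v3 would), level two extracts body keypoints (as OpenPose would), and level three classifies the pose to decide clip boundaries. The functions detect_players, estimate_pose, and classify_action are stand-in stubs for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical placeholder wrappers; in practice these would call
# YOLO-v3 (person detection) and OpenPose (keypoint estimation).
def detect_players(frame) -> List[Tuple[int, int, int, int]]:
    """Level 1: return person bounding boxes for one frame (stubbed)."""
    return [(0, 0, 100, 200)]

def estimate_pose(frame, box) -> List[Tuple[float, float]]:
    """Level 2: return body keypoints inside a detected box (stubbed)."""
    return [(0.5, 0.5)] * 18  # OpenPose-style 18-keypoint layout

def classify_action(keypoints) -> bool:
    """Level 3: decide whether the pose looks like a highlight action (stubbed)."""
    return len(keypoints) > 0

@dataclass
class Clip:
    start: int  # first frame index of the highlight
    end: int    # last frame index of the highlight

def clip_highlights(frames, min_len: int = 5) -> List[Clip]:
    """Run the three-level prediction per frame and merge positive runs into clips."""
    flags = []
    for frame in frames:
        boxes = detect_players(frame)                         # level 1
        poses = [estimate_pose(frame, b) for b in boxes]      # level 2
        flags.append(any(classify_action(p) for p in poses))  # level 3
    clips, start = [], None
    for i, flag in enumerate(flags):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                clips.append(Clip(start, i - 1))
            start = None
    if start is not None and len(flags) - start >= min_len:
        clips.append(Clip(start, len(flags) - 1))
    return clips

if __name__ == "__main__":
    dummy_frames = [None] * 30  # stand-in for decoded video frames
    print(clip_highlights(dummy_frames))
```

Merging positive frames into clips only when a run lasts at least min_len frames is one simple way to avoid spurious single-frame highlights; the actual clipping rule used by the authors may differ.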
Related papers
- Deep learning for action spotting in association football videos [64.10841325879996]
The SoccerNet initiative organizes yearly challenges, during which participants from all around the world compete to achieve state-of-the-art performances.
This paper traces the history of action spotting in sports, from the creation of the task back in 2018, to the role it plays today in research and the sports industry.
arXiv Detail & Related papers (2024-10-02T07:56:15Z) - OSL-ActionSpotting: A Unified Library for Action Spotting in Sports Videos [56.393522913188704]
We introduce OSL-ActionSpotting, a Python library that unifies different action spotting algorithms to streamline research and applications in sports video analytics.
We successfully integrated three cornerstone action spotting methods into OSL-ActionSpotting, achieving performance metrics that match those of the original, disparate implementations.
arXiv Detail & Related papers (2024-07-01T13:17:37Z) - Benchmarking Badminton Action Recognition with a New Fine-Grained Dataset [16.407837909069073]
We introduce the VideoBadminton dataset derived from high-quality badminton footage.
The introduction of VideoBadminton could not only serve for badminton action recognition but also provide a dataset for recognizing fine-grained actions.
arXiv Detail & Related papers (2024-03-19T02:52:06Z) - Building an Open-Vocabulary Video CLIP Model with Better Architectures,
Optimization and Data [102.0069667710562]
This paper presents Open-VCLIP++, a framework that adapts CLIP to a strong zero-shot video classifier.
We demonstrate that training Open-VCLIP++ is tantamount to continual learning with zero historical data.
Our approach is evaluated on three widely used action recognition datasets.
arXiv Detail & Related papers (2023-10-08T04:46:43Z) - Towards Active Learning for Action Spotting in Association Football
Videos [59.84375958757395]
Analyzing football videos is challenging and requires identifying subtle and diverse spatio-temporal patterns.
Current algorithms face significant challenges when learning from limited annotated data.
We propose an active learning framework that selects the most informative video samples to be annotated next.
arXiv Detail & Related papers (2023-04-09T11:50:41Z) - Sports Video Analysis on Large-Scale Data [10.24207108909385]
This paper investigates the modeling of automated machine description on sports video.
We propose a novel large-scale NBA dataset for Sports Video Analysis (NSVA) with a focus on captioning.
arXiv Detail & Related papers (2022-08-09T16:59:24Z) - A Survey on Video Action Recognition in Sports: Datasets, Methods and
Applications [60.3327085463545]
We present a survey on video action recognition for sports analytics.
We introduce more than ten types of sports, including team sports such as football, basketball, volleyball, and hockey, and individual sports such as figure skating, gymnastics, table tennis, diving, and badminton.
We develop a toolbox using PaddlePaddle, which supports football, basketball, table tennis and figure skating action recognition.
arXiv Detail & Related papers (2022-06-02T13:19:36Z) - Sports Video: Fine-Grained Action Detection and Classification of Table
Tennis Strokes from Videos for MediaEval 2021 [0.0]
This task tackles fine-grained action detection and classification from videos.
The focus is on recordings of table tennis games.
This work aims at creating tools for sports coaches and players in order to analyze sports performance.
arXiv Detail & Related papers (2021-12-16T10:17:59Z) - Hybrid Dynamic-static Context-aware Attention Network for Action
Assessment in Long Videos [96.45804577283563]
We present a novel hybrid dynAmic-static Context-aware attenTION NETwork (ACTION-NET) for action assessment in long videos.
We learn not only the dynamic information of the video but also focus on the static postures of the detected athletes in specific frames.
We combine the features of the two streams to regress the final video score, supervised by ground-truth scores given by experts; a minimal sketch of this two-stream fusion idea follows this list.
arXiv Detail & Related papers (2020-08-13T15:51:42Z)
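The ACTION-NET entry above describes fusing a dynamic stream with a static-posture stream and regressing an expert-given score. The sketch below illustrates that general two-stream fusion and score-regression idea in PyTorch with assumed feature dimensions; it is not the ACTION-NET architecture itself.

```python
import torch
import torch.nn as nn

class TwoStreamScoreRegressor(nn.Module):
    """Illustrative two-stream fusion: dynamic clip features and static
    posture features are encoded separately, concatenated, and regressed
    to a single quality score (dimensions are assumptions, not ACTION-NET's)."""
    def __init__(self, dyn_dim=1024, sta_dim=512, hidden=256):
        super().__init__()
        self.dyn_enc = nn.Sequential(nn.Linear(dyn_dim, hidden), nn.ReLU())
        self.sta_enc = nn.Sequential(nn.Linear(sta_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)  # fused features -> score

    def forward(self, dyn_feat, sta_feat):
        fused = torch.cat([self.dyn_enc(dyn_feat), self.sta_enc(sta_feat)], dim=-1)
        return self.head(fused).squeeze(-1)

# Supervised by expert ground-truth scores, as in the summary above.
model = TwoStreamScoreRegressor()
dyn = torch.randn(4, 1024)   # e.g. pooled clip-level motion features
sta = torch.randn(4, 512)    # e.g. pooled static posture features
target = torch.rand(4)       # expert scores (dummy values here)
loss = nn.functional.mse_loss(model(dyn, sta), target)
loss.backward()
```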