Semi-Supervised Training to Improve Player and Ball Detection in Soccer
- URL: http://arxiv.org/abs/2204.06859v1
- Date: Thu, 14 Apr 2022 10:20:56 GMT
- Title: Semi-Supervised Training to Improve Player and Ball Detection in Soccer
- Authors: Renaud Vandeghen, Anthony Cioppa, Marc Van Droogenbroeck
- Abstract summary: We present a novel semi-supervised method to train a network based on a labeled image dataset by leveraging a large unlabeled dataset of soccer broadcast videos.
We show that including unlabeled data in the training process substantially improves the performance of the detection network compared with training on the labeled data alone.
- Score: 11.376125584750548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate player and ball detection has become increasingly important in
recent years for sport analytics. As most state-of-the-art methods rely on
training deep learning networks in a supervised fashion, they require huge
amounts of annotated data, which are rarely available. In this paper, we
present a novel generic semi-supervised method to train a network based on a
labeled image dataset by leveraging a large unlabeled dataset of soccer
broadcast videos. More precisely, we design a teacher-student approach in which
the teacher produces surrogate annotations on the unlabeled data to be used
later for training a student which has the same architecture as the teacher.
Furthermore, we introduce three training loss parametrizations that allow the
student to doubt the predictions of the teacher during training depending on
the proposal confidence score. We show that including unlabeled data in the
training process substantially improves the performance of the detection
network compared with training on the labeled data alone. Finally, we provide a
thorough performance study including different proportions of labeled and
unlabeled data, and establish the first benchmark on the new SoccerNet-v3
detection task, with an mAP of 52.3%. Our code is available at
https://github.com/rvandeghen/SST.
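As a concrete illustration of the teacher-student scheme described above, here is a minimal PyTorch sketch of confidence-weighted pseudo-label training. The names (`confidence_weight`, `pseudo_label_loss`) and the soft linear ramp are illustrative assumptions, not the authors' code: the paper defines three distinct loss parametrizations, and this single weighting merely stands in for them (the linked repository contains the actual implementation).

```python
import torch

def confidence_weight(scores: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # Soft "doubt" weighting (illustrative assumption): pseudo-boxes scored
    # below the threshold are ignored; above it, the weight grows linearly
    # with the teacher's confidence.
    return torch.clamp((scores - threshold) / (1.0 - threshold), min=0.0)

def pseudo_label_loss(per_box_loss: torch.Tensor, teacher_scores: torch.Tensor) -> torch.Tensor:
    # Weight the student's per-box detection loss by the teacher's confidence,
    # so the student can "doubt" uncertain surrogate annotations.
    w = confidence_weight(teacher_scores)
    return (w * per_box_loss).sum() / w.sum().clamp(min=1e-6)

# Toy example: the 0.30-confidence proposal contributes nothing to the loss.
scores = torch.tensor([0.95, 0.60, 0.30])      # teacher proposal confidences
box_losses = torch.tensor([0.20, 0.50, 1.40])  # student loss per pseudo-box
print(pseudo_label_loss(box_losses, scores))
```

In the toy example, the low-confidence proposal is fully discounted, which is exactly the sense in which the student "doubts" the teacher during training.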
Related papers
- SoccerKDNet: A Knowledge Distillation Framework for Action Recognition in Soccer Videos [3.1583465114791105]
We propose a novel end-to-end knowledge distillation based transfer learning network pre-trained on the Kinetics400 dataset.
We also introduce a new dataset named SoccerDB1, containing 448 videos of players playing soccer, organized into 4 diverse classes.
arXiv Detail & Related papers (2023-07-15T10:43:24Z)
- Hierarchical Supervision and Shuffle Data Augmentation for 3D Semi-Supervised Object Detection [90.32180043449263]
State-of-the-art 3D object detectors are usually trained on large-scale datasets with high-quality 3D annotations.
A natural remedy is to adopt semi-supervised learning (SSL) by leveraging a limited amount of labeled samples and abundant unlabeled samples.
This paper introduces a novel approach of Hierarchical Supervision and Shuffle Data Augmentation (HSSDA), which is a simple yet effective teacher-student framework.
arXiv Detail & Related papers (2023-04-04T02:09:32Z)
- Pushing the Envelope for Depth-Based Semi-Supervised 3D Hand Pose Estimation with Consistency Training [2.6954666679827137]
We propose a semi-supervised method to significantly reduce the dependence on labeled training data.
The proposed method consists of two identical networks trained jointly: a teacher network and a student network.
Experiments demonstrate that the proposed method outperforms the state-of-the-art semi-supervised methods by large margins.
arXiv Detail & Related papers (2023-03-27T12:32:49Z)
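The summary above does not say how the two identical networks are coupled; a common instantiation of such consistency training is the mean-teacher setup, sketched below, where the teacher is an exponential moving average (EMA) of the student and a consistency loss ties their predictions on perturbed inputs. The toy model, the noise perturbation, and the decay value are assumptions, not this paper's actual method.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.99) -> None:
    # Mean-teacher style: teacher weights track an exponential moving
    # average of the student's weights.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

student = nn.Linear(64, 63)               # toy stand-in, e.g. 21 joints x 3 coords
teacher = copy.deepcopy(student)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(8, 64)                    # stand-in for unlabeled depth features
x_noisy = x + 0.05 * torch.randn_like(x)  # perturbed view for the consistency loss

loss = F.mse_loss(student(x_noisy), teacher(x).detach())
opt.zero_grad()
loss.backward()
opt.step()
ema_update(teacher, student)
```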
- Generative Conversational Networks [67.13144697969501]
We propose a framework called Generative Conversational Networks, in which conversational agents learn to generate their own labelled training data.
We show an average improvement of 35% in intent detection and 21% in slot tagging over a baseline model trained from the seed data.
arXiv Detail & Related papers (2021-06-15T23:19:37Z)
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both labels and pseudo labels to generate final feature embeddings.
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
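SLADE's three steps (teacher on labeled data, pseudo labels for unlabeled data, student on both) follow the same self-training pattern as the soccer paper. Below is a toy, runnable sketch under loud assumptions: linear models and a classification loss stand in for SLADE's actual embedding networks and retrieval losses.

```python
import torch
import torch.nn as nn

def fit(model: nn.Module, xs: torch.Tensor, ys: torch.Tensor, steps: int = 100) -> None:
    # Generic supervised fitting loop (stand-in for the real training recipe).
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(xs), ys).backward()
        opt.step()

labeled_x, labeled_y = torch.randn(32, 16), torch.randint(0, 4, (32,))
unlabeled_x = torch.randn(128, 16)

teacher = nn.Linear(16, 4)
fit(teacher, labeled_x, labeled_y)                 # 1. teacher on labeled data

with torch.no_grad():
    pseudo_y = teacher(unlabeled_x).argmax(dim=1)  # 2. pseudo labels for unlabeled data

student = nn.Linear(16, 4)                         # 3. student on labels + pseudo labels
fit(student, torch.cat([labeled_x, unlabeled_x]),
    torch.cat([labeled_y, pseudo_y]))
```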
- PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding [107.02479689909164]
In this work, we aim at facilitating research on 3D representation learning.
We measure the effect of unsupervised pre-training on a large source set of 3D scenes.
arXiv Detail & Related papers (2020-07-21T17:59:22Z)
- Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, leveraging only 20-30 labeled samples per class per task for training and validation, can perform within 3% of fully supervised pre-trained language models.
arXiv Detail & Related papers (2020-06-27T08:13:58Z)
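The entry above says uncertainty estimates guide self-training but not how they are obtained; one widely used estimator is Monte Carlo dropout, sketched below. The model architecture, the variance-based uncertainty score, and the median selection rule are illustrative assumptions rather than this paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 3))

def mc_dropout_predict(x: torch.Tensor, passes: int = 10):
    # Keep dropout active at inference; the spread over stochastic forward
    # passes serves as a per-example uncertainty estimate.
    model.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(passes)])
    return probs.mean(0), probs.var(0).sum(-1)  # mean prediction, uncertainty

x_unlabeled = torch.randn(16, 32)
mean_probs, uncertainty = mc_dropout_predict(x_unlabeled)
keep = uncertainty < uncertainty.median()       # self-train on confident examples
pseudo_labels = mean_probs.argmax(dim=-1)[keep]
```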
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep training on the resulting large dataset practical, we propose to apply a dataset distillation strategy that compresses the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)