Towards cumulative race time regression in sports: I3D ConvNet transfer
learning in ultra-distance running events
- URL: http://arxiv.org/abs/2208.11191v1
- Date: Tue, 23 Aug 2022 20:53:01 GMT
- Title: Towards cumulative race time regression in sports: I3D ConvNet transfer
learning in ultra-distance running events
- Authors: David Freire-Obregón, Javier Lorenzo-Navarro, Oliverio J. Santana,
Daniel Hernández-Sosa, Modesto Castrillón-Santana
- Abstract summary: We propose regressing an ultra-distance runner's cumulative race time (CRT) by using only a few seconds of footage as input.
We show that the resulting neural network can provide remarkable performance for short input footage.
- Score: 1.4859458229776121
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting an athlete's performance based on short footage is highly
challenging. Performance prediction requires deep domain knowledge and enough
evidence to infer an appropriate quality assessment. Sports pundits can often
infer this kind of information in real-time. In this paper, we propose
regressing an ultra-distance runner's cumulative race time (CRT), i.e., the time
the runner has been in action since the race start, by using only a few seconds
of footage as input. We modified the I3D ConvNet backbone slightly and trained
a newly added regressor for that purpose. We use appropriate pre-processing of
the visual input to enable transfer learning from a specific runner. We show
that the resulting neural network achieves remarkable performance on short
input footage: a mean absolute error of 18.5 minutes in estimating the CRT
for runners who have been in action for 8 to 20 hours. Our methodology
has several favorable properties: it does not require a human expert to provide
any insight, it can be used at any moment during the race by just observing a
runner, and it can inform the race staff about a runner at any given time.
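The paper attaches a newly trained regressor to a slightly modified I3D backbone. As a minimal, hypothetical sketch of that regression step only, the snippet below stands in random 1024-d features for I3D clip embeddings and uses a closed-form ridge regressor in place of the paper's trained neural head; all names, dimensions, and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def i3d_embed(n_clips: int, dim: int = 1024) -> np.ndarray:
    """Stand-in for I3D features extracted from a few seconds of footage."""
    return rng.normal(size=(n_clips, dim))

def fit_crt_head(X: np.ndarray, y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def predict_crt(w: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Map clip embeddings to cumulative race time (minutes)."""
    return X @ w

# Synthetic training set: embeddings with a noiseless linear CRT target.
X_train = i3d_embed(200)
true_w = rng.normal(size=1024)
y_train = X_train @ true_w  # CRT targets in minutes (made up)

w = fit_crt_head(X_train, y_train)
mae = np.abs(predict_crt(w, X_train) - y_train).mean()
print(f"train MAE: {mae:.2f} min")
```

The actual system trains the regressor end-to-end on real footage; this sketch only shows the shape of the problem, i.e., clip embedding in, race time out.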
Related papers
- Refining Pre-Trained Motion Models [56.18044168821188]
We take on the challenge of improving state-of-the-art supervised models with self-supervised training.
We focus on obtaining a "clean" training signal from real-world unlabelled video.
We show that our method yields reliable gains over fully-supervised methods in real videos.
arXiv Detail & Related papers (2024-01-01T18:59:33Z)
- Towards Efficient Record and Replay: A Case Study in WeChat [24.659458527088773]
We introduce WeReplay, a lightweight image-based approach that dynamically adjusts inter-event time based on the GUI rendering state.
Our evaluation shows that our model achieves 92.1% precision and 93.3% recall in discerning GUI rendering states in the WeChat app.
arXiv Detail & Related papers (2023-08-13T01:02:00Z)
- An X3D Neural Network Analysis for Runner's Performance Assessment in a Wild Sporting Environment [1.4859458229776121]
We present a transfer learning analysis on a sporting environment of the expanded 3D (X3D) neural networks.
Inspired by action quality assessment methods in the literature, our method uses an action recognition network to estimate athletes' cumulative race time.
X3D achieves state-of-the-art performance, requiring almost seven times less memory than previous work while delivering better precision.
arXiv Detail & Related papers (2023-07-22T23:15:47Z)
- CLIP-ReIdent: Contrastive Training for Player Re-Identification [0.0]
We investigate whether it is possible to transfer the outstanding zero-shot performance of pre-trained CLIP models to the domain of player re-identification.
Unlike previous work, our approach is entirely class-agnostic and benefits from large-scale pre-training.
arXiv Detail & Related papers (2023-03-21T13:55:27Z)
- Learning Neural Volumetric Representations of Dynamic Humans in Minutes [49.10057060558854]
We propose a novel method for learning neural volumetric videos of dynamic humans from sparse view videos in minutes with competitive visual quality.
Specifically, we define a novel part-based voxelized human representation to better distribute the representational power of the network to different human parts.
Experiments demonstrate that our model can be learned 100 times faster than prior per-scene optimization methods.
arXiv Detail & Related papers (2023-02-23T18:57:01Z)
- SoccerNet-Tracking: Multiple Object Tracking Dataset and Benchmark in Soccer Videos [62.686484228479095]
We propose a novel dataset for multiple object tracking composed of 200 sequences of 30s each.
The dataset is fully annotated with bounding boxes and tracklet IDs.
Our analysis shows that multiple player, referee and ball tracking in soccer videos is far from being solved.
arXiv Detail & Related papers (2022-04-14T12:22:12Z)
- Decontextualized I3D ConvNet for ultra-distance runners performance analysis at a glance [1.9573154231003194]
In May 2021, the site runnersworld.com published that participation in ultra-distance races has increased by 1,676% in the last 23 years.
Nearly 41% of those runners participate in more than one race per year.
This work aims to determine how a runner's performance can be quantified and predicted by considering a non-invasive technique focused on the ultra-running scenario.
arXiv Detail & Related papers (2022-03-13T20:11:10Z)
- Optimization Planning for 3D ConvNets [123.43419144051703]
It is not trivial to optimally train 3D Convolutional Neural Networks (3D ConvNets) due to their high complexity and the many options in the training scheme.
We decompose the path into a series of training "states" and specify the hyperparameters, e.g., learning rate and the length of input clips, in each state.
We perform dynamic programming over all the candidate states to plan the optimal permutation of states, i.e., optimization path.
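The state-and-path idea above can be sketched as a small Viterbi-style dynamic program. Everything concrete here is invented for illustration: the three candidate states, the per-stage loss table, and the switching penalty; the paper estimates such costs from actual training runs.

```python
# Hypothetical training "states", each fixing hyperparameters.
states = [
    {"lr": 0.1,   "clip_len": 8},   # coarse: short clips, high lr
    {"lr": 0.01,  "clip_len": 16},
    {"lr": 0.001, "clip_len": 32},  # fine: long clips, low lr
]

# Made-up estimated validation loss for running stage i in state j.
loss = [
    [0.9, 1.2, 1.5],
    [0.8, 0.5, 0.7],
    [0.6, 0.4, 0.2],
]
SWITCH = 0.05  # made-up penalty per unit of hyperparameter jump

def plan() -> list[int]:
    """Dynamic programming over states: cheapest sequence across all stages."""
    n = len(states)
    cost = list(loss[0])          # cost of starting in each state
    back = []                     # back-pointers for path recovery
    for stage in range(1, len(loss)):
        new_cost, ptr = [], []
        for cur in range(n):
            p = min(range(n), key=lambda p: cost[p] + SWITCH * abs(cur - p))
            new_cost.append(cost[p] + SWITCH * abs(cur - p) + loss[stage][cur])
            ptr.append(p)
        cost = new_cost
        back.append(ptr)
    # Trace back the cheapest path from the best final state.
    s = min(range(n), key=cost.__getitem__)
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]

print(plan())
```

With this toy cost table the planner recovers a coarse-to-fine schedule, moving from short clips at a high learning rate to long clips at a low one.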
arXiv Detail & Related papers (2022-01-11T16:13:31Z)
- Learning to Run with Potential-Based Reward Shaping and Demonstrations from Video Data [70.540936204654]
The goal of the "Learning to Run" competition was to train a two-legged model of a humanoid body to run in a simulated race course at maximum speed.
All submissions took a tabula rasa approach to reinforcement learning (RL) and were able to produce relatively fast, but not optimal running behaviour.
We demonstrate how data from videos of human running can be used to shape the reward of the humanoid learning agent.
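Potential-based shaping adds F(s, s') = γΦ(s') − Φ(s) to the environment reward, which is known to leave the optimal policy unchanged. The sketch below is hypothetical: the scalar "pose" state and the demo-matching potential Φ are stand-ins for features that would be extracted from running videos.

```python
GAMMA = 0.99
DEMO_POSE = 0.8  # stand-in for a pose feature extracted from video

def phi(state: float) -> float:
    """Made-up potential: higher when the agent's pose matches the demo."""
    return -abs(state - DEMO_POSE)

def shaped_reward(r_env: float, s: float, s_next: float) -> float:
    """Environment reward plus the shaping term F(s, s') = gamma*phi(s') - phi(s)."""
    return r_env + GAMMA * phi(s_next) - phi(s)

# Moving toward the demonstrated pose yields a positive shaping bonus
# even when the environment reward is zero.
bonus = shaped_reward(0.0, 0.2, 0.5)
print(bonus)
```

Because the shaping term telescopes along any trajectory, the agent is guided toward demonstrated poses without changing which policy is optimal.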
arXiv Detail & Related papers (2020-12-16T09:46:58Z) - RSPNet: Relative Speed Perception for Unsupervised Video Representation
Learning [100.76672109782815]
We study unsupervised video representation learning that seeks to learn both motion and appearance features from unlabeled video only.
It is difficult to construct a suitable self-supervised task that models both motion and appearance features well.
We propose a new way to perceive the playback speed and exploit the relative speed between two video clips as labels.
arXiv Detail & Related papers (2020-10-27T16:42:50Z)
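The relative-speed pretext task above can be sketched as follows: sample two clips from the same video at different frame strides (simulating playback speeds) and label which plays faster. Function names, strides, and the frame-index video are all made up for illustration; RSPNet derives such labels without any manual annotation.

```python
import random

def sample_clip(video: list[int], stride: int, length: int = 4) -> list[int]:
    """Subsample every `stride`-th frame index, simulating playback speed."""
    start = random.randrange(len(video) - stride * (length - 1))
    return [video[start + i * stride] for i in range(length)]

def relative_speed_label(stride_a: int, stride_b: int) -> int:
    """1 if clip A plays faster than clip B, 0 if slower, 2 if equal."""
    if stride_a == stride_b:
        return 2
    return 1 if stride_a > stride_b else 0

random.seed(0)
video = list(range(64))            # stand-in for 64 frame indices
clip_a = sample_clip(video, stride=4)
clip_b = sample_clip(video, stride=1)
print(relative_speed_label(4, 1))  # clip A is the faster one
```

A network trained to predict this label from the two clips must attend to motion speed, which is the self-supervisory signal the paper exploits.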
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.