A Foundation Model for Soccer
- URL: http://arxiv.org/abs/2407.14558v1
- Date: Thu, 18 Jul 2024 15:42:08 GMT
- Title: A Foundation Model for Soccer
- Authors: Ethan Baron, Daniel Hocevar, Zach Salehe,
- Abstract summary: We propose a foundation model for soccer, which is able to predict subsequent actions in a soccer match from a given input sequence of actions.
As a proof of concept, we train a transformer architecture on three seasons of data from a professional soccer league.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a foundation model for soccer, which is able to predict subsequent actions in a soccer match from a given input sequence of actions. As a proof of concept, we train a transformer architecture on three seasons of data from a professional soccer league. We quantitatively and qualitatively compare the performance of this transformer architecture to two baseline models: a Markov model and a multi-layer perceptron. Additionally, we discuss potential applications of our model. We provide an open-source implementation of our methods at https://github.com/danielhocevar/Foundation-Model-for-Soccer.
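To make the setup concrete, below is a minimal sketch of how next-action prediction with a causal transformer could look, assuming match actions are tokenized into a fixed vocabulary and trained with a shifted-sequence cross-entropy objective. This is an illustrative approximation, not the authors' implementation (see the linked repository for that); the vocabulary size, layer counts, and action tokenization are placeholder assumptions.

```python
# Minimal sketch (not the authors' code) of next-action prediction with a
# causal transformer. Assumes soccer actions (pass, shot, tackle, ...) have
# been mapped to integer tokens; vocabulary and model sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NextActionTransformer(nn.Module):
    def __init__(self, vocab_size=64, d_model=128, n_heads=4, n_layers=2, max_len=256):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, action_ids):                       # (batch, seq_len)
        seq_len = action_ids.size(1)
        pos = torch.arange(seq_len, device=action_ids.device)
        x = self.token_emb(action_ids) + self.pos_emb(pos)
        # Causal mask: each position may only attend to earlier actions.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                     device=action_ids.device), diagonal=1)
        h = self.encoder(x, mask=mask)
        return self.head(h)                              # logits over the next action

# Training objective: shift the sequence by one step and minimize cross-entropy,
# exactly as in next-token language modelling.
model = NextActionTransformer()
seq = torch.randint(0, 64, (8, 32))                      # 8 toy action sequences
logits = model(seq[:, :-1])
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), seq[:, 1:].reshape(-1))
loss.backward()
```

The Markov baseline mentioned in the abstract roughly corresponds to replacing the transformer with a table of empirical transition frequencies between consecutive actions, while the multi-layer perceptron baseline predicts the next action from a fixed-length window of preceding ones.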
Related papers
- SMGDiff: Soccer Motion Generation using diffusion probabilistic models [44.54275548434197]
Soccer is a globally renowned sport with significant applications in video games and VR/AR.
In this paper, we introduce SMGDiff, a novel two-stage framework for generating real-time and user-controllable soccer motions.
Our key idea is to integrate real-time character control with a powerful diffusion-based generative model, ensuring high-quality and diverse output motion.
arXiv Detail & Related papers (2024-11-25T09:25:53Z)
- Apple Intelligence Foundation Language Models [109.60033785567484]
This report describes the model architecture, the data used to train the model, the training process, and the evaluation results.
We highlight our focus on Responsible AI and how the principles are applied throughout the model development.
arXiv Detail & Related papers (2024-07-29T18:38:49Z)
- Forecasting Events in Soccer Matches Through Language [0.7373617024876725]
This paper introduces an approach to predicting the next event in a soccer match.
It bears remarkable similarities to the problem faced by Large Language Models (LLMs).
arXiv Detail & Related papers (2024-02-09T23:02:57Z)
- Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities [56.666806962214565]
We propose to improve transformers of a specific modality with irrelevant data from other modalities.
We use an auxiliary transformer trained with data of another modality and construct pathways to connect components of the two models.
We observe significant and consistent performance improvements with irrelevant data from other modalities.
arXiv Detail & Related papers (2024-01-25T18:59:58Z)
- FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects [55.77542145604758]
FoundationPose is a unified foundation model for 6D object pose estimation and tracking.
Our approach can be instantly applied at test-time to a novel object without fine-tuning.
arXiv Detail & Related papers (2023-12-13T18:28:09Z)
- UnIVAL: Unified Model for Image, Video, Audio and Language Tasks [105.77733287326308]
The UnIVAL model goes beyond two modalities and unifies text, images, video, and audio into a single model.
Our model is efficiently pretrained on many tasks, based on task balancing and multimodal curriculum learning.
Thanks to the unified model, we propose a novel study on multimodal model merging via weight generalization.
arXiv Detail & Related papers (2023-07-30T09:48:36Z)
- TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter [21.41170708560114]
A growing number of applications based on visual foundation models are emerging.
In situations involving system upgrades, it becomes essential to re-train all downstream modules to adapt to the new foundation model.
We introduce a parameter-efficient and task-agnostic adapter, dubbed TaCA, that facilitates compatibility across distinct foundation models.
arXiv Detail & Related papers (2023-06-22T03:00:24Z)
- An Empirical Study of Multimodal Model Merging [148.48412442848795]
Model merging is a technique that fuses multiple models trained on different tasks to generate a multi-task solution.
We conduct our study for a novel goal where we can merge vision, language, and cross-modal transformers of a modality-specific architecture.
We propose two metrics that assess the distance between weights to be merged and can serve as an indicator of the merging outcomes.
arXiv Detail & Related papers (2023-04-28T15:43:21Z)
- Fashionformer: A simple, Effective and Unified Baseline for Human Fashion Segmentation and Recognition [80.74495836502919]
In this work, we focus on joint human fashion segmentation and attribute recognition.
We introduce the object query for segmentation and the attribute query for attribute prediction.
For the attribute stream, we design a novel Multi-Layer Rendering module to explore more fine-grained features.
arXiv Detail & Related papers (2022-04-10T11:11:10Z)
- Evaluating Soccer Player: from Live Camera to Deep Reinforcement Learning [0.0]
We will introduce a two-part solution: an open-source Player Tracking model and a new approach to evaluate these players based solely on Deep Reinforcement Learning.
Our tracking model was trained in a supervised fashion on datasets we will also release, and our Evaluation Model relies only on simulations of virtual soccer games.
We term our new approach Expected Discounted Goal (EDG) as it represents the number of goals a team can score or concede from a particular state.
arXiv Detail & Related papers (2021-01-13T23:26:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.