baller2vec: A Multi-Entity Transformer For Multi-Agent Spatiotemporal
Modeling
- URL: http://arxiv.org/abs/2102.03291v1
- Date: Fri, 5 Feb 2021 17:02:04 GMT
- Title: baller2vec: A Multi-Entity Transformer For Multi-Agent Spatiotemporal
Modeling
- Authors: Michael A. Alcorn and Anh Nguyen
- Abstract summary: Multi-agent spatiotemporal modeling is a challenging task from both an algorithmic design perspective and a computational perspective.
We introduce baller2vec, a multi-entity generalization of the standard Transformer that can simultaneously integrate information across entities and time.
We test the effectiveness of baller2vec for multi-agent spatiotemporal modeling by training it to perform two different basketball-related tasks.
- Score: 17.352818121007576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent spatiotemporal modeling is a challenging task from both an
algorithmic design and computational complexity perspective. Recent work has
explored the efficacy of traditional deep sequential models in this domain, but
these architectures are slow and cumbersome to train, particularly as model
size increases. Further, prior attempts to model interactions between agents
across time have limitations, such as imposing an order on the agents, or
making assumptions about their relationships. In this paper, we introduce
baller2vec, a multi-entity generalization of the standard Transformer that,
with minimal assumptions, can simultaneously and efficiently integrate
information across entities and time. We test the effectiveness of baller2vec
for multi-agent spatiotemporal modeling by training it to perform two different
basketball-related tasks: (1) simultaneously forecasting the trajectories of
all players on the court and (2) forecasting the trajectory of the ball. Not
only does baller2vec learn to perform these tasks well, it also appears to
"understand" the game of basketball, encoding idiosyncratic qualities of
players in its embeddings, and performing basketball-relevant functions with
its attention heads.
Related papers
- TranSPORTmer: A Holistic Approach to Trajectory Understanding in Multi-Agent Sports [28.32714256545306]
TranSPORTmer is a unified transformer-based framework capable of addressing all these tasks.
It effectively captures temporal dynamics and social interactions in an equivariant manner.
It outperforms state-of-the-art task-specific models in player forecasting, player forecasting-imputation, ball inference, and ball imputation.
arXiv Detail & Related papers (2024-10-23T11:35:44Z)
- DeTra: A Unified Model for Object Detection and Trajectory Forecasting [68.85128937305697]
Our approach formulates the union of the two tasks as a trajectory refinement problem.
To tackle this unified task, we design a refinement transformer that infers the presence, pose, and multi-modal future behaviors of objects.
In our experiments, we observe that our model outperforms the state-of-the-art on the Argoverse 2 Sensor and Open datasets.
arXiv Detail & Related papers (2024-06-06T18:12:04Z)
- Deciphering Movement: Unified Trajectory Generation Model for Multi-Agent [53.637837706712794]
We propose a Unified Trajectory Generation model, UniTraj, that processes arbitrary trajectories as masked inputs.
Specifically, we introduce a Ghost Spatial Masking (GSM) module embedded within a Transformer encoder for spatial feature extraction.
We benchmark our approach on three practical sports game datasets, Basketball-U, Football-U, and Soccer-U, for evaluation.
arXiv Detail & Related papers (2024-05-27T22:15:23Z)
- Ball Trajectory Inference from Multi-Agent Sports Contexts Using Set Transformer and Hierarchical Bi-LSTM [18.884300680050316]
This paper proposes an inference framework of ball trajectory from player trajectories as a cost-efficient alternative to ball tracking.
The experimental results show that our model provides natural and accurate ball trajectories as well as admissible player-ball possession at the same time.
We suggest several practical applications of our framework including missing trajectory imputation, semi-automated pass annotation, automated zoom-in for match broadcasting, and calculating possession-wise running performance metrics.
arXiv Detail & Related papers (2023-06-14T02:19:59Z)
- MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved huge success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z)
- Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z)
- Controllable Dynamic Multi-Task Architectures [92.74372912009127]
We propose a controllable multi-task network that dynamically adjusts its architecture and weights to match the desired task preference as well as the resource constraints.
We propose a disentangled training of two hypernetworks, by exploiting task affinity and a novel branching regularized loss, to take input preferences and accordingly predict tree-structured models with adapted weights.
arXiv Detail & Related papers (2022-03-28T17:56:40Z)
- baller2vec++: A Look-Ahead Multi-Entity Transformer For Modeling Coordinated Agents [17.352818121007576]
We introduce baller2vec++, a multi-entity Transformer that can effectively model coordinated agents.
We show that baller2vec++ can learn to emulate the behavior of perfectly coordinated agents in a simulated toy dataset.
arXiv Detail & Related papers (2021-04-24T16:20:47Z)
- UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers [108.92194081987967]
We make the first attempt to explore a universal multi-agent reinforcement learning pipeline, designing a single architecture to fit different tasks.
Unlike previous RNN-based models, we utilize a transformer-based model to generate a flexible policy.
The proposed model, named Universal Policy Decoupling Transformer (UPDeT), further relaxes the action restriction and makes the multi-agent task's decision process more explainable.
arXiv Detail & Related papers (2021-01-20T07:24:24Z)
- A Graph Attention Based Approach for Trajectory Prediction in Multi-agent Sports Games [4.29972694729078]
We propose a spatial-temporal trajectory prediction approach that is able to learn the strategy of a team with multiple coordinated agents.
In particular, we use a graph-based attention model to learn the dependencies among the agents.
We demonstrate the validity and effectiveness of our approach on two different sports game datasets.
arXiv Detail & Related papers (2020-12-18T21:51:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.