Towards optimized actions in critical situations of soccer games with
deep reinforcement learning
- URL: http://arxiv.org/abs/2109.06625v1
- Date: Tue, 14 Sep 2021 12:27:06 GMT
- Title: Towards optimized actions in critical situations of soccer games with
deep reinforcement learning
- Authors: Pegah Rahimian and Afshin Oroojlooy and Laszlo Toka
- Abstract summary: This work proposes a new state representation for the soccer game and a batch reinforcement learning approach to train a smart policy network.
We perform numerical experiments on the soccer logs made by InStat for 104 European soccer matches.
The results show that in all 104 games, the optimized policy obtains higher rewards than its counterpart in the behavior policy.
- Score: 2.578242050187029
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Soccer is a sparsely rewarding game: any smart or careless action in critical
situations can change the result of the match. Therefore, players, coaches, and
scouts are all curious about the best action to perform in critical
situations, such as those with a high probability of losing ball possession
or scoring a goal. This work proposes a new state representation for the soccer
game and a batch reinforcement learning approach to train a smart policy network. This
network gets the contextual information of the situation and proposes the
optimal action to maximize the expected goal for the team. We performed
extensive numerical experiments on the soccer logs made by InStat for 104
European soccer matches. The results show that in all 104 games, the optimized
policy obtains higher rewards than its counterpart in the behavior policy.
Besides, our framework learns policies that are close to the expected behavior
in the real world. For instance, under the optimized policy, we observe that
actions such as a foul or playing the ball out can sometimes be more rewarding
than a shot in specific situations.
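To make the pipeline described above concrete, the following is a minimal, hypothetical sketch of batch (offline) policy learning from logged match events: a small network maps contextual state features to a distribution over candidate actions and is fit on a fixed batch of (state, action, reward) tuples with a reward-weighted imitation objective. The feature dimension, action set, network sizes, and the surrogate objective are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of batch (offline) policy learning from logged soccer events.
# Feature layout, action set, and the reward-weighted objective are assumptions,
# not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

ACTIONS = ["pass", "dribble", "shot", "cross", "foul", "ball_out"]  # assumed action set
STATE_DIM = 32                                                      # assumed context size

class PolicyNet(nn.Module):
    """Maps contextual state features to logits over candidate actions."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def train_batch_policy(states, actions, rewards, epochs=20, lr=1e-3):
    """Reward-weighted behavioural cloning on a fixed batch of logged
    (state, action, reward) tuples -- one simple surrogate for batch RL."""
    policy = PolicyNet(states.shape[1], len(ACTIONS))
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    # Actions that led to higher reward (e.g. a larger expected-goal gain)
    # receive larger weights and are imitated more strongly.
    weights = torch.exp(rewards - rewards.max())
    for _ in range(epochs):
        logits = policy(states)
        loss = (weights * F.cross_entropy(logits, actions, reduction="none")).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy

if __name__ == "__main__":
    # Synthetic stand-in for logged match events.
    n = 1024
    states = torch.randn(n, STATE_DIM)
    actions = torch.randint(0, len(ACTIONS), (n,))
    rewards = torch.randn(n)  # e.g. change in expected goals after the action
    policy = train_batch_policy(states, actions, rewards)
    probs = F.softmax(policy(states[:1]), dim=-1)
    print({a: round(p, 3) for a, p in zip(ACTIONS, probs.squeeze().tolist())})
```

In a real setting, the synthetic tensors would be replaced by contextual features and expected-goal-based rewards extracted from the event logs.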
Related papers
- Engineering Features to Improve Pass Prediction in Soccer Simulation 2D Games [0.0]
Soccer Simulation 2D (SS2D) is a simulation of a real soccer game in two dimensions.
We address the modeling of the passing behavior of soccer 2D players using Deep Neural Networks (DNN) and Random Forest (RF); a minimal illustrative sketch appears after this list.
We evaluate the trained models' performance playing against 6 top teams of RoboCup 2019 that have distinctive playing strategies.
arXiv Detail & Related papers (2024-01-07T08:01:25Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents have to learn decisions that maximize their goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, learn how to map the strategy of specific opponents, and learn how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Action valuation of on- and off-ball soccer players based on multi-agent deep reinforcement learning [4.477124009148237]
We propose a method of valuing possible actions for on-ball and off-ball players in a single holistic framework based on multi-agent deep reinforcement learning.
Our approach can assess how multiple players move continuously throughout the game, which is difficult to discretize or label.
arXiv Detail & Related papers (2023-05-29T05:14:36Z)
- ApproxED: Approximate exploitability descent via learned best responses [61.17702187957206]
We study the problem of finding an approximate Nash equilibrium of games with continuous action sets.
We propose two new methods that minimize an approximation of exploitability with respect to the strategy profile.
arXiv Detail & Related papers (2023-01-20T23:55:30Z)
- A Graph-Based Method for Soccer Action Spotting Using Unsupervised Player Classification [75.93186954061943]
Action spotting involves understanding the dynamics of the game, the complexity of events, and the variation of video sequences.
In this work, we focus on the former by (a) identifying and representing the players, referees, and goalkeepers as nodes in a graph, and (b) modeling their temporal interactions as sequences of graphs; a rough sketch of this graph construction appears after this list.
For the player identification task, our method obtains an overall performance of 57.83% average-mAP by combining it with other modalities.
arXiv Detail & Related papers (2022-11-22T15:23:53Z)
- Basis for Intentions: Efficient Inverse Reinforcement Learning using Past Experience [89.30876995059168]
This paper addresses the problem of inverse reinforcement learning (IRL): inferring the reward function of an agent from observing its behavior.
arXiv Detail & Related papers (2022-08-09T17:29:49Z)
- Leaving Goals on the Pitch: Evaluating Decision Making in Soccer [21.85419069962932]
We propose a generic framework to reason about decision-making in soccer by combining techniques from machine learning and artificial intelligence (AI).
Our key conclusion is that teams would score more goals if they shot more often from outside the penalty box in a small number of team-specific locations.
arXiv Detail & Related papers (2021-04-07T16:56:31Z)
- An analysis of Reinforcement Learning applied to Coach task in IEEE Very Small Size Soccer [2.5400028272658144]
This paper proposes an end-to-end approach for the coaching task based on Reinforcement Learning (RL).
We trained two RL policies against three different teams in a simulated environment.
Our results were assessed against one of the top teams of the VSSS league.
arXiv Detail & Related papers (2020-11-23T23:10:06Z)
- Game Plan: What AI can do for Football, and What Football can do for AI [83.79507996785838]
Predictive and prescriptive football analytics require new developments and progress at the intersection of statistical learning, game theory, and computer vision.
We illustrate that football analytics is a game changer of tremendous value, in terms of not only changing the game of football itself, but also in terms of what this domain can mean for the field of AI.
arXiv Detail & Related papers (2020-11-18T10:26:02Z)
- Optimising Game Tactics for Football [18.135001427294032]
We present a novel approach to optimise tactical and strategic decision making in football (soccer).
We model the game of football as a multi-stage game, made up of a Bayesian game to model the pre-match decisions and a stochastic game to model the in-match state transitions and decisions.
Building upon this, we develop algorithms to optimise team formation and in-game tactics with different objectives.
arXiv Detail & Related papers (2020-03-23T14:24:45Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
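For the pass-prediction entry above (Engineering Features to Improve Pass Prediction in Soccer Simulation 2D Games), here is the minimal Random Forest sketch referenced in that summary. The engineered features and receiver labels are synthetic placeholders, not the paper's actual feature set.

```python
# Hypothetical Random Forest pass-receiver classifier for SS2D-style data.
# The engineered features and labels below are placeholders, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 12          # e.g. distances/angles to teammates and opponents
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 10, size=n_samples)   # index of the teammate who received the pass

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```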
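And for the graph-based action-spotting entry (A Graph-Based Method for Soccer Action Spotting Using Unsupervised Player Classification), a rough sketch of turning per-frame person detections into a sequence of proximity graphs, as referenced in that summary. The pitch-coordinate input format and the distance threshold are assumptions.

```python
# Hypothetical construction of a sequence of per-frame proximity graphs from
# player/referee/goalkeeper detections (positions in pitch coordinates, metres).
import numpy as np

def frame_graph(positions: np.ndarray, radius: float = 10.0):
    """Nodes are detected persons; an edge links any two persons closer than `radius`.
    Returns (node_features, adjacency_matrix)."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    adjacency = (dists < radius).astype(float)
    np.fill_diagonal(adjacency, 0.0)
    return positions, adjacency

def video_to_graph_sequence(frames):
    """Each frame is an (n_persons, 2) array; the clip becomes a list of graphs."""
    return [frame_graph(f) for f in frames]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = [rng.uniform([0, 0], [105, 68], size=(22, 2)) for _ in range(8)]  # 8 frames, 22 players
    graphs = video_to_graph_sequence(clip)
    print(len(graphs), graphs[0][1].shape)  # 8 graphs, each with a 22x22 adjacency matrix
```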