What Happened Next? Using Deep Learning to Value Defensive Actions in
Football Event-Data
- URL: http://arxiv.org/abs/2106.01786v1
- Date: Thu, 3 Jun 2021 12:18:26 GMT
- Title: What Happened Next? Using Deep Learning to Value Defensive Actions in
Football Event-Data
- Authors: Charbel Merhej, Ryan Beal, Sarvapali Ramchurn (University of
Southampton), Tim Matthews (Sentient Sports)
- Abstract summary: We use deep learning techniques to define a novel metric that values defensive actions such as tackles and interceptions.
By studying the threat of the passages of play that preceded those actions, we are able to value them by what they prevented from happening.
Our model is able to predict the impact of defensive actions, allowing us to better value defenders using event-data.
- Score: 1.290382979353427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objectively quantifying the value of player actions in football (soccer) is a
challenging problem. To date, studies in football analytics have mainly focused
on the attacking side of the game, while there has been less work on
event-driven metrics for valuing defensive actions (e.g., tackles and
interceptions). Therefore, in this paper, we use deep learning techniques to
define a novel metric that values such defensive actions by studying the threat
of passages of play that preceded them. By doing so, we are able to value
defensive actions based on what they prevented from happening in the game. Our
Defensive Action Expected Threat (DAxT) model has been validated using
real-world event-data from the 2017/2018 and 2018/2019 English Premier League
seasons, and we combine our model outputs with additional features to derive an
overall rating of defensive ability for players. Overall, we find that our
model is able to predict the impact of defensive actions, allowing us to better
value defenders using event-data.
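The abstract above sketches the core idea: a defensive action is valued by the threat of the passage of play it interrupted. Below is a minimal, hedged illustration of that idea as a small sequence model over preceding events; the feature layout, network shape, and threat target are assumptions for illustration, not the paper's actual DAxT architecture.

```python
# Minimal sketch of the DAxT idea: value a defensive action by the
# predicted threat of the passage of play it interrupted.
# Event features and network shape are illustrative assumptions.
import torch
import torch.nn as nn

class DAxTModel(nn.Module):
    """Scores the threat of the event sequence preceding a defensive action."""

    def __init__(self, n_features: int = 6, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (batch, seq_len, n_features), one row per on-ball event
        # (e.g., x, y, end_x, end_y, and event-type flags).
        _, (h, _) = self.lstm(events)
        return self.head(h[-1]).squeeze(-1)

model = DAxTModel()
passage = torch.randn(1, 10, 6)   # ten preceding events for one tackle
daxt_value = model(passage)       # the threat the tackle cut short
print(float(daxt_value))
```

In this framing, one plausible source of training targets is the threat of comparable passages of play that were not interrupted, which is what lets the metric express "what the action prevented".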
Related papers
- Defense Against Prompt Injection Attack by Leveraging Attack Techniques [66.65466992544728]
Large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks.
As LLMs continue to evolve, new vulnerabilities arise, especially prompt injection attacks.
Recent attack methods leverage LLMs' instruction-following abilities and their inability to distinguish instructions injected into the data content.
arXiv Detail & Related papers (2024-11-01T09:14:21Z)
- Expected Possession Value of Control and Duel Actions for Soccer Player's Skills Estimation [0.0]
This paper introduces multiple extensions to the widely used expected possession value (EPV) model.
We assign greater weights to events occurring immediately prior to the shot than to earlier events (a decay effect).
Our model incorporates possession risk more accurately by considering the decay effect and effective playing time.
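A minimal sketch of the decay effect just described, assuming simple exponential down-weighting of events the further they occur before the shot; the decay rate and event values are illustrative, not the paper's fitted parameters.

```python
# Illustrative decay weighting: events closer to the shot contribute
# more to the possession value. The decay rate is an assumed parameter.
import math

def decay_weighted_value(event_values, decay=0.5):
    """event_values is ordered oldest-first; the final event is the one
    immediately prior to the shot and receives weight 1."""
    n = len(event_values)
    weights = [math.exp(-decay * (n - 1 - i)) for i in range(n)]
    return sum(w * v for w, v in zip(weights, event_values)) / sum(weights)

print(decay_weighted_value([0.01, 0.03, 0.12, 0.25]))
```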
arXiv Detail & Related papers (2024-06-02T17:29:42Z)
- Engineering Features to Improve Pass Prediction in Soccer Simulation 2D Games [0.0]
Soccer Simulation 2D (SS2D) is a simulation of a real soccer game in two dimensions.
We model the passing behavior of soccer 2D players using Deep Neural Networks (DNN) and Random Forests (RF).
We evaluate the trained models' performance playing against 6 top teams of RoboCup 2019 that have distinctive playing strategies.
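As a hedged illustration of the Random Forest branch of this approach, the snippet below fits a pass-choice classifier on synthetic engineered features; the feature set, labels, and data are placeholders rather than the paper's pipeline.

```python
# Sketch of a Random Forest pass-prediction model on engineered features.
# Features per candidate receiver (all synthetic here): distance to the
# passer, opponent pressure, angle to goal, free space.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = rng.integers(0, 2, 500)  # 1 if this receiver was actually chosen

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.predict_proba(X[:3]))  # pass probability per candidate receiver
```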
arXiv Detail & Related papers (2024-01-07T08:01:25Z)
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
- The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks [91.56314751983133]
$A5$ is a framework for crafting a defensive perturbation that guarantees any attack against the input at hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label.
We also show how to apply $A5$ to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z)
- Location analysis of players in UEFA EURO 2020 and 2022 using generalized valuation of defense by estimating probabilities [0.6946929968559495]
We propose a generalized method for valuing defensive teams by score-scaling the predicted probabilities of game events.
Using open-source location data of all players in broadcast video frames from the men's Euro 2020 and women's Euro 2022 matches, we investigate the effect of the number of players on the prediction.
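A minimal sketch of what score-scaling predicted probabilities into one defensive value could look like; the probabilities and weights are assumptions for illustration, not the paper's fitted quantities.

```python
# Combine predicted probabilities of favourable and unfavourable events
# into a single score-scaled defensive rating. Weights are illustrative.
def defensive_value(p_gain_ball, p_concede_shot, w_gain=1.0, w_concede=1.5):
    return w_gain * p_gain_ball - w_concede * p_concede_shot

print(defensive_value(p_gain_ball=0.30, p_concede_shot=0.10))
```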
arXiv Detail & Related papers (2022-11-30T12:43:11Z)
- The Art of Manipulation: Threat of Multi-Step Manipulative Attacks in Security Games [8.87104231451079]
This paper studies the problem of multi-step manipulative attacks in Stackelberg security games.
A clever attacker attempts to orchestrate its attacks over multiple time steps to mislead the defender's learning of the attacker's behavior.
This attack manipulation eventually influences the defender's patrol strategy towards the attacker's benefit.
arXiv Detail & Related papers (2022-02-27T18:58:15Z)
- Adversarial Classification of the Attacks on Smart Grids Using Game Theory and Deep Learning [27.69899235394942]
This paper proposes a game-theoretic approach to evaluate the variations caused by an attacker on the power measurements.
A zero-sum game is used to model the interactions between the attacker and defender.
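A hedged sketch of the zero-sum setup: solving a toy attacker/defender game for the defender's equilibrium strategy with linear programming. The 2x2 payoff matrix is made up; the paper's game over power measurements is more involved.

```python
# Solve a small zero-sum game: maximize the game value v subject to
# v <= sum_i x_i * A[i, j] for every attacker action j.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0],      # defender's payoff matrix
              [-0.5, 0.5]])     # rows: defender, columns: attacker
m, n = A.shape

c = np.zeros(m + 1); c[-1] = -1.0          # minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - x^T A[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.array([[1.0] * m + [0.0]])       # mixed strategy sums to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("defender strategy:", res.x[:m], "game value:", res.x[-1])
```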
arXiv Detail & Related papers (2021-06-06T18:43:28Z)
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks [72.59081183040682]
We propose dynamic defenses that adapt the model and input during testing via defensive entropy minimization (dent).
dent improves the robustness of adversarially-trained defenses and nominally-trained models against white-box, black-box, and adaptive attacks on CIFAR-10/100 and ImageNet.
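In the spirit of dent, here is a minimal sketch of test-time entropy minimization: adapt the model on each unlabeled test batch by reducing the entropy of its own predictions. Real dent restricts which parameters adapt and also updates the input, so this is a simplified assumption.

```python
# Test-time adaptation by entropy minimization (simplified dent-style).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def adapt_and_predict(x: torch.Tensor, steps: int = 1) -> torch.Tensor:
    for _ in range(steps):
        probs = F.softmax(model(x), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
        optimizer.zero_grad()
        entropy.backward()   # nudge the model toward confident predictions
        optimizer.step()
    return model(x).argmax(dim=1)

print(adapt_and_predict(torch.randn(8, 32)))
```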
arXiv Detail & Related papers (2021-05-18T17:55:07Z)
- What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors [57.040948169155925]
We extend the adversarial training framework to defend against (training-time) poisoning and backdoor attacks.
Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches.
We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
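A minimal, hedged sketch of the training loop this describes: craft perturbed copies of each batch on the fly and train on the mixture. The one-step perturbation and toy model are illustrative assumptions, not the paper's recipe.

```python
# Adversarial training against poisoning: inject freshly crafted
# poisons into every training batch. Model and budget are toy choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def craft_poison(x, y, eps=0.1):
    """One-step perturbation that increases loss on the true labels,
    mimicking a training-time poison."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

for _ in range(5):  # toy loop on random data
    x, y = torch.randn(16, 20), torch.randint(0, 2, (16,))
    x_mix = torch.cat([x, craft_poison(x, y)])  # inject the poisons
    y_mix = torch.cat([y, y])
    opt.zero_grad()
    F.cross_entropy(model(x_mix), y_mix).backward()
    opt.step()
```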
arXiv Detail & Related papers (2021-02-26T17:54:36Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.