Through the Gaps: Uncovering Tactical Line-Breaking Passes with Clustering
- URL: http://arxiv.org/abs/2506.06666v1
- Date: Sat, 07 Jun 2025 05:08:24 GMT
- Title: Through the Gaps: Uncovering Tactical Line-Breaking Passes with Clustering
- Authors: Oktay Karakuş, Hasan Arkadaş
- Abstract summary: Line-breaking passes (LBPs) are crucial tactical actions in football, allowing teams to penetrate defensive lines and access high-value spaces. We present an unsupervised, clustering-based framework for detecting and analysing LBPs using synchronised event and tracking data from elite matches. Our approach models opponent team shape through vertical spatial segmentation and identifies passes that disrupt defensive lines within open play. We evaluate these metrics across teams and players in the 2022 FIFA World Cup, revealing stylistic differences in vertical progression and structural disruption.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Line-breaking passes (LBPs) are crucial tactical actions in football, allowing teams to penetrate defensive lines and access high-value spaces. In this study, we present an unsupervised, clustering-based framework for detecting and analysing LBPs using synchronised event and tracking data from elite matches. Our approach models opponent team shape through vertical spatial segmentation and identifies passes that disrupt defensive lines within open play. Beyond detection, we introduce several tactical metrics, including the space build-up ratio (SBR) and two chain-based variants, LBPCh$^1$ and LBPCh$^2$, which quantify the effectiveness of LBPs in generating immediate or sustained attacking threats. We evaluate these metrics across teams and players in the 2022 FIFA World Cup, revealing stylistic differences in vertical progression and structural disruption. The proposed methodology is explainable, scalable, and directly applicable to modern performance analysis and scouting workflows.
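The vertical-segmentation idea in the abstract can be illustrated with a minimal sketch: group the defending team's positions along the attacking axis into bands, treat each band's mean as a defensive line, and flag passes whose start and end straddle a line. The band count, the gap-based 1-D grouping, and the `lbp_chain_rate` helper are illustrative assumptions for this sketch, not the authors' exact method or metric definitions.

```python
def defensive_lines(defender_x, n_lines=3):
    """Group defenders' positions along the attacking axis into n_lines
    vertical bands by splitting at the (n_lines - 1) largest gaps in
    sorted order; return each band's mean as a 'line' coordinate.
    (Illustrative stand-in for the paper's clustering step.)"""
    xs = sorted(defender_x)
    # Indices ranked by the size of the gap to the previous position.
    gaps = sorted(range(1, len(xs)), key=lambda i: xs[i] - xs[i - 1], reverse=True)
    cuts = sorted(gaps[: n_lines - 1])
    bands, start = [], 0
    for c in cuts + [len(xs)]:
        band = xs[start:c]
        bands.append(sum(band) / len(band))
        start = c
    return bands

def is_line_breaking(pass_start_x, pass_end_x, lines):
    """A pass 'breaks' a line if its start and end fall on opposite
    sides of a line coordinate (attacking direction assumed +x)."""
    return any(pass_start_x < line < pass_end_x for line in lines)

def lbp_chain_rate(lbp_flags, threat_flags, horizon=1):
    """Hypothetical LBPCh-style ratio: share of LBPs followed by a
    threatening event within `horizon` subsequent events."""
    lbps = [i for i, f in enumerate(lbp_flags) if f]
    if not lbps:
        return 0.0
    hits = sum(1 for i in lbps if any(threat_flags[i + 1 : i + 1 + horizon]))
    return hits / len(lbps)
```

For example, with three defenders each at roughly x = 31, 51, and 70, a pass from x = 40 to x = 60 crosses the middle line and would be flagged, while a pass from x = 10 to x = 25 would not.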
Related papers
- TraceGuard: Process-Guided Firewall against Reasoning Backdoors in Large Language Models [19.148124494194317]
We propose TraceGuard, a process-guided security framework that transforms small-scale models into robust reasoning firewalls. Our approach treats the reasoning trace as an untrusted payload and establishes a defense-in-depth strategy. We demonstrate robustness against adaptive adversaries in a grey-box setting, establishing TraceGuard as a viable, low-latency security primitive.
arXiv Detail & Related papers (2026-03-02T22:19:13Z) - Attributing and Exploiting Safety Vectors through Global Optimization in Large Language Models [50.91504059485288]
We propose a framework that identifies safety-critical attention heads through global optimization over all heads simultaneously. We develop a novel inference-time white-box jailbreak method that exploits the identified safety vectors through activation repatching.
arXiv Detail & Related papers (2026-01-22T09:32:43Z) - A Machine Learning Framework for Off Ball Defensive Role and Performance Evaluation in Football [3.418921713486739]
We introduce a co-dependent Hidden Markov Model (CDHMM) tailored to corner kicks in football games. Our model infers time-resolved man-marking and zonal assignments directly from player tracking data. We propose a novel framework for defensive credit attribution and a role-conditioned ghosting method for counterfactual analysis of off-ball defensive performance.
arXiv Detail & Related papers (2026-01-02T17:10:36Z) - Analysis of Line Break prediction models for detecting defensive breakthrough in football [0.0]
In football, attacking teams attempt to break through the opponent's defensive line to create scoring opportunities. This study develops a machine learning model to predict Line Breaks using event and tracking data from the 2023 J1 League season.
arXiv Detail & Related papers (2025-10-31T06:42:20Z) - Chasing Moving Targets with Online Self-Play Reinforcement Learning for Safer Language Models [55.28518567702213]
Conventional language model (LM) safety alignment relies on a reactive, disjoint procedure: attackers exploit a static model, followed by defensive fine-tuning to patch exposed vulnerabilities. This sequential approach creates a mismatch: attackers overfit to obsolete defenses, while defenders perpetually lag behind emerging threats. We propose Self-RedTeam, an online self-play reinforcement learning algorithm where an attacker and defender agent co-evolve through continuous interaction.
arXiv Detail & Related papers (2025-06-09T06:35:12Z) - Online Competitive Information Gathering for Partially Observable Trajectory Games [24.25139588281181]
Game-theoretic agents must make plans that optimally gather information about their opponents. We formulate a finite history/horizon refinement of POSGs which admits competitive information gathering behavior in trajectory space. We present an online method for computing rational trajectory plans in these games which leverages particle-based estimations of the state space and performs gradient play.
arXiv Detail & Related papers (2025-06-02T17:45:58Z) - Alignment Under Pressure: The Case for Informed Adversaries When Evaluating LLM Defenses [6.736255552371404]
Alignment is one of the main approaches used to defend against attacks such as prompt injection and jailbreaks. Recent defenses report near-zero Attack Success Rates (ASR) even against Greedy Coordinate Gradient (GCG).
arXiv Detail & Related papers (2025-05-21T16:43:17Z) - Attack-Defense Trees with Offensive and Defensive Attributes (with Appendix) [1.360022695699485]
Attack-Defense Trees (ADTs) are a commonly used methodology for representing this interplay. Previous work in this domain has only focused on analyzing metrics such as cost, damage, or time from the perspective of the attacker. In this paper, we propose a novel framework that incorporates defense metrics into ADTs.
arXiv Detail & Related papers (2025-04-17T08:41:07Z) - Unveiling Hidden Pivotal Players with GoalNet: A GNN-Based Soccer Player Evaluation System [8.957579200590988]
Soccer analysis tools emphasize metrics such as expected goals, leading to an overrepresentation of attacking players' contributions. We introduce a GNN-based framework that assigns individual credit for changes in expected threat (xT). Our pipeline encodes both spatial and temporal features in event-centric graphs, enabling fair attribution of non-scoring actions.
arXiv Detail & Related papers (2025-03-12T18:36:55Z) - Toward Optimal LLM Alignments Using Two-Player Games [86.39338084862324]
In this paper, we investigate alignment through the lens of two-agent games, involving iterative interactions between an adversarial and a defensive agent.
We theoretically demonstrate that this iterative reinforcement learning optimization converges to a Nash Equilibrium for the game induced by the agents.
Experimental results in safety scenarios demonstrate that learning in such a competitive environment not only fully trains agents but also leads to policies with enhanced generalization capabilities for both adversarial and defensive agents.
arXiv Detail & Related papers (2024-06-16T15:24:50Z) - Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z) - Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets [76.20705291443208]
We view adversarial attacks as a bargaining game in which different players negotiate to reach an agreement on a joint direction of parameter updating.
We design a novel framework that adjusts the budgets of different adversaries to avoid any player dominance.
Experiments on standard benchmarks show that employing the proposed framework to the existing approaches significantly advances multi-target robustness.
arXiv Detail & Related papers (2023-06-27T14:02:10Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNNs) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation [79.42338812621874]
Adversarial training is promising for improving robustness of deep neural networks towards adversarial perturbations.
We formulate a general adversarial training procedure that can perform decently on both adversarial and clean samples.
We propose a dynamic divide-and-conquer adversarial training (DDC-AT) strategy to enhance the defense effect.
arXiv Detail & Related papers (2020-03-14T05:06:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantee as to the quality of this information and accepts no responsibility for any consequences of its use.