Fast and the Furious: Hot Starts in Pursuit-Evasion Games
- URL: http://arxiv.org/abs/2510.10830v1
- Date: Sun, 12 Oct 2025 22:46:50 GMT
- Title: Fast and the Furious: Hot Starts in Pursuit-Evasion Games
- Authors: Gabriel Smithline, Scott Nivison
- Abstract summary: A novel approach that combines game-theoretic control theory with Graph Neural Networks is introduced in this work. By conceptualizing pursuer configurations as strategic arrangements and representing them as graphs, a Graph Characteristic Space is constructed. A Graph Convolutional Network (GCN) is trained to generate strategically effective initial configurations, termed "hot starts".
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effectively positioning pursuers in pursuit-evasion games without prior knowledge of evader locations remains a significant challenge. A novel approach that combines game-theoretic control theory with Graph Neural Networks is introduced in this work. By conceptualizing pursuer configurations as strategic arrangements and representing them as graphs, a Graph Characteristic Space is constructed via multi-objective optimization to identify Pareto-optimal configurations. A Graph Convolutional Network (GCN) is trained on these Pareto-optimal graphs to generate strategically effective initial configurations, termed "hot starts". Empirical evaluations demonstrate that the GCN-generated hot starts provide a significant advantage over random configurations. In scenarios considering multiple pursuers and evaders, this method hastens the decline in evader survival rates, reduces pursuer travel distances, and enhances containment, showcasing clear strategic benefits.
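No reference implementation accompanies this listing; as a minimal sketch of the kind of forward pass the abstract describes — a two-layer GCN mapping a pursuer team graph to candidate initial (x, y) positions — assuming a fully connected team graph and random stand-in weights (all names and sizes below are illustrative, not the paper's):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    return np.maximum(norm_adj @ feats @ weight, 0.0)

rng = np.random.default_rng(0)
n_pursuers = 4
adj = np.ones((n_pursuers, n_pursuers)) - np.eye(n_pursuers)  # fully connected team graph
feats = rng.normal(size=(n_pursuers, 8))           # per-pursuer input features
w1, w2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 2))   # untrained stand-in weights
hot_start = gcn_layer(adj, gcn_layer(adj, feats, w1), w2)     # candidate (x, y) per pursuer
print(hot_start.shape)                             # (4, 2)
```

In the paper's pipeline the weights would come from training on the Pareto-optimal configuration graphs, and the output layer would typically be linear rather than ReLU-clipped; the sketch only shows the message-passing shape.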
Related papers
- R2PS: Worst-Case Robust Real-Time Pursuit Strategies under Partial Observability [25.176860778665173]
This paper introduces the first approach to worst-case robust real-time pursuit strategies (R2PS) under partial observability. We first prove that a traditional dynamic programming (DP) algorithm for solving Markov PEGs maintains optimality under the asynchronous moves by the evader. We then propose a belief preservation mechanism about the evader's possible positions, extending the DP pursuit strategies to a partially observable setting.
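The belief-preservation mechanism is not specified in this summary; a hypothetical sketch of the underlying idea — tracking the set of nodes the evader could occupy, growing it by one move per step, and pruning nodes observed to be empty — might look like this (the graph, function name, and update rule are all illustrative assumptions):

```python
def update_belief(belief, adjacency, observed_empty):
    """belief: set of possible evader nodes; adjacency: dict node -> neighbor list."""
    expanded = set()
    for node in belief:
        expanded.add(node)                  # evader may stay put
        expanded.update(adjacency[node])    # or move to a neighbor
    return expanded - observed_empty        # rule out nodes seen to be empty

adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph 0-1-2-3
belief = {0}                                          # evader last seen at node 0
belief = update_belief(belief, adjacency, observed_empty={1})    # pursuer watches node 1
belief = update_belief(belief, adjacency, observed_empty=set())  # no observation
print(sorted(belief))  # [0, 1]
```

A pursuit strategy would then be computed against the worst case over this belief set rather than a known evader position.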
arXiv Detail & Related papers (2025-11-21T16:34:00Z)
- Towards Pre-trained Graph Condensation via Optimal Transport [52.6504753271008]
Graph condensation aims to distill the original graph into a small-scale graph, mitigating redundancy and accelerating GNN training. Conventional GC approaches heavily rely on rigid GNNs and task-specific supervision. Pre-trained Graph Condensation (PreGC) via optimal transport is proposed to transcend the limitations of task- and architecture-dependent GC methods.
arXiv Detail & Related papers (2025-09-18T08:13:24Z)
- Reinforcement Learning for Game-Theoretic Resource Allocation on Graphs [9.369330148791201]
Game-theoretic resource allocation on graphs (GRAG) is a problem modeled as a multi-step Colonel Blotto Game (MCBG). We formulate the MCBG as a Markov Decision Process (MDP) and apply Reinforcement Learning (RL) methods, specifically Deep Q-Network (DQN) and Proximal Policy Optimization (PPO). We evaluate RL performance across a variety of graph structures and initial resource distributions, comparing against random, greedy, and learned RL policies.
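The MDP formulation is not detailed here; as an illustrative toy version of one step of a graph Blotto game (not the paper's code — the state encoding, allocation format, and majority-rule reward are assumptions), a step function could look like:

```python
# state: dict node -> (player A resources, player B resources)
# whoever holds strictly more resources at a node controls it;
# reward = number of nodes controlled by the learning player (A).

def step(state, a_alloc, b_alloc):
    """Apply both players' allocations and score the resulting state."""
    next_state = {
        v: (a + a_alloc.get(v, 0), b + b_alloc.get(v, 0))
        for v, (a, b) in state.items()
    }
    reward = sum(1 for a, b in next_state.values() if a > b)
    return next_state, reward

state = {0: (0, 0), 1: (0, 0), 2: (0, 0)}             # three graph nodes, empty
state, r = step(state, a_alloc={0: 2, 1: 1}, b_alloc={1: 2, 2: 1})
print(r)  # A outnumbers B only at node 0 -> 1
```

Wrapping such a step function in a standard environment interface is what makes DQN or PPO directly applicable.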
arXiv Detail & Related papers (2025-05-08T21:12:34Z)
- Compile Scene Graphs with Reinforcement Learning [69.36723767339001]
Next-token prediction is the fundamental principle for training large language models (LLMs). We introduce R1-SGG, a multimodal LLM (M-LLM) initially trained via supervised fine-tuning (SFT) on the scene graph dataset. We design a set of graph-centric rewards, including three recall-based variants -- Hard Recall, Hard Recall+Relax, and Soft Recall.
arXiv Detail & Related papers (2025-04-18T10:46:22Z)
- RELIEF: Reinforcement Learning Empowered Graph Feature Prompt Tuning [15.385771185777626]
The "pre-train, prompt" paradigm has recently extended its generalization ability and data efficiency to graph representation learning. We propose RELIEF, which employs Reinforcement Learning (RL) to optimize it.
arXiv Detail & Related papers (2024-08-06T13:55:51Z)
- Landscape-Aware Growing: The Power of a Little LAG [49.897766925371485]
We study the question of how to select the best growing strategy from a given pool of growing strategies.
We present an alternative perspective based on early training dynamics, which we call "landscape-aware growing (LAG)".
arXiv Detail & Related papers (2024-06-04T16:38:57Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Learning Decentralized Strategies for a Perimeter Defense Game with Graph Neural Networks [111.9039128130633]
We design a graph neural network-based learning framework to learn a mapping from defenders' local perceptions and the communication graph to defenders' actions.
We demonstrate that our proposed networks stay closer to the expert policy and are superior to other baseline algorithms by capturing more intruders.
arXiv Detail & Related papers (2022-09-24T22:48:51Z)
- Distributed Task Management in Fog Computing: A Socially Concave Bandit Game [7.708904950194129]
Fog computing leverages the task offloading capabilities at the network's edge to improve efficiency and enable swift responses to application demands.
We formulate the distributed task allocation problem as a social-concave game with bandit feedback.
We develop two no-regret online decision-making strategies.
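The two no-regret strategies are not specified in this summary; a classic baseline of the kind used in such online settings is multiplicative weights (Hedge), sketched below as an illustration of the no-regret property (this is a textbook algorithm, not the paper's method; the loss sequence and learning rate are made up):

```python
import math

def hedge_expected_loss(losses_seq, n_actions, eta=0.5):
    """Multiplicative weights (Hedge): expected loss of the randomized player."""
    weights = [1.0] * n_actions
    total = 0.0
    for losses in losses_seq:                 # per-action losses in [0, 1]
        z = sum(weights)
        total += sum(w / z * l for w, l in zip(weights, losses))
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total

rounds = [[0.1, 0.9]] * 200                   # action 0 is always better
loss = hedge_expected_loss(rounds, n_actions=2)
best_fixed = 0.1 * 200                        # loss of the best fixed action
print(loss - best_fixed < 5.0)                # regret stays small over 200 rounds
```

In a socially concave game, all players running such no-regret updates drives the average play toward the Nash equilibrium, which is what makes this class of strategies attractive for distributed task allocation.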
arXiv Detail & Related papers (2022-03-28T08:26:14Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing utility, outperforming existing approaches by a large margin.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Visibility Optimization for Surveillance-Evasion Games [4.454557728745761]
We consider surveillance-evasion differential games, where a pursuer must try to constantly maintain visibility of a moving evader.
We use an upwind scheme to compute the feedback value function, corresponding to the end-game time of the differential game.
We show that Monte Carlo tree search and self-play reinforcement learning can train a deep neural network to generate reasonable strategies for on-line game play.
arXiv Detail & Related papers (2020-10-18T15:02:41Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partition, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications via a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.