Scaling Intelligent Agents in Combat Simulations for Wargaming
- URL: http://arxiv.org/abs/2402.06694v1
- Date: Thu, 8 Feb 2024 21:57:10 GMT
- Title: Scaling Intelligent Agents in Combat Simulations for Wargaming
- Authors: Scotty Black, Christian Darken
- Abstract summary: Deep reinforcement learning (RL) continues to show promising results in intelligent agent behavior development in games.
Our research is investigating and extending the use of HRL to create intelligent agents capable of performing effectively in large and complex simulation environments.
Our ultimate goal is to develop an agent capable of superhuman performance that could then serve as an AI advisor to military planners and decision-makers.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Remaining competitive in future conflicts with technologically-advanced
competitors requires us to accelerate our research and development in
artificial intelligence (AI) for wargaming. More importantly, leveraging
machine learning for intelligent combat behavior development will be key to one
day achieving superhuman performance in this domain--elevating the quality and
accelerating the speed of our decisions in future wars. Although deep
reinforcement learning (RL) continues to show promising results in intelligent
agent behavior development in games, it has yet to perform at or above the
human level in the long-horizon, complex tasks typically found in combat
modeling and simulation. Capitalizing on the proven potential of RL and recent
successes of hierarchical reinforcement learning (HRL), our research is
investigating and extending the use of HRL to create intelligent agents capable
of performing effectively in these large and complex simulation environments.
Our ultimate goal is to develop an agent capable of superhuman performance that
could then serve as an AI advisor to military planners and decision-makers.
This paper covers our ongoing approach and the first three of our five
research areas aimed at managing the exponential growth of computations that
have thus far limited the use of AI in combat simulations: (1) developing an
HRL training framework and agent architecture for combat units; (2) developing
a multi-model framework for agent decision-making; (3) developing
dimension-invariant observation abstractions of the state space to manage the
exponential growth of computations; (4) developing an intrinsic rewards engine
to enable long-term planning; and (5) implementing this framework into a
higher-fidelity combat simulation.
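To make research areas (1) and (3) concrete, the sketch below is a minimal illustration, not the authors' implementation: the class names, range-bin thresholds, action labels, and toy data are all assumptions. It only shows how an observation abstraction can stay fixed-length regardless of map size or entity count (dimension-invariant), and how a high-level policy layer could then choose among a few coarse unit behaviors while lower-level execution is handled elsewhere.
```python
"""Hedged sketch of a dimension-invariant observation abstraction and a
two-level decision structure for a combat unit. Hypothetical names only."""
from dataclasses import dataclass
import numpy as np

@dataclass
class Entity:
    x: float
    y: float
    strength: float
    friendly: bool

def dimension_invariant_obs(unit_xy, entities, range_bins=(5.0, 15.0, 45.0)):
    """Summarize any number of entities on any size map into a fixed-length
    vector: total strength per (side, range bin). The vector length never
    changes with map size or entity count."""
    n_bins = len(range_bins) + 1
    obs = np.zeros(2 * n_bins)
    for e in entities:
        d = np.hypot(e.x - unit_xy[0], e.y - unit_xy[1])
        b = int(np.searchsorted(range_bins, d))
        side = 0 if e.friendly else 1
        obs[side * n_bins + b] += e.strength
    return obs

HIGH_LEVEL_ACTIONS = ["advance", "hold", "withdraw", "mass_fires"]

def high_level_policy(obs, weights):
    """Stand-in for a learned high-level policy: a linear scorer that picks
    one coarse behavior; subordinate behaviors would execute it."""
    return HIGH_LEVEL_ACTIONS[int(np.argmax(weights @ obs))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    entities = [Entity(float(rng.uniform(0, 100)), float(rng.uniform(0, 100)),
                       float(rng.uniform(1, 10)), bool(i % 2)) for i in range(20)]
    obs = dimension_invariant_obs((50.0, 50.0), entities)
    weights = rng.normal(size=(len(HIGH_LEVEL_ACTIONS), obs.size))
    print(high_level_policy(obs, weights))  # prints one of HIGH_LEVEL_ACTIONS
```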
Related papers
- Mastering the Digital Art of War: Developing Intelligent Combat Simulation Agents for Wargaming Using Hierarchical Reinforcement Learning [0.0]
This dissertation proposes a comprehensive approach, including targeted observation abstractions, multi-model integration, a hybrid AI framework, and an overarching hierarchical reinforcement learning framework.
Our localized observation abstraction using piecewise linear spatial decay simplifies the RL problem, enhancing computational efficiency and demonstrating superior efficacy over traditional global observation methods; a brief sketch of this idea follows the related-papers list below.
Our hybrid AI framework synergizes RL with scripted agents, leveraging RL for high-level decisions and scripted agents for lower-level tasks, enhancing adaptability, reliability, and performance.
arXiv Detail & Related papers (2024-08-23T18:50:57Z) - Ego-Foresight: Agent Visuomotor Prediction as Regularization for RL [34.6883445484835]
Ego-Foresight is a self-supervised method for disentangling agent and environment based on motion and prediction.
We show that visuomotor prediction of the agent provides regularization to the RL algorithm, by encouraging the actions to stay within predictable bounds.
We integrate Ego-Foresight with a model-free RL algorithm to solve simulated robotic manipulation tasks, showing an average improvement of 23% in efficiency and 8% in performance.
arXiv Detail & Related papers (2024-05-27T13:32:43Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Scaling Artificial Intelligence for Digital Wargaming in Support of Decision-Making [0.0]
We are developing and implementing a hierarchical reinforcement learning framework that includes a multi-model approach and dimension-invariant observation abstractions.
By advancing AI-enabled systems, we will be able to enhance all-domain awareness, improve the speed and quality of our decision cycles, offer recommendations for novel courses of action, and more rapidly counter our adversary's actions.
arXiv Detail & Related papers (2024-02-08T21:51:07Z) - DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - Human-Level Reinforcement Learning through Theory-Based Modeling, Exploration, and Planning [27.593497502386143]
Theory-Based Reinforcement Learning uses human-like intuitive theories to explore and model an environment.
We instantiate the approach in a video game playing agent called EMPA.
EMPA matches human learning efficiency on a suite of 90 Atari-style video games.
arXiv Detail & Related papers (2021-07-27T01:38:13Z) - Explore and Control with Adversarial Surprise [78.41972292110967]
Reinforcement learning (RL) provides a framework for learning goal-directed policies given user-specified rewards.
We propose a new unsupervised RL technique based on an adversarial game which pits two policies against each other to compete over the amount of surprise an RL agent experiences.
We show that our method leads to the emergence of complex skills by exhibiting clear phase transitions.
arXiv Detail & Related papers (2021-07-12T17:58:40Z) - Guiding Robot Exploration in Reinforcement Learning via Automated Planning [6.075903612065429]
Reinforcement learning (RL) enables an agent to learn from trial-and-error experiences toward achieving long-term goals.
Automated planning, by contrast, aims to compute plans for accomplishing tasks using action knowledge.
We develop Guided Dyna-Q (GDQ) to enable RL agents to reason with action knowledge to avoid exploring less-relevant states.
arXiv Detail & Related papers (2020-04-23T21:03:30Z) - The Chef's Hat Simulation Environment for Reinforcement-Learning-Based Agents [54.63186041942257]
We propose a virtual simulation environment that implements the Chef's Hat card game, designed to be used in Human-Robot Interaction scenarios.
This paper provides a controllable and reproducible scenario for reinforcement-learning algorithms.
arXiv Detail & Related papers (2020-03-12T15:52:49Z)
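As referenced in the "Mastering the Digital Art of War" entry above, the localized observation abstraction with piecewise linear spatial decay can be illustrated with a short sketch. This is not the dissertation's implementation; the radii, the scalar aggregation, and the function names are assumptions chosen only to show the shape of the idea: full weight up close, zero weight far away, and a linear ramp in between.
```python
import numpy as np

def spatial_decay(distance, near_radius=10.0, far_radius=40.0):
    """Piecewise linear weight: 1 inside near_radius, 0 beyond far_radius,
    and a linear ramp in between. Radii are illustrative assumptions."""
    if distance <= near_radius:
        return 1.0
    if distance >= far_radius:
        return 0.0
    return 1.0 - (distance - near_radius) / (far_radius - near_radius)

def localized_observation(unit_xy, entity_positions, entity_strengths):
    """Aggregate entity strength discounted by distance from the controlled
    unit, so distant detail is compressed and the RL input stays compact."""
    unit_xy = np.asarray(unit_xy, dtype=float)
    total = 0.0
    for pos, strength in zip(entity_positions, entity_strengths):
        d = float(np.linalg.norm(np.asarray(pos, dtype=float) - unit_xy))
        total += spatial_decay(d) * strength
    return total

# Example: two nearby entities count fully; the distant one is discounted.
print(localized_observation((0.0, 0.0),
                            [(3.0, 4.0), (0.0, 8.0), (30.0, 0.0)],
                            [2.0, 1.0, 4.0]))  # ~4.33
```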