Strategy Extraction in Single-Agent Games
- URL: http://arxiv.org/abs/2305.12623v1
- Date: Mon, 22 May 2023 01:28:59 GMT
- Title: Strategy Extraction in Single-Agent Games
- Authors: Archana Vadakattu, Michelle Blom, Adrian R. Pearce
- Abstract summary: We propose an approach to knowledge transfer using behavioural strategies as a form of transferable knowledge, inspired by the human cognitive ability to develop strategies.
We show that our method can identify plausible strategies in three environments: Pacman, Bank Heist and a dungeon-crawling video game.
- Score: 0.19336815376402716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to continuously learn and adapt to new situations is one in which humans are far superior to AI agents. We propose an approach to knowledge transfer that uses behavioural strategies as a form of transferable knowledge, inspired by the human cognitive ability to develop strategies. A strategy is defined as a partial sequence of events - where an event is both the result of an agent's action and a change in state - leading to some predefined event of interest. This information acts as guidance, or a partial solution, that an agent can generalise and use to make predictions about how to handle unknown observed phenomena. As a first step toward this goal, we develop a method for extracting strategies from an agent's existing knowledge that can be applied in multiple contexts. Our method combines observed event frequency information with local sequence alignment techniques to find significant patterns that form a strategy. We show that our method can identify plausible strategies in three environments: Pacman, Bank Heist and a dungeon-crawling video game. Our evaluation serves as a promising first step toward extracting knowledge for generalisation and, ultimately, transfer learning.
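For intuition, here is a minimal sketch of the kind of pipeline the abstract describes: event traces are compared with a standard Smith-Waterman local alignment whose match scores are weighted by inverse event frequency, so rare events contribute more to a pattern's significance. The traces, event names, and scoring scheme below are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def smith_waterman(a, b, score, gap=-1.0):
    """Standard Smith-Waterman local alignment; returns best score and end indices."""
    n, m = len(a), len(b)
    H = [[0.0] * (m + 1) for _ in range(n + 1)]
    best, best_ij = 0.0, (0, 0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            H[i][j] = max(
                0.0,
                H[i - 1][j - 1] + score(a[i - 1], b[j - 1]),  # match/mismatch
                H[i - 1][j] + gap,                            # gap in b
                H[i][j - 1] + gap,                            # gap in a
            )
            if H[i][j] > best:
                best, best_ij = H[i][j], (i, j)
    return best, best_ij

# Hypothetical event traces ending in the same event of interest ("eat-pellet").
traces = [
    ["move", "dodge-ghost", "move", "eat-pellet"],
    ["move", "move", "dodge-ghost", "eat-pellet"],
]

# Frequency weighting: rare events contribute more to a match's significance.
freq = Counter(e for t in traces for e in t)
total = sum(freq.values())

def score(x, y):
    return (total / freq[x]) ** 0.5 if x == y else -1.0

print(smith_waterman(traces[0], traces[1], score))
```

The locally aligned subsequence common to the traces is the candidate strategy; the frequency weighting is one simple way to operationalise "patterns of significance".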
Related papers
- Learning Strategy Representation for Imitation Learning in Multi-Agent Games [15.209555810145549]
We introduce the Strategy Representation for Learning (STRIL) framework, which effectively learns strategy representations in multi-agent games.
STRIL is a plug-in method that can be integrated into existing IL algorithms.
We demonstrate the effectiveness of STRIL across competitive multi-agent scenarios, including Two-player Pong, Limit Texas Hold'em, and Connect Four.
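The summary does not specify how the plug-in works; one common way to realise a "strategy representation" in imitation learning is to condition the policy on a per-trajectory strategy embedding. A minimal sketch under that assumption (all module names and shapes are hypothetical):

```python
import torch
import torch.nn as nn

class StrategyConditionedPolicy(nn.Module):
    """Behaviour-cloning policy that also consumes a per-trajectory strategy embedding."""
    def __init__(self, obs_dim, strat_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + strat_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs, strat):
        return self.net(torch.cat([obs, strat], dim=-1))

policy = StrategyConditionedPolicy(obs_dim=8, strat_dim=4, n_actions=3)
obs, strat = torch.randn(32, 8), torch.randn(32, 4)      # dummy batch
logits = policy(obs, strat)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 3, (32,)))
```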
arXiv Detail & Related papers (2024-09-28T14:30:17Z) - AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents [58.807802111818994]
We propose AnySkill, a novel hierarchical method that learns physically plausible interactions following open-vocabulary instructions.
Our approach begins by developing a set of atomic actions with a low-level controller trained via imitation learning.
An important feature of our method is the use of image-based rewards for the high-level policy, which allows the agent to learn interactions with objects without manual reward engineering.
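The image-based reward can be pictured as a similarity score between the rendered frame and the instruction in a shared embedding space, as produced by a CLIP-like model. A minimal sketch, assuming precomputed embeddings (the encoder itself is not shown):

```python
import numpy as np

def image_based_reward(frame_embedding, instruction_embedding):
    """Hypothetical reward: cosine similarity between the rendered frame's embedding
    and the open-vocabulary instruction's embedding (e.g. from a CLIP-like model)."""
    f = frame_embedding / np.linalg.norm(frame_embedding)
    t = instruction_embedding / np.linalg.norm(instruction_embedding)
    return float(f @ t)

# The high-level policy maximises this reward while sequencing the low-level
# atomic actions learned beforehand via imitation.
r = image_based_reward(np.random.randn(512), np.random.randn(512))
```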
arXiv Detail & Related papers (2024-03-19T15:41:39Z) - Learnability Gaps of Strategic Classification [68.726857356532]
We address a fundamental question: how large are the learnability gaps between strategic classification and standard learning?
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
arXiv Detail & Related papers (2024-02-29T16:09:19Z) - Fast Peer Adaptation with Context-aware Exploration [63.08444527039578]
We propose a peer identification reward for learning agents in multi-agent games.
This reward motivates the agent to learn a context-aware policy for effective exploration and fast adaptation.
We evaluate our method on diverse testbeds that involve competitive (Kuhn Poker), cooperative (PO-Overcooked), or mixed (Predator-Prey-W) games with peer agents.
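The summary does not detail the reward; one plausible reading is an intrinsic bonus equal to the log-probability a context encoder assigns to the true peer identity, which rewards behaviour that makes the peer's type easy to infer. A hedged sketch under that assumption (names and shapes are hypothetical):

```python
import torch
import torch.nn as nn

class PeerIdentifier(nn.Module):
    """Classifies which known peer the agent is facing from an interaction context."""
    def __init__(self, ctx_dim, n_peers):
        super().__init__()
        self.head = nn.Linear(ctx_dim, n_peers)

    def identification_reward(self, ctx, true_peer):
        # Intrinsic reward: log-probability assigned to the true peer identity.
        logp = torch.log_softmax(self.head(ctx), dim=-1)
        return logp.gather(-1, true_peer.unsqueeze(-1)).squeeze(-1)

ident = PeerIdentifier(ctx_dim=16, n_peers=4)
r = ident.identification_reward(torch.randn(8, 16), torch.randint(0, 4, (8,)))
```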
arXiv Detail & Related papers (2024-02-04T13:02:27Z) - Strategic Reasoning with Language Models [35.63300060111918]
Strategic reasoning enables agents to cooperate, communicate, and compete with other agents in diverse situations.
Existing approaches to solving strategic games rely on extensive training, yielding strategies that do not generalize to new scenarios or games without retraining.
This paper introduces an approach that uses pretrained Large Language Models with few-shot chain-of-thought examples to enable strategic reasoning for AI agents.
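In this setting the main machinery is prompt construction: worked game states with explicit reasoning chains are prepended to the current state. A minimal sketch with an invented example (the paper's actual prompts are not reproduced here):

```python
FEW_SHOT_EXAMPLES = [
    # Hypothetical worked example; any real prompt would use the target game.
    ("State: opponent defected last round.",
     "Reasoning: repeated defection suggests a hostile strategy, so I protect my payoff.",
     "Action: defect"),
]

def build_prompt(game_state):
    """Assemble a few-shot chain-of-thought prompt for a pretrained LLM."""
    parts = []
    for state, reasoning, action in FEW_SHOT_EXAMPLES:
        parts.append(f"{state}\n{reasoning}\n{action}\n")
    parts.append(f"State: {game_state}\nReasoning:")
    return "\n".join(parts)

print(build_prompt("opponent cooperated twice in a row."))
```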
arXiv Detail & Related papers (2023-05-30T16:09:19Z) - Explaining Agent's Decision-making in a Hierarchical Reinforcement
Learning Scenario [0.6643086804649938]
Reinforcement learning is a machine learning approach based on behavioral psychology.
In this work, we make use of the memory-based explainable reinforcement learning method in a hierarchical environment composed of sub-tasks.
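Memory-based explainable RL typically justifies a decision by the empirical probability that a state-action pair led to task success in remembered episodes. A minimal sketch of that idea, with a hypothetical memory:

```python
from collections import defaultdict

# Hypothetical episodic memory: (state, action, reached_goal) tuples logged during training.
memory = [("room1", "left", True), ("room1", "left", False), ("room1", "right", True)]

counts = defaultdict(lambda: [0, 0])          # (state, action) -> [successes, attempts]
for s, a, success in memory:
    counts[(s, a)][1] += 1
    counts[(s, a)][0] += int(success)

def explain(state, action):
    """Explain a decision by the empirical probability it led to task success."""
    succ, tries = counts[(state, action)]
    return f"Chose '{action}' in '{state}': it succeeded in {succ}/{tries} remembered episodes."

print(explain("room1", "left"))
```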
arXiv Detail & Related papers (2022-12-14T01:18:45Z) - A Framework for Understanding and Visualizing Strategies of RL Agents [0.0]
We present a framework for learning comprehensible models of sequential decision tasks in which agent strategies are characterized using temporal logic formulas.
We evaluate our framework on combat scenarios in StarCraft II (SC2) using traces from a handcrafted expert policy and a trained reinforcement learning agent.
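As a toy illustration of characterising a strategy with a temporal logic formula, the sketch below checks a "p until q" property over a trace of state dictionaries; the SC2-flavoured predicates are invented for the example:

```python
def eventually(trace, pred):
    """F pred: pred holds at some point in the trace."""
    return any(pred(s) for s in trace)

def until(trace, p, q):
    """p U q: p holds at every step until the first step where q holds."""
    for s in trace:
        if q(s):
            return True
        if not p(s):
            return False
    return False

# Hypothetical trace for a strategy "keep the army grouped until the base is scouted".
trace = [{"grouped": True, "scouted": False},
         {"grouped": True, "scouted": True}]
print(until(trace, lambda s: s["grouped"], lambda s: s["scouted"]))
```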
arXiv Detail & Related papers (2022-08-17T21:58:19Z) - Learning Meta Representations for Agents in Multi-Agent Reinforcement
Learning [12.170248966278281]
In multi-agent reinforcement learning, the behaviors that agents learn in a single Markov Game (MG) are typically confined to that game's specific number of agents.
In this work, our focus is on creating agents that can generalize across population-varying MGs.
Instead of learning a unimodal policy, each agent learns a policy set comprising effective strategies across a variety of games.
arXiv Detail & Related papers (2021-08-30T04:30:53Z) - PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via
Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
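The "relabeling experience" step in the title can be sketched directly: stored rewards are recomputed with the current learned reward model, so older off-policy experience stays consistent as human feedback refines the reward. Shapes and the reward-model interface below are assumptions:

```python
import torch

def relabel_replay_buffer(buffer, reward_model):
    """Recompute stored rewards with the current learned reward model."""
    with torch.no_grad():
        for transition in buffer:
            s, a = transition["state"], transition["action"]
            transition["reward"] = reward_model(torch.cat([s, a], dim=-1)).item()

# Assumed interface: reward_model maps a state-action vector to a scalar reward.
reward_model = torch.nn.Linear(6, 1)
buffer = [{"state": torch.randn(4), "action": torch.randn(2), "reward": 0.0}]
relabel_replay_buffer(buffer, reward_model)
```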
arXiv Detail & Related papers (2021-06-09T14:10:50Z) - Learning from Learners: Adapting Reinforcement Learning Agents to be
Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents in order to evaluate how the agents learn to be competitive, and we explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z) - Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
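Exploitability has a simple closed form in a zero-sum matrix game: the value a best-responding row player can secure against the column strategy, minus the value the column player can hold the given row strategy to. It is zero exactly at a Nash equilibrium. A worked toy example (matching pennies):

```python
import numpy as np

# Zero-sum matrix game (row player's payoffs); matching pennies as a toy example.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def exploitability(p, q):
    """How much the players could gain by best-responding to each other:
    0 at a Nash equilibrium, larger for weaker strategy profiles."""
    row_br = np.max(A @ q)        # best response value for the row player vs q
    col_br = np.min(p @ A)        # column player minimises the row player's payoff
    return row_br - col_br

print(exploitability(np.array([0.5, 0.5]), np.array([0.5, 0.5])))  # 0.0 (Nash)
print(exploitability(np.array([1.0, 0.0]), np.array([0.5, 0.5])))  # 1.0 (exploitable)
```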
arXiv Detail & Related papers (2020-02-24T20:30:38Z)