Policy Fusion for Adaptive and Customizable Reinforcement Learning Agents
- URL: http://arxiv.org/abs/2104.10610v1
- Date: Wed, 21 Apr 2021 16:08:44 GMT
- Title: Policy Fusion for Adaptive and Customizable Reinforcement Learning Agents
- Authors: Alessandro Sestini, Alexander Kuhnle, Andrew D. Bagdanov
- Abstract summary: We show how to combine distinct behavioral policies to obtain a meaningful "fusion" policy.
We propose four different policy fusion methods for combining pre-trained policies.
We provide several practical examples and use cases showing how these methods benefit video game production and game designers.
- Score: 137.86426963572214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article we study the problem of training intelligent agents
using reinforcement learning for the purpose of game development. Unlike
systems built to replace human players and achieve super-human performance,
our agents aim to produce meaningful interactions with the player while
demonstrating the behavioral traits desired by game designers. We show how to
combine distinct behavioral policies to obtain a meaningful "fusion" policy
that comprises all of these behaviors. To this end, we propose four different
policy fusion methods for combining pre-trained policies. We further
demonstrate how these methods can be used in combination with Inverse
Reinforcement Learning to create intelligent agents with specific behavioral
styles chosen by game designers, without having to define many, possibly
poorly designed, reward functions. Experiments on two different environments
indicate that entropy-weighted policy fusion significantly outperforms the
other methods. We provide several practical examples and use cases showing
how these methods are useful for video game production and for game designers.
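The abstract names entropy-weighted policy fusion as the best-performing method but does not reproduce its formulation. As a rough illustration only, the sketch below implements one common reading of the idea: each pre-trained policy's action distribution is mixed with a weight that grows as its entropy shrinks, so more confident policies dominate the fused decision. The function names, the softmax-over-negative-entropies weighting, and the temperature parameter are assumptions made for this sketch, not the paper's definition.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete action distribution."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def entropy_weighted_fusion(dists, temperature=1.0):
    """Fuse per-state action distributions from several pre-trained policies.

    Lower entropy (a more decisive policy) yields a larger mixture weight.
    This is an assumed reading of "entropy-weighted" fusion, not the paper's
    exact rule.

    dists: list of 1-D arrays, each a distribution over the same action set.
    """
    ents = np.array([entropy(p) for p in dists])
    # Softmax over negative entropies: confident policies dominate the mix.
    logits = -ents / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    fused = sum(w * p for w, p in zip(weights, dists))
    return fused / fused.sum()  # renormalize against numerical drift

# Example: a confident "aggressive" policy and an uncertain "cautious" one,
# both defined over the same three actions (illustrative values only).
aggressive = np.array([0.80, 0.15, 0.05])  # low entropy
cautious = np.array([0.40, 0.35, 0.25])    # high entropy
print(entropy_weighted_fusion([aggressive, cautious]))
```

Under this reading, the temperature controls how sharply the mixture favors the most confident policy: a low temperature approaches picking the single lowest-entropy policy per state, while a high temperature approaches a uniform average.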
Related papers
- Aligning Agents like Large Language Models [8.873319874424167]
Training agents to behave as desired in complex 3D environments from high-dimensional sensory information is challenging.
We draw an analogy between the undesirable behaviors of imitation learning agents and the unhelpful responses of unaligned large language models (LLMs).
We demonstrate that we can align our agent to consistently perform the desired mode, while providing insights and advice for successfully applying this approach to training agents.
arXiv Detail & Related papers (2024-06-06T16:05:45Z)
- Generating Personas for Games with Multimodal Adversarial Imitation Learning [47.70823327747952]
Reinforcement learning has been widely successful in producing agents capable of playing games at a human level.
Going beyond reinforcement learning is necessary to model a wide range of human playstyles.
This paper presents a novel imitation learning approach to generate multiple persona policies for playtesting.
arXiv Detail & Related papers (2023-08-15T06:58:19Z)
- Explaining Reinforcement Learning Policies through Counterfactual Trajectories [147.7246109100945]
A human developer must validate that an RL agent will perform well at test time.
Our method conveys how the agent performs under distribution shifts by showing the agent's behavior across a wider trajectory distribution.
In a user study, we demonstrate that our method enables users to score better than baseline methods on one of two agent validation tasks.
arXiv Detail & Related papers (2022-01-29T00:52:37Z)
- Contrastive Explanations for Comparing Preferences of Reinforcement Learning Agents [16.605295052893986]
In complex tasks where the reward function is not straightforward, multiple reinforcement learning (RL) policies can be trained by adjusting the impact of individual objectives on the reward function.
In this work we compare the behavior of two policies trained on the same task but with different preferences over objectives.
We propose a method for distinguishing differences in behavior that stem from different abilities from those that are a consequence of the opposing preferences of the two RL agents.
arXiv Detail & Related papers (2021-12-17T11:57:57Z)
- Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning [131.1852444489217]
This paper presents Object-aware REgularizatiOn (OREO), a technique that regularizes an imitation policy in an object-aware manner.
Our main idea is to encourage a policy to uniformly attend to all semantic objects, in order to prevent the policy from exploiting nuisance variables strongly correlated with expert actions.
arXiv Detail & Related papers (2021-10-27T01:56:23Z)
- Opponent Learning Awareness and Modelling in Multi-Objective Normal Form Games [5.0238343960165155]
It is essential for an agent to learn about the behaviour of other agents in the system.
We present the first study of the effects of such opponent modelling on multi-objective multi-agent interactions with non-linear utilities.
arXiv Detail & Related papers (2020-11-14T12:35:32Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents in order to evaluate how the agents learn to be competitive, and we explain how they adapt to each other's playing styles.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.