Combining Evolutionary Search with Behaviour Cloning for Procedurally Generated Content
- URL: http://arxiv.org/abs/2207.14772v1
- Date: Fri, 29 Jul 2022 16:25:52 GMT
- Title: Combining Evolutionary Search with Behaviour Cloning for Procedurally Generated Content
- Authors: Nicholas Muir, Steven James
- Abstract summary: We consider the problem of procedural content generation for video game levels.
Prior approaches have relied on evolutionary search (ES) methods capable of generating diverse levels.
We propose a framework to tackle the procedural content generation problem that combines the best of ES and RL.
- Score: 2.7412662946127755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we consider the problem of procedural content generation for
video game levels. Prior approaches have relied on evolutionary search (ES)
methods capable of generating diverse levels, but this generation procedure is
slow, which is problematic in real-time settings. Reinforcement learning (RL)
has also been proposed to tackle the same problem, and while level generation
is fast, training time can be prohibitively expensive. We propose a framework
to tackle the procedural content generation problem that combines the best of
ES and RL. In particular, our approach first uses ES to generate a sequence of
levels evolved over time, and then uses behaviour cloning to distil these
levels into a policy, which can then be queried to produce new levels quickly.
We apply our approach to a maze game and Super Mario Bros, with our results
indicating that our approach does in fact decrease the time required for level
generation, especially when an increasing number of valid levels are required.
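To make the two-stage pipeline concrete, below is a minimal Python sketch of the idea: run evolutionary search while recording the (level, improved level) pairs it produces, then fit a supervised model to those pairs and query it for new levels. This is an illustration under assumptions, not the authors' implementation: the grid representation, the mutation operator, the density-based fitness, and the linear least-squares "policy" are all hypothetical stand-ins.

```python
import numpy as np

# --- Stage 1: evolutionary search (ES) over grid levels -------------------
# Hypothetical setup: levels are 10x10 binary grids (1 = wall, 0 = floor);
# fitness and mutation are placeholders for game-specific operators.

RNG = np.random.default_rng(0)
H, W = 10, 10

def fitness(level):
    # Placeholder: reward a target wall density; a real PCG fitness would
    # check solvability, path length, and so on.
    return -abs(level.mean() - 0.3)

def mutate(level, n_flips=3):
    # Flip a few random tiles to produce a child level.
    child = level.copy()
    idx = RNG.integers(0, H * W, size=n_flips)
    child.flat[idx] ^= 1
    return child

def evolve(generations=200, pop_size=20):
    """Run ES, recording (parent, fitter child) pairs along the trajectory."""
    pop = [RNG.integers(0, 2, size=(H, W)) for _ in range(pop_size)]
    dataset = []
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        pop = []
        for p in parents:
            for _ in range(2):
                c = mutate(p)
                # Improvement pairs trace the evolution the policy will imitate.
                if fitness(c) > fitness(p):
                    dataset.append((p.ravel(), c.ravel()))
                pop.append(c)
    return dataset

# --- Stage 2: behaviour cloning -------------------------------------------
# Distil the recorded trajectory into a policy: given a level, predict an
# improved level. A linear least-squares model stands in for the supervised
# network used in practice.

def clone(dataset):
    X = np.stack([x for x, _ in dataset]).astype(float)
    Y = np.stack([y for _, y in dataset]).astype(float)
    W_bc, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return lambda level: (level.ravel() @ W_bc > 0.5).reshape(H, W).astype(int)

policy = clone(evolve())
# Query the cloned policy: one forward pass per new level instead of a
# full evolutionary run.
new_level = policy(RNG.integers(0, 2, size=(H, W)))
```

In practice a neural network would replace the linear model, but the speed argument is the same: after distillation, producing a level costs a single forward pass rather than an evolutionary search.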
Related papers
- Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation [51.06031200728449]
We propose a novel framework called mccHRL to provide different levels of temporal abstraction on listwise recommendation.
Within the hierarchical framework, the high-level agent studies the evolution of user perception, while the low-level agent produces the item selection policy.
Results show a significant performance improvement for our method compared with several well-known baselines.
arXiv Detail & Related papers (2024-09-11T17:01:06Z)
- Zero-Shot Reasoning: Personalized Content Generation Without the Cold Start Problem [0.0]
This paper presents a novel approach to achieving personalization by using large language models.
We propose generating levels based on gameplay data continuously collected from individual players.
Our method has proven viable in a production setting and outperformed levels generated by traditional methods, as measured by the probability that a player will not quit the game mid-level.
arXiv Detail & Related papers (2024-02-15T17:37:25Z)
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe for converting static behavior datasets into policies that can perform better than the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
- Learning to Generate Levels by Imitating Evolution [7.110423254122942]
We introduce a new type of iterative level generator using machine learning.
We train a model to imitate the evolutionary process and use the model to generate levels.
This trained model is able to modify noisy levels sequentially to create better levels without the need for a fitness function.
arXiv Detail & Related papers (2022-06-11T10:44:57Z)
- Procedural Content Generation using Neuroevolution and Novelty Search for Diverse Video Game Levels [2.320417845168326]
Procedurally generated video game content has the potential to drastically reduce the content creation budget of game developers and large studios.
However, adoption is hindered by limitations such as slow generation, as well as low quality and diversity of content.
We introduce an evolutionary search-based approach for evolving level generators using novelty search to procedurally generate diverse levels in real time.
arXiv Detail & Related papers (2022-04-14T12:54:32Z)
- HCV: Hierarchy-Consistency Verification for Incremental Implicitly-Refined Classification [48.68128465443425]
Human beings learn and accumulate hierarchical knowledge over their lifetime.
Current incremental learning methods lack the ability to build a concept hierarchy by associating new concepts to old ones.
We propose Hierarchy-Consistency Verification (HCV) as an enhancement to existing continual learning methods.
arXiv Detail & Related papers (2021-10-21T13:54:00Z)
- Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning [73.24988226158497]
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
arXiv Detail & Related papers (2021-06-17T17:56:08Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as rewards.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
arXiv Detail & Related papers (2021-06-14T18:48:40Z)
- Level Generation for Angry Birds with Sequential VAE and Latent Variable Evolution [25.262831218008202]
We develop deep-generative-model-based level generation for the game domain of Angry Birds.
Experiments show that the proposed level generator drastically improves the stability and diversity of generated levels.
arXiv Detail & Related papers (2021-04-13T11:23:39Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval (PR) is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up new possibilities for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- PCGRL: Procedural Content Generation via Reinforcement Learning [6.32656340734423]
We investigate how reinforcement learning can be used to train level-designing agents in games.
By seeing the design problem as a sequential task, we can use reinforcement learning to learn how to take the next action.
This approach can be used when few or no examples exist to train from, and the trained generator is very fast.
arXiv Detail & Related papers (2020-01-24T22:09:08Z)
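For contrast with the ES-plus-cloning sketch above, the PCGRL entry just before this point frames level design as a sequential decision problem. Below is a minimal, hypothetical sketch of that interaction loop: each action edits one tile and the reward is the change in a placeholder quality score. The edit action space, the reward, and the random agent are stand-ins invented for this example, not the PCGRL paper's actual environment.

```python
import numpy as np

# Hypothetical PCGRL-style loop: the "environment" is the level under
# construction; each action edits one tile, and the reward measures how
# much the edit improved a quality score. A random agent stands in for
# the trained RL policy.

RNG = np.random.default_rng(1)
H, W = 10, 10

def quality(level):
    # Placeholder quality score (target tile density); real PCGRL rewards
    # check playability constraints of the target game.
    return -abs(level.mean() - 0.3)

level = RNG.integers(0, 2, size=(H, W))
for _ in range(50):
    # Action = (row, col, tile value): a single sequential edit.
    r, c, v = RNG.integers(0, H), RNG.integers(0, W), RNG.integers(0, 2)
    before = quality(level)
    level[r, c] = v
    reward = quality(level) - before  # improvement-shaped reward
    # A trained agent would update its policy from (state, action, reward)
    # here; once trained, the generator needs no examples and runs fast.
```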
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.