Modular Procedural Generation for Voxel Maps
- URL: http://arxiv.org/abs/2104.08890v1
- Date: Sun, 18 Apr 2021 16:21:35 GMT
- Title: Modular Procedural Generation for Voxel Maps
- Authors: Adarsh Pyarelal, Aditya Banerjee, Kobus Barnard
- Abstract summary: In this paper, we present mcg, an open-source library to facilitate implementing PCG algorithms for voxel-based environments such as Minecraft.
The library is designed with human-machine teaming research in mind, and thus takes a 'top-down' approach to generation.
The benefits of this approach include rapid, scalable, and efficient development of virtual environments, the ability to control the statistics of the environment at a semantic level, and the ability to generate novel environments in response to player actions in real time.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task environments developed in Minecraft are becoming increasingly popular
for artificial intelligence (AI) research. However, most of these are currently
constructed manually, thus failing to take advantage of procedural content
generation (PCG), a capability unique to virtual task environments. In this
paper, we present mcg, an open-source library to facilitate implementing PCG
algorithms for voxel-based environments such as Minecraft. The library is
designed with human-machine teaming research in mind, and thus takes a
'top-down' approach to generation, simultaneously generating low- and high-level
machine-readable representations that are suitable for empirical research.
These can be consumed by downstream AI applications that consider human spatial
cognition. The benefits of this approach include rapid, scalable, and efficient
development of virtual environments, the ability to control the statistics of
the environment at a semantic level, and the ability to generate novel
environments in response to player actions in real time.
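The abstract does not show mcg's actual API, but the 'top-down' idea it describes can be sketched: decide the semantic structure of the map first (what buildings exist, where, and how big), then emit the low-level voxel blocks from that description. All names below (`generate_map`, `generate_building`, the block dictionary fields) are illustrative assumptions, not mcg's interface.

```python
import random

def generate_building(origin, width, depth, height, material="planks"):
    """Emit voxel blocks for the shell (floor, roof, walls) of one building."""
    x0, y0, z0 = origin
    blocks = []
    for dx in range(width):
        for dz in range(depth):
            for dy in range(height):
                on_edge = dx in (0, width - 1) or dz in (0, depth - 1)
                if dy == 0 or dy == height - 1 or on_edge:
                    blocks.append({"x": x0 + dx, "y": y0 + dy, "z": z0 + dz,
                                   "material": material})
    return blocks

def generate_map(n_buildings, seed=0):
    """Top-down generation: fix the semantic description first, then derive voxels.

    Returns both representations, mirroring the paper's claim that low- and
    high-level machine-readable outputs are produced simultaneously."""
    rng = random.Random(seed)
    semantic, voxels = [], []
    for i in range(n_buildings):
        origin = (rng.randrange(0, 64), 4, rng.randrange(0, 64))
        w, d, h = rng.randint(5, 9), rng.randint(5, 9), rng.randint(4, 6)
        semantic.append({"id": i, "type": "building",
                         "origin": origin, "size": [w, d, h]})
        voxels.extend(generate_building(origin, w, d, h))
    return semantic, voxels

semantic, voxels = generate_map(3)
```

Because generation is seeded and driven by the semantic layer, the same description can be re-emitted deterministically, and environment statistics (building counts, sizes) are controlled at the semantic level rather than block by block.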
Related papers
- EmbodiedCity: A Benchmark Platform for Embodied Agent in Real-world City Environment [38.14321677323052]
Embodied artificial intelligence emphasizes the role of an agent's body in generating human-like behaviors.
In this paper, we construct a benchmark platform for embodied intelligence evaluation in real-world city environments.
arXiv Detail & Related papers (2024-10-12T17:49:26Z)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, extreme robustness even under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Arbitrarily Scalable Environment Generators via Neural Cellular Automata [55.150593161240444]
We show that NCA environment generators maintain consistent, regularized patterns regardless of environment size.
Our method scales a single-agent reinforcement learning policy to arbitrarily large environments with similar patterns.
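The NCA in that paper learns its update rule; a hand-coded cellular automaton is enough to illustrate the property the summary highlights, namely that a purely local rule produces consistent patterns at any environment size. The smoothing rule and thresholds below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def ca_step(grid):
    """One local update: a cell becomes wall (1) if >= 5 cells in its
    3x3 neighborhood (including itself) are walls."""
    h, w = grid.shape
    padded = np.pad(grid, 1, constant_values=1)  # treat the border as wall
    counts = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3))
    return (counts >= 5).astype(np.uint8)

def generate(shape, steps=4, seed=0):
    """Random noise smoothed by repeated local updates yields cave-like maps."""
    rng = np.random.default_rng(seed)
    grid = (rng.random(shape) < 0.45).astype(np.uint8)
    for _ in range(steps):
        grid = ca_step(grid)
    return grid

small = generate((32, 32))
large = generate((256, 256))  # the identical rule scales to any grid size
```

Because the rule only ever looks at a 3x3 neighborhood, nothing in it depends on the grid dimensions, which is the same reason a trained NCA generator can be rolled out on arbitrarily large environments.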
arXiv Detail & Related papers (2023-10-28T07:30:09Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory [97.87093169454431]
Ghost in the Minecraft (GITM) is a novel framework that integrates Large Language Models (LLMs) with text-based knowledge and memory.
We develop a set of structured actions and leverage LLMs to generate action plans for the agents to execute.
The resulting LLM-based agent markedly surpasses previous methods, achieving a remarkable improvement of +47.5% in success rate.
arXiv Detail & Related papers (2023-05-25T17:59:49Z)
- BEHAVIOR in Habitat 2.0: Simulator-Independent Logical Task Description for Benchmarking Embodied AI Agents [31.499374840833124]
Inspired by the catalyzing effect that benchmarks have had in AI, the community is looking for new benchmarks for embodied AI.
We bring a subset of BEHAVIOR activities into Habitat 2.0 to benefit from its fast simulation speed.
arXiv Detail & Related papers (2022-06-13T21:37:31Z)
- Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments in order to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
arXiv Detail & Related papers (2021-09-16T10:37:21Z)
- Zero-Shot Reinforcement Learning on Graphs for Autonomous Exploration Under Uncertainty [6.42522897323111]
We present a framework for self-learning a high-performance exploration policy in a single simulation environment.
We propose a novel approach that uses graph neural networks in conjunction with deep reinforcement learning.
arXiv Detail & Related papers (2021-05-11T02:42:17Z)
- NLPGym -- A toolkit for evaluating RL agents on Natural Language Processing Tasks [2.5760935151452067]
We release NLPGym, an open-source Python toolkit that provides interactive textual environments for standard NLP tasks.
We present experimental results for six tasks using different RL algorithms, which serve as baselines for further research.
arXiv Detail & Related papers (2020-11-16T20:58:35Z)
- The Chef's Hat Simulation Environment for Reinforcement-Learning-Based Agents [54.63186041942257]
We propose a virtual simulation environment that implements the Chef's Hat card game, designed to be used in Human-Robot Interaction scenarios.
This paper provides a controllable and reproducible scenario for reinforcement-learning algorithms.
arXiv Detail & Related papers (2020-03-12T15:52:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.