Evaluating Environments Using Exploratory Agents
- URL: http://arxiv.org/abs/2409.02632v1
- Date: Wed, 4 Sep 2024 11:51:26 GMT
- Title: Evaluating Environments Using Exploratory Agents
- Authors: Bobby Khaleque, Mike Cook, Jeremy Gow
- Abstract summary: We investigate using an exploratory agent to provide feedback on the design of procedurally generated game levels.
Our study showed that our exploratory agent can clearly distinguish between engaging and unengaging levels.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Exploration is a key part of many video games. We investigate using an exploratory agent to provide feedback on the design of procedurally generated game levels: 5 engaging levels and 5 unengaging levels. We expand upon a framework introduced in previous research which models motivations for exploration and introduce a fitness function for evaluating an environment's potential for exploration. Our study showed that our exploratory agent can clearly distinguish between engaging and unengaging levels. The findings suggest that our agent has the potential to serve as an effective tool for assessing procedurally generated levels in terms of exploration. This work contributes to the growing field of AI-driven game design by offering new insights into how game environments can be evaluated and optimised for player exploration.
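The abstract does not describe the fitness function itself, so the sketch below is only a rough illustration of what scoring an environment's exploration potential could look like: all names, fields, and weights are assumptions, not the authors' method. It scores a simulated exploration trace by tile coverage plus the fraction of points of interest encountered.

```python
# Illustrative sketch only: a hypothetical fitness function for an
# environment's exploration potential. The coverage/novelty decomposition
# and the weights are assumptions, not the paper's actual formulation.
from dataclasses import dataclass

@dataclass
class ExplorationTrace:
    visited_tiles: set            # tiles the agent stepped on during its run
    reachable_tiles: set          # all tiles reachable from the start position
    points_of_interest_seen: int  # distinct landmarks the agent encountered
    total_points_of_interest: int # landmarks placed in the level

def exploration_fitness(trace: ExplorationTrace,
                        w_coverage: float = 0.5,
                        w_novelty: float = 0.5) -> float:
    """Return a score in [0, 1]; higher suggests more exploration potential."""
    if not trace.reachable_tiles:
        return 0.0
    coverage = len(trace.visited_tiles & trace.reachable_tiles) / len(trace.reachable_tiles)
    novelty = (trace.points_of_interest_seen / trace.total_points_of_interest
               if trace.total_points_of_interest else 0.0)
    return w_coverage * coverage + w_novelty * novelty
```

Under this assumed scheme, an engaging level would be one where a simulated agent can reach and visit most of the map and encounters many distinct landmarks, while an unengaging level yields low coverage or few points of interest.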
Related papers
- Preference-conditioned Pixel-based AI Agent For Game Testing [1.5059676044537105]
Game-testing AI agents that learn by interacting with the environment have the potential to mitigate the challenges of scaling manual game testing.
This paper proposes an agent design that mainly depends on pixel-based state observations while exploring the environment conditioned on a user's preference.
Our agent significantly outperforms state-of-the-art pixel-based game testing agents over exploration coverage and test execution quality when evaluated on a complex open-world environment resembling many aspects of real AAA games.
arXiv Detail & Related papers (2023-08-18T04:19:36Z) - Embodied Agents for Efficient Exploration and Smart Scene Description [47.82947878753809]
We tackle a setting for visual navigation in which an autonomous agent needs to explore and map an unseen indoor environment.
We propose and evaluate an approach that combines recent advances in visual robotic exploration and image captioning.
Our approach can generate smart scene descriptions that maximize semantic knowledge of the environment and avoid repetitions.
arXiv Detail & Related papers (2023-01-17T19:28:01Z) - CCPT: Automatic Gameplay Testing and Validation with Curiosity-Conditioned Proximal Trajectories [65.35714948506032]
The Curiosity-Conditioned Proximal Trajectories (CCPT) method combines curiosity and imitation learning to train agents to explore.
We show how CCPT can explore complex environments, discover gameplay issues and design oversights in the process, and recognize and highlight them directly to game designers.
arXiv Detail & Related papers (2022-02-21T09:08:33Z) - Interesting Object, Curious Agent: Learning Task-Agnostic Exploration [44.18450799034677]
In this paper, we propose a paradigm change in the formulation and evaluation of task-agnostic exploration.
We show that our formulation is effective and provides the most consistent exploration across several training-testing environment pairs.
arXiv Detail & Related papers (2021-11-25T15:17:32Z) - Long-Term Exploration in Persistent MDPs [68.8204255655161]
In this paper, we propose an exploration method called Rollback-Explore (RbExplore), which utilizes the concept of the persistent Markov decision process.
We test our algorithm in the hard-exploration Prince of Persia game, without rewards and domain knowledge.
arXiv Detail & Related papers (2021-09-21T13:47:04Z) - Benchmarking the Spectrum of Agent Capabilities [7.088856621650764]
We introduce Crafter, an open world survival game with visual inputs that evaluates a wide range of general abilities within a single environment.
Agents learn from the provided reward signal or through intrinsic objectives and are evaluated by semantically meaningful achievements.
We experimentally verify that Crafter is of appropriate difficulty to drive future research and provide baseline scores for reward agents and unsupervised agents.
arXiv Detail & Related papers (2021-09-14T15:49:31Z) - Learning Affordance Landscapes for Interaction Exploration in 3D Environments [101.90004767771897]
Embodied agents must be able to master how their environment works.
We introduce a reinforcement learning approach for exploration for interaction.
We demonstrate our idea with AI2-iTHOR.
arXiv Detail & Related papers (2020-08-21T00:29:36Z) - Interactive Evolution and Exploration Within Latent Level-Design Space of Generative Adversarial Networks [8.091708140619946]
Latent Variable Evolution (LVE) has recently been applied to game levels.
This paper introduces a tool for interactive LVE of tile-based levels for games.
The tool also allows direct exploration of the latent dimensions and lets users play discovered levels.
arXiv Detail & Related papers (2020-03-31T22:52:17Z) - An Exploration of Embodied Visual Exploration [97.21890864063872]
Embodied computer vision considers perception for robots in novel, unstructured environments.
We present a taxonomy for existing visual exploration algorithms and create a standard framework for benchmarking them.
We then perform a thorough empirical study of the four state-of-the-art paradigms using the proposed framework.
arXiv Detail & Related papers (2020-01-07T17:40:32Z)