Go-Explore Complex 3D Game Environments for Automated Reachability
Testing
- URL: http://arxiv.org/abs/2209.00570v1
- Date: Thu, 1 Sep 2022 16:31:37 GMT
- Authors: Cong Lu, Raluca Georgescu, Johan Verwey
- Abstract summary: We propose an approach specifically targeted at reachability bugs in simulated 3D environments based on the powerful exploration algorithm, Go-Explore.
Go-Explore saves unique checkpoints across the map and then identifies promising ones to explore from.
Our algorithm can fully cover a vast 1.5km x 1.5km game world within 10 hours on a single machine.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern AAA video games feature huge game levels and maps which are
increasingly hard for level testers to cover exhaustively. As a result, games
often ship with catastrophic bugs such as the player falling through the floor
or being stuck in walls. We propose an approach specifically targeted at
reachability bugs in simulated 3D environments based on the powerful
exploration algorithm, Go-Explore, which saves unique checkpoints across the
map and then identifies promising ones to explore from. We show that when
coupled with simple heuristics derived from the game's navigation mesh,
Go-Explore finds challenging bugs and comprehensively explores complex
environments without the need for human demonstration or knowledge of the game
dynamics. Go-Explore vastly outperforms more complicated baselines, including
reinforcement learning with intrinsic curiosity, in both navigation-mesh
coverage and the number of unique positions discovered across the map. Finally,
due to our use of parallel agents, our algorithm can fully cover a vast 1.5km x
1.5km game world within 10 hours on a single machine, making it extremely
promising for continuous testing suites.
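The abstract describes the core Go-Explore loop: keep an archive of unique checkpoints across the map, repeatedly select a promising one, restore the game to it, and explore outward, saving any newly reached positions. Below is a minimal, illustrative sketch of that loop, assuming a toy 2D grid world standing in for the game; the `ToyWorld` class, the cell granularity, and the visit-count selection weighting are assumptions for illustration, not the paper's implementation.

```python
import random

class ToyWorld:
    """Tiny deterministic stand-in for a game: an agent on a bounded 20x20 grid."""
    def __init__(self):
        self.pos = (0, 0)
    def reset(self):
        self.pos = (0, 0)
        return self.pos
    def save_state(self):
        return self.pos  # a real game would serialize the full game state
    def restore_state(self, state):
        self.pos = state
    def sample_action(self):
        return random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    def step(self, action):
        x, y = self.pos
        dx, dy = action
        # clamp to the map bounds; walls limit the reachable area
        self.pos = (min(19, max(0, x + dx)), min(19, max(0, y + dy)))
        return self.pos

def go_explore(env, iterations=500, rollout_len=20):
    """Archive-based exploration: cell -> (restorable checkpoint, times selected)."""
    start = env.reset()
    archive = {start: (env.save_state(), 0)}
    for _ in range(iterations):
        # Select a promising cell: weight toward rarely-selected ones.
        cells = list(archive)
        weights = [1.0 / (1 + archive[c][1]) for c in cells]
        chosen = random.choices(cells, weights=weights)[0]
        state, visits = archive[chosen]
        archive[chosen] = (state, visits + 1)
        env.restore_state(state)           # "go": return to the checkpoint
        for _ in range(rollout_len):       # "explore": random rollout from it
            cell = env.step(env.sample_action())
            if cell not in archive:        # save each newly reached unique cell
                archive[cell] = (env.save_state(), 0)
    return set(archive)

random.seed(0)
covered = go_explore(ToyWorld())
print(f"covered {len(covered)} of 400 cells")
```

The set of covered cells is the coverage measure: cells that never enter the archive after many iterations are candidate unreachable regions, which is how an archive like this doubles as a reachability test.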
Related papers
- WILD-SCAV: Benchmarking FPS Gaming AI on Unity3D-based Environments
Recent advances in deep reinforcement learning (RL) have demonstrated complex decision-making capabilities in simulation environments.
However, they hardly transfer to more complicated problems, due to the lack of complexity and variation in the environments they are trained and tested on.
We developed WILD-SCAV, a powerful environment based on a 3D open-world FPS game, to bridge the gap.
It provides realistic 3D environments of variable complexity, various tasks, and multiple modes of interaction, where agents can learn to perceive 3D environments, navigate and plan, and compete and cooperate in a human-like manner.
arXiv Detail & Related papers (2022-10-14T13:39:41Z)
- Inspector: Pixel-Based Automated Game Testing via Exploration, Detection, and Investigation
Inspector is a game testing agent that can be easily applied to different games without deep integration with games.
Inspector is based on purely pixel inputs and comprises three key modules: game space explorer, key object detector, and human-like object investigator.
Experiment results demonstrate the effectiveness of Inspector in exploring game space, detecting key objects, and investigating objects.
arXiv Detail & Related papers (2022-07-18T04:49:07Z)
- Learning to Identify Perceptual Bugs in 3D Video Games
We show that it is possible to identify a range of perceptual bugs using learning-based methods.
World of Bugs (WOB) is an open platform for testing automated bug detection (ABD) methods in 3D game environments.
arXiv Detail & Related papers (2022-02-25T18:50:11Z)
- CCPT: Automatic Gameplay Testing and Validation with Curiosity-Conditioned Proximal Trajectories
The Curiosity-Conditioned Proximal Trajectories (CCPT) method combines curiosity and imitation learning to train agents to explore.
We show how CCPT can explore complex environments, discover gameplay issues and design oversights in the process, and recognize and highlight them directly to game designers.
arXiv Detail & Related papers (2022-02-21T09:08:33Z)
- Graph augmented Deep Reinforcement Learning in the GameRLand3D environment
We introduce a hybrid technique combining a low level policy trained with reinforcement learning and a graph based high level classical planner.
In an in-depth experimental study, we quantify the limitations of end-to-end Deep RL approaches in vast environments.
We also introduce "GameRLand3D", a new benchmark and soon-to-be-released environment that can generate complex procedural 3D maps for navigation tasks.
arXiv Detail & Related papers (2021-12-22T08:48:00Z)
- Vision-Based Mobile Robotics Obstacle Avoidance With Deep Reinforcement Learning
Obstacle avoidance is a fundamental and challenging problem for autonomous navigation of mobile robots.
In this paper, we consider the problem of obstacle avoidance in simple 3D environments where the robot has to solely rely on a single monocular camera.
We tackle obstacle avoidance with a data-driven, end-to-end deep learning approach.
arXiv Detail & Related papers (2021-03-08T13:05:46Z)
- Deep Reinforcement Learning for Navigation in AAA Video Games
In video games, non-player characters (NPCs) are used to enhance the players' experience.
The most popular approach for NPC navigation in the video game industry is to use a navigation mesh (NavMesh).
We propose to use Deep Reinforcement Learning (Deep RL) to learn how to navigate 3D maps using any navigation ability.
arXiv Detail & Related papers (2020-11-09T21:07:56Z)
- Occupancy Anticipation for Efficient Exploration and Navigation
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
- Active Visual Information Gathering for Vision-Language Navigation
Vision-language navigation (VLN) is the task of directing an agent to carry out navigational instructions inside photo-realistic environments.
One of the key challenges in VLN is how to conduct a robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment.
This work draws inspiration from human navigation behavior and endows an agent with an active information gathering ability for a more intelligent VLN policy.
arXiv Detail & Related papers (2020-07-15T23:54:20Z)
- BADGR: An Autonomous Self-Supervised Learning-Based Navigation System
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.