"It's Unwieldy and It Takes a Lot of Time." Challenges and Opportunities
for Creating Agents in Commercial Games
- URL: http://arxiv.org/abs/2009.00541v1
- Date: Tue, 1 Sep 2020 16:21:19 GMT
- Title: "It's Unwieldy and It Takes a Lot of Time." Challenges and Opportunities
for Creating Agents in Commercial Games
- Authors: Mikhail Jacob, Sam Devlin, Katja Hofmann
- Abstract summary: Game agents such as opponents, non-player characters, and teammates are central to player experiences in many modern games.
As the landscape of AI techniques used in the games industry evolves to adopt machine learning (ML) more widely, it is vital that the research community learn from the best practices cultivated within the industry over decades creating agents.
We interviewed seventeen game agent creators from AAA studios, indie studios, and industrial research labs about the challenges they experienced with their professional workflows.
- Score: 20.63320049616144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Game agents such as opponents, non-player characters, and teammates are
central to player experiences in many modern games. As the landscape of AI
techniques used in the games industry evolves to adopt machine learning (ML)
more widely, it is vital that the research community learn from the best
practices cultivated within the industry over decades creating agents. However,
although commercial game agent creation pipelines are more mature than those
based on ML, opportunities for improvement still abound. As a foundation for
shared progress identifying research opportunities between researchers and
practitioners, we interviewed seventeen game agent creators from AAA studios,
indie studios, and industrial research labs about the challenges they
experienced with their professional workflows. Our study revealed several open
challenges ranging from design to implementation and evaluation. We compare
with literature from the research community that addresses the challenges
identified and conclude by highlighting promising directions for future
research supporting agent creation in the games industry.
Related papers
- A Survey on Large Language Model-Based Game Agents [9.892954815419452]
The development of game agents plays a critical role in advancing towards Artificial General Intelligence (AGI).
This paper provides a comprehensive overview of LLM-based game agents from a holistic viewpoint.
arXiv Detail & Related papers (2024-04-02T15:34:18Z)
- A Survey on Game Playing Agents and Large Models: Methods, Applications, and Challenges [29.74898680986507]
We review the current landscape of LM usage in complex game-playing scenarios and the challenges that remain open.
We present our perspective on promising future research avenues for the advancement of LMs in games.
arXiv Detail & Related papers (2024-03-15T12:37:12Z)
- Visual Encoders for Data-Efficient Imitation Learning in Modern Video Games [13.241655571625822]
Going beyond Atari games towards training agents in modern games has been prohibitively expensive for the vast majority of the research community.
Recent progress in the research, development and open release of large vision models has the potential to amortize some of these costs across the community.
We present a systematic study of imitation learning with publicly available visual encoders compared to the typical, task-specific, end-to-end training approach in Minecraft, Minecraft Dungeons and Counter-Strike: Global Offensive.
arXiv Detail & Related papers (2023-12-04T19:52:12Z)
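The imitation-learning entry above contrasts frozen, publicly released visual encoders with task-specific end-to-end training. As a rough sketch of the frozen-encoder recipe only (not the paper's models, games, or data), the snippet below does behaviour cloning on top of an ImageNet ResNet-18 stand-in; the action count, preprocessing, and dummy batch are assumptions.

```python
# Minimal sketch, assuming a discrete-action game and an ImageNet ResNet-18 as
# a stand-in for the publicly available encoders studied in the paper.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_ACTIONS = 12  # assumption: size of the game's discrete action space

# 1. Frozen, pre-trained encoder (downloads ImageNet weights on first use).
encoder = models.resnet18(weights="DEFAULT")
encoder.fc = nn.Identity()             # keep 512-d features, drop the classifier
encoder.requires_grad_(False).eval()

# 2. Small trainable policy head mapping encoder features to game actions.
policy_head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, NUM_ACTIONS))
optimizer = torch.optim.Adam(policy_head.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def bc_step(frames, actions):
    """One behaviour-cloning step on a batch of (frame, demonstrated action) pairs."""
    with torch.no_grad():                      # encoder stays frozen
        features = encoder(preprocess(frames))
    logits = policy_head(features)
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: frames are float tensors in [0, 1] of shape (B, 3, H, W);
# actions are integer labels recorded from human demonstrations.
frames = torch.rand(8, 3, 240, 320)
actions = torch.randint(0, NUM_ACTIONS, (8,))
print(bc_step(frames, actions))
```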
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- Technical Challenges of Deploying Reinforcement Learning Agents for Game Testing in AAA Games [58.720142291102135]
We describe an effort to add an experimental reinforcement learning system to an existing automated game testing solution based on scripted bots.
We show a use-case of leveraging reinforcement learning in game production and cover some of the largest time sinks anyone who wants to make the same journey for their game may encounter.
We propose a few research directions that we believe will be valuable and necessary for making machine learning, and especially reinforcement learning, an effective tool in game production.
arXiv Detail & Related papers (2023-07-19T18:19:23Z)
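The game-testing entry above describes slotting an experimental RL system into an automated testing pipeline built around scripted bots. The sketch below shows one commonly used integration pattern, wrapping the existing harness behind a Gym-style interface so scripted bots and learned policies are interchangeable; every class and method name here is invented for illustration, not taken from the paper or any studio API.

```python
# Hypothetical sketch: expose an existing scripted-bot test harness through a
# Gym-like reset/step interface so an RL policy can replace a scripted bot.
from dataclasses import dataclass
import random

@dataclass
class StepResult:
    observation: list
    reward: float
    done: bool

class GameTestHarness:
    """Stand-in for an existing automated-testing backend (placeholder logic)."""
    def reset_level(self):
        self.t = 0
        return [0.0, 0.0]

    def apply_action(self, action):
        self.t += 1
        obs = [random.random(), random.random()]
        reward = 1.0 if action == 1 else 0.0      # toy test-coverage signal
        return StepResult(obs, reward, self.t >= 100)

class GameTestEnv:
    """Gym-style wrapper so scripted bots and RL policies share one interface."""
    def __init__(self):
        self.harness = GameTestHarness()

    def reset(self):
        return self.harness.reset_level()

    def step(self, action):
        result = self.harness.apply_action(action)
        return result.observation, result.reward, result.done

def scripted_bot(obs):
    return 1                                       # existing hand-written logic

def run_episode(env, policy):
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

env = GameTestEnv()
print("scripted bot return:", run_episode(env, scripted_bot))
# A learned policy can later replace scripted_bot without touching the harness.
```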
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
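The SPRING entry above describes conditioning an LLM on text from the game's paper and prompting it for chain-of-thought reasoning before it commits to an action. The following is a heavily simplified, hypothetical version of that loop (the released SPRING system is more elaborate); the prompt template, action list, and `query_llm` callable are placeholders.

```python
# Hedged sketch of a SPRING-style decision loop: prompt an LLM with manual
# excerpts plus the current observation, then parse an action from its reply.
from typing import Callable, List

VALID_ACTIONS: List[str] = ["move_left", "move_right", "collect_wood", "place_table"]

PROMPT_TEMPLATE = """You are playing an open-world survival game.
Relevant excerpts from the game's documentation:
{manual}

Current observation (described in text):
{observation}

Think step by step about what to do next, then finish with a line of the form
ACTION: <one of {actions}>.
"""

def choose_action(manual: str, observation: str,
                  query_llm: Callable[[str], str]) -> str:
    prompt = PROMPT_TEMPLATE.format(
        manual=manual, observation=observation, actions=", ".join(VALID_ACTIONS)
    )
    reply = query_llm(prompt)
    # Take the last "ACTION:" line of the chain-of-thought reply, with a fallback.
    for line in reversed(reply.splitlines()):
        if line.strip().upper().startswith("ACTION:"):
            candidate = line.split(":", 1)[1].strip()
            if candidate in VALID_ACTIONS:
                return candidate
    return VALID_ACTIONS[0]

# Usage with a dummy LLM stub; a real agent would call this once per game step.
dummy_llm = lambda prompt: "I need wood before I can craft.\nACTION: collect_wood"
print(choose_action("Wood is gathered from trees.", "A tree is nearby.", dummy_llm))
```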
- Retrospective on the 2021 BASALT Competition on Learning from Human Feedback [92.37243979045817]
The goal of the competition was to promote research towards agents that use learning from human feedback (LfHF) techniques to solve open-world tasks.
Rather than mandating the use of LfHF techniques, we described four tasks in natural language to be accomplished in the video game Minecraft.
Teams developed a diverse range of LfHF algorithms across a variety of possible human feedback types.
arXiv Detail & Related papers (2022-04-14T17:24:54Z)
- Explore and Control with Adversarial Surprise [78.41972292110967]
Reinforcement learning (RL) provides a framework for learning goal-directed policies given user-specified rewards.
We propose a new unsupervised RL technique based on an adversarial game which pits two policies against each other to compete over the amount of surprise an RL agent experiences.
We show that our method leads to the emergence of complex skills by exhibiting clear phase transitions.
arXiv Detail & Related papers (2021-07-12T17:58:40Z)
- Designing a mobile game to generate player data -- lessons learned [2.695466667982714]
We developed a mobile game without the guidance of similar projects.
Research into game balancing and system simulation required an experimental case study.
In creating RPGLite, we learned a series of lessons about effective amateur game development for research purposes.
arXiv Detail & Related papers (2021-01-18T16:16:58Z)
- Navigating the Landscape of Multiplayer Games [20.483315340460127]
We show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games.
We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another.
arXiv Detail & Related papers (2020-05-04T16:58:17Z)
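The multiplayer-games entry above applies network measures to response graphs of empirical games. As an illustration under made-up data (not the paper's games or exact graph construction), the sketch below builds a response graph from a small win-rate matrix and reports a few standard measures with networkx.

```python
# Illustrative sketch: derive a directed response graph from an empirical
# win-rate matrix between strategies, then summarise it with network measures.
import numpy as np
import networkx as nx

strategies = ["rock_bot", "paper_bot", "scissors_bot", "random_bot"]
# win_rate[i, j] = probability that strategy i beats strategy j (assumed data).
win_rate = np.array([
    [0.5, 0.1, 0.9, 0.5],
    [0.9, 0.5, 0.1, 0.5],
    [0.1, 0.9, 0.5, 0.5],
    [0.5, 0.5, 0.5, 0.5],
])

# Directed edge i -> j when switching from i to j is a better response,
# i.e. j beats i more often than not.
graph = nx.DiGraph()
graph.add_nodes_from(strategies)
for i, src in enumerate(strategies):
    for j, dst in enumerate(strategies):
        if i != j and win_rate[j, i] > 0.5:
            graph.add_edge(src, dst)

# Network measures over the response graph characterise the game's "landscape",
# e.g. how strategically cyclic or transitive the empirical game is.
print("in-degree:", dict(graph.in_degree()))
print("pagerank:", nx.pagerank(graph))
print("cycles:", list(nx.simple_cycles(graph)))
```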
- Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks [48.5733173329785]
We present Neural MMO, a massively multiagent game environment inspired by MMOs.
We discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO.
arXiv Detail & Related papers (2020-01-31T18:50:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.