Evaluating the Effects of AI Directors for Quest Selection
- URL: http://arxiv.org/abs/2410.03733v1
- Date: Mon, 30 Sep 2024 18:16:38 GMT
- Title: Evaluating the Effects of AI Directors for Quest Selection
- Authors: Kristen K. Yu, Matthew Guzdial, Nathan Sturtevant
- Abstract summary: We focus on AI Directors, systems that dynamically modify a game to personalize the player experience to match the player's preferences.
Our results show that a non-random AI Director provides a better player experience than a random AI Director.
- Score: 2.3941497253612085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern commercial games are designed for mass appeal, not for individual players, but video games offer a unique opportunity to better fit the individual by adapting game elements. In this paper, we focus on AI Directors, systems that dynamically modify a game to personalize the player experience to match the player's preferences. In the past, some AI Director studies have produced inconclusive results, so their effect on player experience is not clear. We take three AI Directors and directly compare them in a human subject study to test their effectiveness at quest selection. Our results show that a non-random AI Director provides a better player experience than a random AI Director.
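The comparison in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual implementation: the quest features, the preference vector, and the dot-product scoring are all assumptions made for illustration. A non-random AI Director scores candidate quests against an estimated player-preference vector, while the random baseline samples uniformly:

```python
import random

# Hypothetical quest features: each quest is described by how strongly it
# emphasizes combat, exploration, and story (values in [0, 1]).
QUESTS = {
    "clear_the_mine": {"combat": 0.9, "exploration": 0.3, "story": 0.1},
    "map_the_ruins":  {"combat": 0.2, "exploration": 0.9, "story": 0.3},
    "royal_intrigue": {"combat": 0.1, "exploration": 0.2, "story": 0.9},
}

def random_director(quests):
    """Baseline: pick any quest uniformly at random."""
    return random.choice(list(quests))

def preference_director(quests, preferences):
    """Non-random: pick the quest whose features best match the
    estimated player preferences (dot-product score)."""
    def score(name):
        return sum(quests[name][k] * preferences.get(k, 0.0)
                   for k in quests[name])
    return max(quests, key=score)

# A player whose observed behavior suggests they favor story content.
player_prefs = {"combat": 0.1, "exploration": 0.3, "story": 0.8}
print(preference_director(QUESTS, player_prefs))  # -> royal_intrigue
```

The study's question is then whether players notice and prefer the output of `preference_director` over `random_director`; how the real systems estimate preferences and score quests is not specified here.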
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z) - Toward Human-AI Alignment in Large-Scale Multi-Player Games [24.784173202415687]
We analyze extensive human gameplay data from Xbox's Bleeding Edge (100K+ games)
We find that while human players exhibit variability in fight-flight and explore-exploit behavior, AI players tend towards uniformity.
These stark differences underscore the need for interpretable evaluation, design, and integration of AI in human-aligned applications.
arXiv Detail & Related papers (2024-02-05T22:55:33Z) - DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z) - Designing Mixed-Initiative Video Games [0.0]
Snake Story is a mixed-initiative game where players can select AI-generated texts to write a story about a snake by playing a "Snake"-like game.
A controlled experiment was conducted to investigate the dynamics of player-AI interactions with and without the game component in the designed interface.
arXiv Detail & Related papers (2023-07-08T01:45:25Z) - Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback [97.54519989641388]
We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing.
Only a subset of the language models we consider can self-play and improve the deal price from AI feedback.
arXiv Detail & Related papers (2023-05-17T11:55:32Z) - Generative Personas That Behave and Experience Like Humans [3.611888922173257]
Generative AI agents attempt to imitate particular playing behaviors represented as rules, rewards, or human demonstrations.
We extend the notion of behavioral procedural personas to cater for player experience, thus examining generative agents that can both behave and experience their game as humans would.
Our findings suggest that the generated agents exhibit distinctive play styles and experience responses of the human personas they were designed to imitate.
arXiv Detail & Related papers (2022-08-26T12:04:53Z) - AI in Games: Techniques, Challenges and Opportunities [40.86375378643978]
Various game AI systems (AIs) have been developed such as Libratus, OpenAI Five and AlphaStar, beating professional human players.
In this paper, we survey recent successful game AIs, covering board game AIs, card game AIs, first-person shooting game AIs and real time strategy game AIs.
arXiv Detail & Related papers (2021-11-15T09:35:53Z) - Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi [0.0]
We evaluate teams of humans and AI agents in the cooperative card game Hanabi with both rule-based and learning-based agents.
We find that humans have a clear preference toward a rule-based AI teammate over a state-of-the-art learning-based AI teammate.
arXiv Detail & Related papers (2021-07-15T22:19:15Z) - Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
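The gap between model accuracy and team performance described in that entry can be seen with a toy calculation (hypothetical numbers, not results from the paper): if the human only defers to the AI on certain cases, an AI that is less accurate overall but more accurate on exactly the deferred cases yields a better team.

```python
# Toy model of the accept-or-solve-yourself teaming setup (hypothetical
# numbers, for illustration only). The human solves a fraction of tasks
# alone and defers the rest to the AI's recommendation.
def team_accuracy(p_defer, human_acc, ai_acc_on_deferred):
    """Expected team accuracy when the human defers with probability p_defer."""
    return (1 - p_defer) * human_acc + p_defer * ai_acc_on_deferred

# AI "A" is assumed more accurate overall but weak on the hard, deferred
# cases; AI "B" is assumed less accurate overall but strong exactly where
# it is actually used by the team.
human_acc = 0.90
team_a = team_accuracy(p_defer=0.4, human_acc=human_acc, ai_acc_on_deferred=0.70)
team_b = team_accuracy(p_defer=0.4, human_acc=human_acc, ai_acc_on_deferred=0.85)
print(team_a, team_b)  # -> 0.82 0.88: team B wins despite lower overall accuracy
```

This is only a sketch of the intuition; the paper's actual models, datasets, and deferral behavior are not reproduced here.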
arXiv Detail & Related papers (2020-04-27T19:06:28Z) - Suphx: Mastering Mahjong with Deep Reinforcement Learning [114.68233321904623]
We design an AI for Mahjong, named Suphx, based on deep reinforcement learning with some newly introduced techniques.
Suphx has demonstrated stronger performance than most top human players in terms of stable rank.
This is the first time that a computer program outperforms most top human players in Mahjong.
arXiv Detail & Related papers (2020-03-30T16:18:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.