Diversifying AI: Towards Creative Chess with AlphaZero
- URL: http://arxiv.org/abs/2308.09175v2
- Date: Tue, 29 Aug 2023 20:33:12 GMT
- Title: Diversifying AI: Towards Creative Chess with AlphaZero
- Authors: Tom Zahavy, Vivek Veeriah, Shaobo Hou, Kevin Waugh, Matthew Lai,
Edouard Leurent, Nenad Tomasev, Lisa Schut, Demis Hassabis, and Satinder
Singh
- Abstract summary: We study whether a team of diverse AI systems can outperform a single AI in challenging tasks by generating more ideas as a group and then selecting the best ones.
Our experiments suggest that AZ_db plays chess in diverse ways, solves more puzzles as a group and outperforms a more homogeneous team.
Our findings suggest that diversity bonuses emerge in teams of AI agents, just as they do in teams of humans.
- Score: 22.169342583475938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Artificial Intelligence (AI) systems have surpassed human
intelligence in a variety of computational tasks. However, AI systems, like
humans, make mistakes, have blind spots, hallucinate, and struggle to
generalize to new situations. This work explores whether AI can benefit from
creative decision-making mechanisms when pushed to the limits of its
computational rationality. In particular, we investigate whether a team of
diverse AI systems can outperform a single AI in challenging tasks by
generating more ideas as a group and then selecting the best ones. We study
this question in the game of chess, the so-called Drosophila of AI. We build on
AlphaZero (AZ) and extend it to represent a league of agents via a
latent-conditioned architecture, which we call AZ_db. We train AZ_db to
generate a wider range of ideas using behavioral diversity techniques and
select the most promising ones with sub-additive planning. Our experiments
suggest that AZ_db plays chess in diverse ways, solves more puzzles as a group
and outperforms a more homogeneous team. Notably, AZ_db solves twice as many
challenging puzzles as AZ, including the Penrose positions. When
playing chess from different openings, we notice that players in AZ_db
specialize in different openings, and that selecting a player for each opening
using sub-additive planning results in a 50 Elo improvement over AZ. Our
findings suggest that diversity bonuses emerge in teams of AI agents, just as
they do in teams of humans, and that diversity is a valuable asset in solving
computationally hard problems.
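The two mechanisms the abstract names, a latent-conditioned architecture and sub-additive planning, can be sketched compactly. The Python below is a minimal illustration under stated assumptions, not the authors' implementation: `LatentConditionedPolicy`, `pick_specialists`, and all shapes and scores are hypothetical.

```python
import numpy as np

# Hypothetical sketch of a latent-conditioned league in the spirit of
# AZ_db: one shared set of weights, with each player's identity injected
# as a learned embedding. All names and shapes are illustrative.
class LatentConditionedPolicy:
    def __init__(self, num_agents, feature_dim, num_moves, seed=0):
        rng = np.random.default_rng(seed)
        self.agent_embed = rng.normal(size=(num_agents, feature_dim))
        self.W = rng.normal(size=(feature_dim, num_moves))

    def move_probs(self, board_features, agent_id):
        # Condition the shared "network" on the agent's latent vector.
        z = board_features + self.agent_embed[agent_id]
        logits = z @ self.W
        exp = np.exp(logits - logits.max())  # numerically stable softmax
        return exp / exp.sum()

# Sub-additive selection (sketch): instead of pooling the league's
# opinions, evaluate each player per opening and keep the specialist.
def pick_specialists(scores):
    # scores[opening][agent_id] -> estimated expected score for that
    # agent from that opening (e.g. from evaluation games).
    return {opening: max(per_agent, key=per_agent.get)
            for opening, per_agent in scores.items()}

policy = LatentConditionedPolicy(num_agents=4, feature_dim=8, num_moves=3)
print(policy.move_probs(np.ones(8), agent_id=1))  # distribution over 3 moves
print(pick_specialists({"Sicilian": {0: 0.48, 1: 0.55},
                        "French":   {0: 0.52, 1: 0.47}}))
# -> {'Sicilian': 1, 'French': 0}
```

In this sketch, diversity is parameter-cheap: one set of shared weights serves every player, and the selection step needs only per-opening score estimates rather than extra training.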
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Teamwork under extreme uncertainty: AI for Pokemon ranks 33rd in the world [0.0]
This paper describes the mechanics of the game and presents a game analysis.
We propose unique AI algorithms based on our understanding that the two biggest challenges in the game are keeping a balanced team and dealing with three sources of uncertainty.
Our AI agent performed significantly better than all previous attempts and peaked at 33rd place in the world in one of the most popular battle formats, while running on only four single-socket servers.
arXiv Detail & Related papers (2022-12-27T01:52:52Z)
- AI in Games: Techniques, Challenges and Opportunities [40.86375378643978]
Various game AI systems (AIs), such as Libratus, OpenAI Five, and AlphaStar, have been developed and have beaten professional human players.
In this paper, we survey recent successful game AIs, covering board game AIs, card game AIs, first-person shooter game AIs, and real-time strategy game AIs.
arXiv Detail & Related papers (2021-11-15T09:35:53Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups, people with and without an AI background, perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons, and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- Hybrid Intelligence [4.508830262248694]
We argue that the most likely paradigm for the division of labor between humans and machines in the next decades is Hybrid Intelligence.
This concept aims at using the complementary strengths of human intelligence and AI, so that they can perform better than each of the two could separately.
arXiv Detail & Related papers (2021-05-03T08:56:09Z)
- Elo Ratings for Large Tournaments of Software Agents in Asymmetric Games [0.0]
It is natural to evaluate artificial intelligence agents on the same Elo scale as humans, such as the rating of 5185 attributed to AlphaGo Zero.
There are several fundamental differences between humans and AI that suggest modifications to the system.
We present a revised rating system, and guidelines for tournaments, to reflect these differences.
arXiv Detail & Related papers (2021-04-23T21:49:20Z)
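For context, the update rule such a revised rating system starts from is the standard Elo formula; the sketch below uses the textbook equations, not the paper's revision:

```python
def elo_expected(r_a, r_b):
    """Expected score of A against B under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=16.0):
    """New rating for A; score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# At a 2385-point gap (e.g. 5185 vs. 2800) the expected score is about
# 0.999999, so even a win shifts the rating by well under 0.001 points,
# one reason very large agent ratings are hard to calibrate or interpret.
print(elo_update(5185.0, 2800.0, 1.0) - 5185.0)
```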
- Aligning Superhuman AI with Human Behavior: Chess as a Model System [5.236087378443016]
We develop Maia, a customized version of AlphaZero trained on human chess games, which predicts human moves with much higher accuracy than existing engines.
For the dual task of predicting whether a human will make a large mistake on the next move, we develop a deep neural network that significantly outperforms competitive baselines.
arXiv Detail & Related papers (2020-06-02T18:12:52Z)
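A toy illustration of the move-matching evaluation behind such accuracy claims; the function name and data are hypothetical, not Maia's pipeline:

```python
def move_match_accuracy(predicted, actual):
    """Fraction of positions where the model's top move equals the human's move."""
    assert len(predicted) == len(actual) and len(actual) > 0
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# UCI-style move strings (hypothetical data): the model matches the
# human on two of three positions.
print(move_match_accuracy(["e2e4", "g1f3", "d2d4"],
                          ["e2e4", "g1f3", "c2c4"]))  # 0.666...
```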
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)