Explaining How a Neural Network Play the Go Game and Let People Learn
- URL: http://arxiv.org/abs/2310.09838v1
- Date: Sun, 15 Oct 2023 13:57:50 GMT
- Title: Explaining How a Neural Network Play the Go Game and Let People Learn
- Authors: Huilin Zhou, Huijie Tang, Mingjie Li, Hao Zhang, Zhenyu Liu, Quanshi Zhang
- Abstract summary: The AI model has surpassed human players in the game of Go.
It is widely believed that the model has encoded new knowledge about the game that goes beyond human players' understanding.
- Score: 26.192580802652742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The AI model has surpassed human players in the game of Go, and it is widely
believed that the model has encoded new knowledge about the game beyond what human
players possess. Explaining the knowledge encoded by the AI model and using it to teach
human players is therefore a promising yet challenging problem in explainable AI. To this
end, mathematical support is required to ensure that human players learn accurate and
verifiable knowledge, rather than specious intuitive analysis. Thus, in this paper, we
extract interaction primitives between stones encoded by the value network for the Go
game, so as to enable people to learn from the value network. Experiments show the
effectiveness of our method.
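To make "interaction primitives" concrete: in this line of work, the interaction within a subset S of stones is typically quantified as a Harsanyi dividend, I(S) = sum over T ⊆ S of (-1)^(|S|-|T|) v(T), where v(T) is the value network's output on a board containing only the stones in T. Below is a minimal, hypothetical sketch of that computation; the callable `value_net`, the function names, and the brute-force enumeration are illustrative assumptions, not the authors' implementation (which would require a far more efficient, sparse extraction than exhaustive enumeration).

```python
from itertools import combinations

def harsanyi_interaction(value_net, stones, subset):
    """Harsanyi dividend I(S) = sum_{T subset of S} (-1)^(|S|-|T|) v(T).

    `value_net` is a hypothetical callable that scores a board
    containing only the given stones; `subset` is a tuple of indices
    into `stones`.
    """
    total = 0.0
    for size in range(len(subset) + 1):
        for t in combinations(subset, size):
            # Alternating sign implements the inclusion-exclusion sum.
            sign = (-1) ** (len(subset) - len(t))
            total += sign * value_net([stones[i] for i in t])
    return total

def extract_primitives(value_net, stones, max_order=3, top_k=10):
    """Enumerate low-order subsets and keep the strongest interactions."""
    scored = []
    for order in range(1, max_order + 1):
        for subset in combinations(range(len(stones)), order):
            scored.append((subset, harsanyi_interaction(value_net, stones, subset)))
    # Subsets with the largest absolute interaction strength are the
    # salient stone patterns a human player might study.
    return sorted(scored, key=lambda p: abs(p[1]), reverse=True)[:top_k]
```

Under this reading, the subsets with large |I(S)| act as candidate "shapes" whose verified effect on the value network's evaluation can be taught to human players.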
Related papers
- WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models [91.92346150646007]
In this work, we introduce WinoGAViL: an online game to collect vision-and-language associations.
We use the game to collect 3.5K instances, finding that they are intuitive for humans but challenging for state-of-the-art AI models.
Our analysis as well as the feedback we collect from players indicate that the collected associations require diverse reasoning skills.
arXiv Detail & Related papers (2022-07-25T23:57:44Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Instructive artificial intelligence (AI) for human training, assistance, and explainability [0.24629531282150877]
We show how a neural network might instruct human trainees, as an alternative to traditional approaches to explainable AI (XAI).
An AI examines human actions and calculates variations on the human strategy that lead to better performance.
Results will be presented on AI instruction's ability to improve human decision-making and human-AI teaming in Hanabi.
arXiv Detail & Related papers (2021-11-02T16:46:46Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- AI in (and for) Games [0.9920773256693857]
This chapter outlines the relation between artificial intelligence (AI) / machine learning (ML) algorithms and digital games.
On one hand, AI/ML researchers can generate large, in-the-wild datasets of human affective activity and player behaviour.
On the other hand, games can utilise intelligent algorithms to automate the testing of game levels, generate content, develop intelligent and responsive non-player characters (NPCs), or predict and respond to player behaviour.
arXiv Detail & Related papers (2021-05-07T08:57:07Z)
- Player-AI Interaction: What Neural Network Games Reveal About AI as Play [14.63311356668699]
This paper argues that games are an ideal domain for studying and experimenting with how humans interact with AI.
Through a systematic survey of neural network games, we identified the dominant interaction metaphors and AI interaction patterns.
Our work suggests that game and UX designers should consider flow to structure the learning curve of human-AI interaction.
arXiv Detail & Related papers (2021-01-15T17:07:03Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new machine learning model that learns by imitation from linguistic descriptions of complex phenomena.
The method can be a good alternative for designing and implementing the behaviour of intelligent agents in video game development (a minimal sketch of such tree-based imitation follows this entry).
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
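As a toy illustration of the tree-based imitation idea in the entry above, here is a hedged sketch using scikit-learn; the feature set, data, and action labels are invented stand-ins for the paper's linguistic descriptions of game states, not its actual pipeline.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row is a (hypothetical) described game state; each label is the
# action a human player took in that state.
states = [[0.9, 0.1, 3], [0.2, 0.8, 1], [0.5, 0.5, 2], [0.1, 0.9, 0]]
actions = ["attack", "retreat", "hold", "retreat"]

agent = DecisionTreeClassifier(max_depth=3).fit(states, actions)

# The fitted tree both drives the NPC and stays human-readable, which
# is the appeal of this approach for game development.
print(export_text(agent, feature_names=["my_health", "enemy_health", "allies"]))
print(agent.predict([[0.3, 0.7, 1]]))  # imitated action for a new state
```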
- Explainability via Responsibility [0.9645196221785693]
We present an approach to explainable artificial intelligence in which certain training instances are offered to human users.
We evaluate this approach by approximating its ability to provide human users with explanations of an AI agent's actions.
arXiv Detail & Related papers (2020-10-04T20:41:03Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on domains such as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows both benefits of AI explanations as interfaces for machine teaching, namely supporting trust calibration and enabling rich forms of teaching feedback, and potential drawbacks, namely an anchoring effect on the model's judgments and increased cognitive workload (a minimal sketch of one XAL round follows this list).
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
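For illustration, one round of an XAL-style loop might look like the sketch below. The logistic-regression model, the least-confidence query rule, and the coefficient-times-feature "explanation" are stand-in assumptions; the paper studies local explanation techniques with human annotators rather than this exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def xal_round(model, X_labeled, y_labeled, X_pool, feature_names):
    """One explainable-active-learning round: query, explain, label."""
    model.fit(X_labeled, y_labeled)
    # Least-confidence sampling: query the pool instance whose top
    # predicted class probability is lowest.
    proba = model.predict_proba(X_pool)
    idx = int(np.argmin(proba.max(axis=1)))
    x = X_pool[idx]
    # Stand-in "local explanation": per-feature contribution of the
    # linear model on this instance (coefficient * feature value).
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"{name}: {c:+.3f}")
    # The annotator reviews the explanation before labelling, which is
    # what lets such a study measure trust calibration and anchoring.
    label = int(input(f"Label for queried instance {idx} (0/1): "))
    return idx, label

# Usage (hypothetical data): model = LogisticRegression(max_iter=1000),
# then call xal_round repeatedly, appending each queried (x, label) to
# the labeled set.
```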
This list is automatically generated from the titles and abstracts of the papers on this site.