Understanding Game-Playing Agents with Natural Language Annotations
- URL: http://arxiv.org/abs/2204.07531v1
- Date: Fri, 15 Apr 2022 16:11:08 GMT
- Title: Understanding Game-Playing Agents with Natural Language Annotations
- Authors: Nicholas Tomlin, Andre He, Dan Klein
- Abstract summary: We present a new dataset containing 10K human-annotated games of Go.
We show how these natural language annotations can be used as a tool for model interpretability.
- Score: 34.66200889614538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a new dataset containing 10K human-annotated games of Go and show
how these natural language annotations can be used as a tool for model
interpretability. Given a board state and its associated comment, our approach
uses linear probing to predict mentions of domain-specific terms (e.g., ko,
atari) from the intermediate state representations of game-playing agents like
AlphaGo Zero. We find these game concepts are nontrivially encoded in two
distinct policy networks, one trained via imitation learning and another
trained via reinforcement learning. Furthermore, mentions of domain-specific
terms are most easily predicted from the later layers of both models,
suggesting that these policy networks encode high-level abstractions similar to
those used in the natural language annotations.
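To make the probing setup concrete, here is a minimal sketch of a linear probe, assuming we already have intermediate activations from one layer of a policy network and binary labels marking whether each position's comment mentions a given term (e.g., "atari"); the arrays below are placeholders, not the paper's data or code.

```python
# Minimal linear-probing sketch (illustrative; array names and shapes are assumptions).
# activations: (n_positions, d) hidden states from one layer of a policy network.
# mentions_atari: (n_positions,) binary labels -- does the comment mention "atari"?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 256))      # placeholder for real activations
mentions_atari = rng.integers(0, 2, size=1000)  # placeholder for real labels

X_train, X_test, y_train, y_test = train_test_split(
    activations, mentions_atari, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000)        # a linear probe
probe.fit(X_train, y_train)
print("probe F1:", f1_score(y_test, probe.predict(X_test)))
```

Refitting the same probe on activations from each layer and comparing scores is how the layer-wise trend described in the abstract would be measured.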
Related papers
- Disentangling Dense Embeddings with Sparse Autoencoders [0.0]
Sparse autoencoders (SAEs) have shown promise in extracting interpretable features from complex neural networks.
We present one of the first applications of SAEs to dense text embeddings from large language models.
We show that the resulting sparse representations maintain semantic fidelity while offering interpretability.
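As a rough illustration of the sparse-autoencoder idea applied to dense embeddings (a sketch under assumed dimensions and penalty weight, not the paper's implementation), an overcomplete code with an L1 penalty is trained to reconstruct the embeddings:

```python
# Minimal sparse autoencoder over dense text embeddings (illustrative sketch).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in=768, d_code=4096):
        super().__init__()
        self.enc = nn.Linear(d_in, d_code)   # overcomplete code
        self.dec = nn.Linear(d_code, d_in)

    def forward(self, x):
        code = torch.relu(self.enc(x))       # non-negative sparse features
        return self.dec(code), code

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
embeddings = torch.randn(64, 768)            # placeholder for real text embeddings

for _ in range(100):
    recon, code = sae(embeddings)
    loss = ((recon - embeddings) ** 2).mean() + 1e-3 * code.abs().mean()  # MSE + L1 sparsity
    opt.zero_grad(); loss.backward(); opt.step()
```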
arXiv Detail & Related papers (2024-08-01T15:46:22Z)
- player2vec: A Language Modeling Approach to Understand Player Behavior in Games [2.2216044069240657]
Methods for learning latent user representations from historical behavior logs have gained traction for recommendation tasks in e-commerce, content streaming, and other settings.
We present a novel method for applying this paradigm to the gaming domain by extending a long-range Transformer model to player behavior data.
We discuss specifics of behavior tracking in games and propose preprocessing and tokenization approaches by viewing in-game events in an analogous way to words in sentences.
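The events-as-words idea can be pictured with a toy tokenizer (the event schema and token format below are invented for illustration; they are not the paper's preprocessing):

```python
# Toy tokenization of in-game events into a "sentence" of tokens (illustrative).
from collections import defaultdict

def event_to_token(event: dict) -> str:
    # Flatten an event into a single symbol, e.g. "item=gold_pack|type=purchase".
    return "|".join(f"{k}={event[k]}" for k in sorted(event))

class Vocab:
    def __init__(self):
        self.ids = defaultdict(lambda: len(self.ids))
    def encode(self, tokens):
        return [self.ids[t] for t in tokens]

session = [
    {"type": "login", "platform": "mobile"},
    {"type": "match_start", "mode": "ranked"},
    {"type": "purchase", "item": "gold_pack"},
]
vocab = Vocab()
token_ids = vocab.encode(event_to_token(e) for e in session)
print(token_ids)  # [0, 1, 2]; such ID sequences would feed a long-range Transformer
```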
arXiv Detail & Related papers (2024-04-05T17:29:47Z)
- Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning [63.148199057487226]
We propose a modular, NEuroSymbolic Textual Agent (NESTA) that combines a general-purpose semantic parser with a rule induction system to learn interpretable rules as policies.
Our experiments show that the proposed NESTA method outperforms deep reinforcement learning-based techniques by achieving better generalization to unseen test games and learning from fewer training interactions.
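As a toy illustration of "interpretable rules as policies" (not NESTA's rule language or induction procedure; the predicates and actions are made up), each rule pairs symbolic preconditions with an action, and the policy fires the first rule whose preconditions hold in the parsed state:

```python
# Toy symbolic-rule policy (illustrative only; predicates and actions are invented).
from dataclasses import dataclass

@dataclass
class Rule:
    preconditions: frozenset   # predicates that must hold in the parsed state
    action: str

RULES = [
    Rule(frozenset({"at(kitchen)", "closed(fridge)"}), "open fridge"),
    Rule(frozenset({"at(kitchen)", "open(fridge)"}), "take apple from fridge"),
    Rule(frozenset(), "look"),   # fallback rule with no preconditions
]

def rule_policy(state_predicates: set) -> str:
    for rule in RULES:
        if rule.preconditions <= state_predicates:   # all preconditions satisfied
            return rule.action
    return "wait"

print(rule_policy({"at(kitchen)", "closed(fridge)"}))   # -> "open fridge"
```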
arXiv Detail & Related papers (2023-07-05T23:21:05Z)
- Bidirectional Representations for Low Resource Spoken Language Understanding [39.208462511430554]
We propose a representation model to encode speech in bidirectional rich encodings.
The approach uses a masked language modelling objective to learn the representations.
We show that the performance of the resulting encodings is better than comparable models on multiple datasets.
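A compressed sketch of a masked-modelling objective over speech features follows; the feature dimensions, masking rate, and reconstruction loss are assumptions for illustration rather than the paper's configuration:

```python
# Minimal masked-frame modelling sketch for speech features (illustrative).
import torch
import torch.nn as nn

d_model, seq_len, batch = 80, 200, 8
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=4)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

features = torch.randn(batch, seq_len, d_model)   # placeholder acoustic features
mask = torch.rand(batch, seq_len) < 0.15          # mask ~15% of frames
inputs = features.clone()
inputs[mask] = 0.0                                 # simple zero-out masking

encoded = encoder(inputs)                          # bidirectional contextual encodings
loss = ((encoded[mask] - features[mask]) ** 2).mean()  # reconstruct only masked frames
opt.zero_grad(); loss.backward(); opt.step()
```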
arXiv Detail & Related papers (2022-11-24T17:05:16Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not represent natural language semantics well.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Linking Emergent and Natural Languages via Corpus Transfer [98.98724497178247]
We propose a novel way to establish a link by corpus transfer between emergent languages and natural languages.
Our approach showcases non-trivial transfer benefits for two different tasks -- language modeling and image captioning.
We also introduce a novel metric to predict the transferability of an emergent language by translating emergent messages to natural language captions grounded on the same images.
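A compressed sketch of the corpus-transfer setup (toy model and placeholder corpora, not the paper's experiments): pretrain a small language model on one corpus, fine-tune it on a second, and compare against training on the second corpus from scratch:

```python
# Corpus-transfer sketch: pretrain a tiny LM on an "emergent" corpus, then
# fine-tune on a "natural" corpus (both are random placeholders here).
import math
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size, d=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.rnn = nn.LSTM(d, d, batch_first=True)
        self.out = nn.Linear(d, vocab_size)
    def forward(self, x):                    # x: (batch, seq) token ids
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

def train_lm(model, batches, steps=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(steps):
        x = batches[step % len(batches)]
        logits = model(x[:, :-1])
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), x[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
    return math.exp(loss.item())             # rough perplexity on the last batch

vocab = 100
emergent = [torch.randint(0, vocab, (8, 20)) for _ in range(10)]  # placeholder corpus A
natural  = [torch.randint(0, vocab, (8, 20)) for _ in range(10)]  # placeholder corpus B

transferred = TinyLM(vocab)
train_lm(transferred, emergent)                 # pretrain on the emergent corpus
ppl_transfer = train_lm(transferred, natural)   # fine-tune on the natural corpus
ppl_scratch  = train_lm(TinyLM(vocab), natural) # baseline: natural corpus only
print(ppl_transfer, ppl_scratch)
```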
arXiv Detail & Related papers (2022-03-24T21:24:54Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
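One way to picture the graph-encoder step (a simplified sketch; the residual fusion and normalization choices here are assumptions, not the paper's exact architecture) is a graph convolution that propagates token representations along semantic-dependency edges:

```python
# Minimal graph-convolution sketch over a semantic-dependency graph (illustrative).
import torch
import torch.nn as nn

def normalize_adjacency(adj):
    adj = adj + torch.eye(adj.size(0))       # add self-loops
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ adj @ d_inv_sqrt     # symmetric normalization

class GraphConv(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, x, adj_hat):           # x: (n_tokens, d_in)
        return torch.relu(adj_hat @ self.lin(x))

n_tokens, d = 6, 768
token_states = torch.randn(n_tokens, d)      # placeholder contextual token embeddings
adj = torch.zeros(n_tokens, n_tokens)
adj[0, 2] = adj[2, 0] = 1.0                  # a couple of semantic-dependency edges
adj[2, 4] = adj[4, 2] = 1.0

gcn = GraphConv(d, d)
fused = token_states + gcn(token_states, normalize_adjacency(adj))  # residual fusion
```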
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- ALICE: Active Learning with Contrastive Natural Language Explanations [69.03658685761538]
We propose Active Learning with Contrastive Explanations (ALICE) to improve data efficiency in learning.
ALICE learns to first use active learning to select the most informative pairs of label classes to elicit contrastive natural language explanations.
It then extracts knowledge from these explanations using a semantic parser.
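The class-pair selection step can be sketched as follows; the confusion-matrix heuristic is one plausible informativeness criterion used for illustration, not necessarily ALICE's exact selection rule:

```python
# Pick the most-confused pair of classes from validation predictions (illustrative).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])   # placeholder validation labels
y_pred = np.array([0, 1, 1, 2, 2, 1, 2, 2, 0, 1])   # placeholder model predictions

cm = confusion_matrix(y_true, y_pred)
pair_confusion = cm + cm.T                   # symmetrize: errors in both directions
np.fill_diagonal(pair_confusion, 0)          # ignore correct predictions
i, j = np.unravel_index(np.argmax(pair_confusion), pair_confusion.shape)
print(f"elicit a contrastive explanation for classes {i} vs {j}")
```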
arXiv Detail & Related papers (2020-09-22T01:02:07Z)
- Grounded Language Learning Fast and Slow [23.254765095715054]
We show that an embodied agent can exhibit similar one-shot word learning when trained with conventional reinforcement learning algorithms.
We find that, under certain training conditions, the agent's one-shot word-object binding generalizes to novel exemplars within the same ShapeNet category.
We further show how dual-coding memory can be exploited as a signal for intrinsic motivation, stimulating the agent to seek names for objects that may be useful for executing later instructions.
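A minimal sketch of a dual-coding-style memory follows (the storage and retrieval scheme is an assumption for illustration, not the paper's architecture): paired visual and language embeddings are written at naming events, and either modality can later be retrieved from the other by similarity.

```python
# Toy dual-coding memory: paired visual/language embeddings with cross-modal lookup.
import numpy as np

class DualCodingMemory:
    def __init__(self):
        self.visual, self.language = [], []

    def write(self, visual_vec, language_vec):
        self.visual.append(visual_vec)
        self.language.append(language_vec)

    def recall_language(self, visual_query):
        # Return the stored language code whose paired visual code is most similar.
        sims = [self._cos(visual_query, v) for v in self.visual]
        return self.language[int(np.argmax(sims))]

    @staticmethod
    def _cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

memory = DualCodingMemory()
memory.write(np.random.randn(16), np.random.randn(16))   # placeholder naming episode 1
memory.write(np.random.randn(16), np.random.randn(16))   # placeholder naming episode 2
retrieved = memory.recall_language(np.random.randn(16))   # query with a new object view
```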
arXiv Detail & Related papers (2020-09-03T14:52:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.