Improving Chess Commentaries by Combining Language Models with Symbolic
Reasoning Engines
- URL: http://arxiv.org/abs/2212.08195v1
- Date: Thu, 15 Dec 2022 23:38:31 GMT
- Title: Improving Chess Commentaries by Combining Language Models with Symbolic
Reasoning Engines
- Authors: Andrew Lee, David Wu, Emily Dinan, Mike Lewis
- Abstract summary: We show how to combine symbolic reasoning engines with controllable language models to generate chess commentaries.
We conduct experiments to demonstrate that our approach generates commentaries preferred by human judges over previous baselines.
- Score: 31.87260568733666
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite many recent advancements in language modeling, state-of-the-art
language models lack grounding in the real world and struggle with tasks
involving complex reasoning. Meanwhile, advances in the symbolic reasoning
capabilities of AI have led to systems that outperform humans in games like
chess and Go (Silver et al., 2018). Chess commentary provides an interesting
domain for bridging these two fields of research, as it requires reasoning over
a complex board state and providing analyses in natural language. In this work
we demonstrate how to combine symbolic reasoning engines with controllable
language models to generate chess commentaries. We conduct experiments to
demonstrate that our approach generates commentaries that are preferred by
human judges over previous baselines.
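The abstract does not spell out the pipeline, but the core idea of grounding a controllable language model in a symbolic engine's output can be illustrated with a minimal sketch. The snippet below uses python-chess to query a UCI engine (assumed here to be a Stockfish binary on the local PATH) for an evaluation and principal variation, then packs those grounded features into a prompt; the prompt template and the commented-out `generate_commentary` call are illustrative placeholders, not the authors' actual model interface.

```python
# Minimal sketch: ground a commentary prompt in symbolic engine analysis.
# Assumes python-chess is installed and a UCI engine binary ("stockfish")
# is available on PATH; the language-model call is a placeholder.
import chess
import chess.engine


def engine_features(board: chess.Board, depth: int = 15) -> dict:
    """Query the symbolic engine for an evaluation and principal variation."""
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
    score = info["score"].white().score(mate_score=10000)  # centipawns, White's view
    pv = info.get("pv", [])
    return {
        "eval_cp": score,
        "best_line": board.variation_san(pv) if pv else "",
        "side_to_move": "White" if board.turn == chess.WHITE else "Black",
    }


def build_prompt(board: chess.Board, last_move_san: str, feats: dict) -> str:
    """Assemble engine-grounded features into a commentary prompt.

    The template is illustrative only, not the one used in the paper.
    """
    return (
        f"Position (FEN): {board.fen()}\n"
        f"Last move played: {last_move_san}\n"
        f"Engine evaluation: {feats['eval_cp']} centipawns for White\n"
        f"Engine's best continuation: {feats['best_line']}\n"
        f"Write a short commentary on the last move for a club-level audience:"
    )


if __name__ == "__main__":
    board = chess.Board()
    move = board.parse_san("e4")        # the move to be commented on
    last_move_san = board.san(move)
    board.push(move)
    prompt = build_prompt(board, last_move_san, engine_features(board))
    print(prompt)
    # commentary = generate_commentary(prompt)  # hypothetical LM call, not shown here
```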
Related papers
- Explore the Reasoning Capability of LLMs in the Chess Testbed [45.12891789312405]
We propose improving the reasoning capability of large language models in chess by integrating annotated strategies and tactics.
We finetune the LLaMA-3-8B model and compare it against state-of-the-art commercial language models in the task of selecting better chess moves.
arXiv Detail & Related papers (2024-11-11T01:42:56Z) - Large Language Models on the Chessboard: A Study on ChatGPT's Formal
Language Comprehension and Complex Reasoning Skills [4.138999291282392]
This paper probes the performance of ChatGPT, a sophisticated language model by OpenAI.
We assess ChatGPT's understanding of the chessboard, adherence to chess rules, and strategic decision-making abilities.
Our study also reveals ChatGPT's propensity for a coherent strategy in its gameplay and a noticeable uptick in decision-making assertiveness.
arXiv Detail & Related papers (2023-08-29T08:36:30Z) - From Word Models to World Models: Translating from Natural Language to
the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z) - Emergent Communication of Generalizations [13.14792537601313]
We argue that communicating about a single object in a shared visual context is prone to overfitting and does not encourage language useful beyond concrete reference.
We propose games that require communicating generalizations over sets of objects representing abstract visual concepts.
We find that these games greatly improve systematicity and interpretability of the learned languages.
arXiv Detail & Related papers (2021-06-04T19:02:18Z) - Learning Chess Blindfolded: Evaluating Language Models on State Tracking [69.3794549747725]
We consider the task of language modeling for the game of chess.
Unlike natural language, chess notations describe a simple, constrained, and deterministic domain.
We find that transformer language models can learn to track pieces and predict legal moves with high accuracy when trained solely on move sequences.
arXiv Detail & Related papers (2021-02-26T01:16:23Z) - Teach me to play, gamer! Imitative learning in computer games via
linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new imitation-learning model based on linguistic descriptions of complex phenomena.
The method offers a practical alternative for designing and implementing the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z) - Deep Reinforcement Learning with Stacked Hierarchical Attention for
Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z) - Incorporating Pragmatic Reasoning Communication into Emergent Language [38.134221799334426]
We study the dynamics of linguistic communication between agents of substantially different intelligence levels.
We propose computational models that combine short-term mutual reasoning-based pragmatics with long-term language emergentism.
Our results shed light on the importance of these mechanisms for producing more natural, accurate, robust, fine-grained, and succinct utterances.
arXiv Detail & Related papers (2020-06-07T10:31:06Z)