From Images to Connections: Can DQN with GNNs learn the Strategic Game of Hex?
- URL: http://arxiv.org/abs/2311.13414v1
- Date: Wed, 22 Nov 2023 14:20:15 GMT
- Title: From Images to Connections: Can DQN with GNNs learn the Strategic Game of Hex?
- Authors: Yannik Keller, Jannis Blüml, Gopika Sudhakaran and Kristian Kersting
- Abstract summary: We investigate whether graph neural networks (GNNs) can replace convolutional neural networks (CNNs) in self-play reinforcement learning.
GNNs excel at dealing with long-range dependencies in game states and are less prone to overfitting.
This suggests a potential paradigm shift, signaling the use of game-specific structures to reshape self-play reinforcement learning.
- Score: 22.22813915303447
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The gameplay of strategic board games such as chess, Go and Hex is often
characterized by combinatorial, relational structures -- capturing distinct
interactions and non-local patterns -- and not just images. Nonetheless, most
common self-play reinforcement learning (RL) approaches simply approximate
policy and value functions using convolutional neural networks (CNN). A key
feature of CNNs is their relational inductive bias towards locality and
translational invariance. In contrast, graph neural networks (GNN) can encode
more complicated and distinct relational structures. Hence, we investigate the
crucial question: Can GNNs, with their ability to encode complex connections,
replace CNNs in self-play reinforcement learning? To this end, we do a
comparison with Hex -- an abstract yet strategically rich board game -- serving
as our experimental platform. Our findings reveal that GNNs excel at dealing
with long-range dependencies in game states and are less prone to
overfitting, but they also show reduced proficiency in discerning local
patterns. This suggests a potential paradigm shift, signaling the use of
game-specific structures to reshape self-play reinforcement learning.
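As an illustration of the "game-specific structures" the abstract refers to, here is a minimal sketch of encoding the cell connectivity of an n x n Hex board as a graph edge list, the kind of input a GNN could consume instead of an image. This is an assumption-laden sketch for intuition, not the paper's actual board encoding.

```python
# Sketch: an n x n Hex board as a graph (illustrative only; the
# paper's actual encoding may differ). On the rhombus-shaped Hex
# board, each cell has up to six neighbors.

def hex_neighbors(r, c, n):
    """Six-connected neighborhood of cell (r, c) on an n x n Hex board."""
    offsets = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)]
    return [(r + dr, c + dc) for dr, dc in offsets
            if 0 <= r + dr < n and 0 <= c + dc < n]

def board_to_edge_list(n):
    """Undirected edge list over cells indexed as r * n + c."""
    edges = set()
    for r in range(n):
        for c in range(n):
            u = r * n + c
            for nr, nc in hex_neighbors(r, c, n):
                v = nr * n + nc
                edges.add((min(u, v), max(u, v)))
    return sorted(edges)

# Edge list for a 3 x 3 Hex board.
edges = board_to_edge_list(3)
```

A GNN's message passing would then aggregate per-cell features (empty / player 1 / player 2 stones) along these edges, making the board's connection structure explicit rather than leaving it implicit in a pixel grid as a CNN does.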
Related papers
- Enhancing Chess Reinforcement Learning with Graph Representation [21.919003715442074]
We introduce a more general architecture based on Graph Neural Networks (GNN)
We show that this new architecture outperforms previous architectures with a similar number of parameters.
We also show that the model, when trained on a smaller $5\times 5$ variant of chess, can be quickly fine-tuned to play on regular $8\times 8$ chess.
arXiv Detail & Related papers (2024-10-31T09:18:47Z)
- How Graph Neural Networks Learn: Lessons from Training Dynamics [80.41778059014393]
We study the training dynamics in function space of graph neural networks (GNNs)
We find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function.
This finding offers new interpretable insights into when and why the learned GNN functions generalize.
arXiv Detail & Related papers (2023-10-08T10:19:56Z)
- Graph Neural Network based Agent in Google Research Football [0.5735035463793007]
Some deep neural networks, such as CNNs, cannot extract enough information, or take too long to obtain enough features from their inputs, under specific reinforcement learning circumstances.
This paper proposes a deep q-learning network (DQN) with a graph neural network (GNN) as its model.
The GNN transforms the input data into a graph that better represents the football players' locations, allowing it to extract more information about the interactions between different players.
arXiv Detail & Related papers (2022-04-23T21:26:00Z)
- MGDCF: Distance Learning via Markov Graph Diffusion for Neural Collaborative Filtering [96.65234340724237]
We show the equivalence between some state-of-the-art GNN-based CF models and a traditional 1-layer NRL model based on context encoding.
We present Markov Graph Diffusion Collaborative Filtering (MGDCF) to generalize some state-of-the-art GNN-based CF models.
arXiv Detail & Related papers (2022-04-05T17:24:32Z)
- Wide and Deep Graph Neural Network with Distributed Online Learning [174.8221510182559]
Graph neural networks (GNNs) are naturally distributed architectures for learning representations from network data.
Online learning can be leveraged to retrain GNNs at testing time to overcome this issue.
This paper develops the Wide and Deep GNN (WD-GNN), a novel architecture that can be updated with distributed online learning mechanisms.
arXiv Detail & Related papers (2021-07-19T23:56:48Z)
- Rethinking pooling in graph neural networks [12.168949038217889]
We study the interplay between convolutional layers and the subsequent pooling ones.
Contrary to common belief, local pooling is not responsible for the success of GNNs on relevant and widely used benchmarks.
arXiv Detail & Related papers (2020-10-22T03:48:56Z)
- Finite Group Equivariant Neural Networks for Games [0.0]
Existing work on group-equivariant CNNs creates networks that can exploit symmetries to improve learning.
We introduce Finite Group Neural Networks (FGNNs), a method for creating agents with an innate understanding of symmetric board positions.
FGNNs are shown to improve the performance of networks playing checkers (draughts) and can be easily adapted to other games and learning problems.
arXiv Detail & Related papers (2020-09-10T17:46:09Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
- Wide and Deep Graph Neural Networks with Distributed Online Learning [175.96910854433574]
Graph neural networks (GNNs) learn representations from network data with naturally distributed architectures.
Online learning can be used to retrain GNNs at testing time, overcoming this issue.
This paper proposes the Wide and Deep GNN (WD-GNN), a novel architecture that can be easily updated with distributed online learning mechanisms.
arXiv Detail & Related papers (2020-06-11T12:48:03Z)
- Visual Commonsense R-CNN [102.5061122013483]
We present a novel unsupervised feature representation learning method, Visual Commonsense Region-based Convolutional Neural Network (VC R-CNN).
VC R-CNN serves as an improved visual region encoder for high-level tasks such as captioning and VQA.
We extensively apply VC R-CNN features in prevailing models of three popular tasks: Image Captioning, VQA, and VCR, and observe consistent performance boosts across them.
arXiv Detail & Related papers (2020-02-27T15:51:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.