Graph Neural Network based Agent in Google Research Football
- URL: http://arxiv.org/abs/2204.11142v1
- Date: Sat, 23 Apr 2022 21:26:00 GMT
- Title: Graph Neural Network based Agent in Google Research Football
- Authors: Yizhan Niu, Jinglong Liu, Yuhao Shi, Jiren Zhu
- Abstract summary: Some deep neural networks, such as convolutional neural networks (CNNs), cannot extract enough information or take too long to obtain enough features from the inputs under specific circumstances of reinforcement learning.
This paper proposes a deep q-learning network (DQN) with a graph neural network (GNN) as its model.
The GNN transforms the input data into a graph that better represents the football players' locations, so that it extracts more information about the interactions between different players.
- Score: 0.5735035463793007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNN) can approximate value functions or policies for
reinforcement learning, which makes the reinforcement learning algorithms more
powerful. However, some DNNs, such as convolutional neural networks (CNN),
cannot extract enough information or take too long to obtain enough features
from the inputs under specific circumstances of reinforcement learning. For
example, the input data of Google Research Football, a reinforcement learning
environment which trains agents to play football, is the small map of players'
locations. The information is contained not only in the coordinates of players,
but also in the relationships between different players. CNNs either cannot
extract enough information or take too long to train. To address this issue,
this paper proposes a deep q-learning network (DQN) with a graph neural network
(GNN) as its model. The GNN transforms the input data into a graph that better
represents the football players' locations, so that it extracts more information
about the interactions between different players. With two GNNs to approximate its
local and target value functions, this DQN allows players to learn from their
experience by using value functions to see the prospective value of each
intended action. The proposed model demonstrated the power of GNN in the
football game by outperforming other DRL models with significantly fewer steps.
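The pipeline the abstract describes — build a graph from player coordinates, pass it through a GNN, and read out Q-values, with a local and a target network — can be sketched in plain Python. This is an illustrative assumption, not the paper's implementation: the distance threshold, the mean-aggregation layer, the scalar weights, and the pooled readout are all placeholder choices.

```python
import math

def build_graph(positions, radius=1.0):
    """Connect players whose Euclidean distance is below `radius` (assumed rule)."""
    n = len(positions)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) < radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def message_pass(features, adj, weight):
    """One toy GNN layer: average neighbour features, then a scalar linear map."""
    out = []
    for i, feat in enumerate(features):
        neigh = adj[i] or [i]  # fall back to self-loop if the node is isolated
        agg = [sum(features[j][k] for j in neigh) / len(neigh)
               for k in range(len(feat))]
        out.append([weight * (f + a) for f, a in zip(feat, agg)])
    return out

def q_values(features, n_actions=4):
    """Toy readout: pool node features into one scalar, one value per action."""
    pooled = sum(map(sum, features)) / len(features)
    return [pooled * (a + 1) for a in range(n_actions)]

# Local and target networks share the same structure; the target is a frozen
# copy that is refreshed periodically, as in standard DQN.
positions = [(0.0, 0.0), (0.5, 0.2), (3.0, 3.0)]
features = [[x, y] for x, y in positions]
adj = build_graph(positions)
local_w, target_w = 0.5, 0.5  # target starts as a copy of the local weights
q_local = q_values(message_pass(features, adj, local_w))
q_target = q_values(message_pass(features, adj, target_w))
best_action = max(range(len(q_local)), key=q_local.__getitem__)
```

In a full agent, `q_target` would supply the bootstrap value in the temporal-difference loss while only the local network's weights are trained, with the target weights copied over every few thousand steps.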
Related papers
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z) - From Images to Connections: Can DQN with GNNs learn the Strategic Game of Hex? [22.22813915303447]
We investigate whether graph neural networks (GNNs) can replace convolutional neural networks (CNNs) in self-play reinforcement learning.
GNNs excel at dealing with long range dependency situations in game states and are less prone to overfitting.
This suggests a potential paradigm shift, signaling the use of game-specific structures to reshape self-play reinforcement learning.
arXiv Detail & Related papers (2023-11-22T14:20:15Z) - Poster: Link between Bias, Node Sensitivity and Long-Tail Distribution in trained DNNs [12.404169549562523]
Training datasets with a long-tail distribution pose a challenge for deep neural networks (DNNs).
This work identifies the node bias that leads to a varying sensitivity of the nodes for different output classes.
We support our reasoning using an empirical case study of the networks trained on a real-world dataset.
arXiv Detail & Related papers (2023-03-29T10:49:31Z) - Complex Network for Complex Problems: A comparative study of CNN and Complex-valued CNN [0.0]
Complex-valued convolutional neural networks (CV-CNN) can preserve the algebraic structure of complex-valued input data.
In terms of the actual number of real-valued trainable parameters, CV-CNNs have double the parameters of real-valued CNNs.
This paper presents a comparative study of CNN, CNNx2 (CNN with double the number of trainable parameters as the CNN), and CV-CNN.
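The doubling claim above follows from the fact that each complex number stores two real values (a real and an imaginary part). A minimal sketch of the arithmetic — the layer shapes here are arbitrary examples, not taken from the paper:

```python
def conv_params(c_in, c_out, k, complex_valued=False, bias=True):
    """Count real-valued trainable parameters in one k x k conv layer.

    A complex weight (and complex bias) stores two real numbers, so the
    complex-valued layer holds twice as many real parameters at equal shape.
    """
    per_value = 2 if complex_valued else 1
    weights = c_in * c_out * k * k
    biases = c_out if bias else 0
    return per_value * (weights + biases)

# Example shape: 3 input channels, 64 output channels, 3x3 kernel.
real_cnn = conv_params(3, 64, 3)                        # 3*64*9 + 64 = 1792
cv_cnn = conv_params(3, 64, 3, complex_valued=True)     # exactly double
```

This is why the comparison in the paper includes CNNx2: matching the real-valued baseline's parameter count to the CV-CNN isolates the effect of the complex algebraic structure from the effect of simply having more parameters.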
arXiv Detail & Related papers (2023-02-09T11:51:46Z) - Wide and Deep Graph Neural Network with Distributed Online Learning [174.8221510182559]
Graph neural networks (GNNs) are naturally distributed architectures for learning representations from network data.
Online learning can be leveraged to retrain GNNs at testing time to overcome this issue.
This paper develops the Wide and Deep GNN (WD-GNN), a novel architecture that can be updated with distributed online learning mechanisms.
arXiv Detail & Related papers (2021-07-19T23:56:48Z) - Training Graph Neural Networks with 1000 Layers [133.84813995275988]
We study reversible connections, group convolutions, weight tying, and equilibrium models to advance the memory and parameter efficiency of GNNs.
To the best of our knowledge, RevGNN-Deep is the deepest GNN in the literature by one order of magnitude.
arXiv Detail & Related papers (2021-06-14T15:03:00Z) - Graph-Free Knowledge Distillation for Graph Neural Networks [30.38128029453977]
We propose the first dedicated approach to distilling knowledge from a graph neural network without graph data.
The proposed graph-free KD (GFKD) learns graph topology structures for knowledge transfer by modeling them with multinomial distribution.
We provide the strategies for handling different types of prior knowledge in the graph data or the GNNs.
arXiv Detail & Related papers (2021-05-16T21:38:24Z) - A Practical Tutorial on Graph Neural Networks [49.919443059032226]
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI).
This tutorial exposes the power and novelty of GNNs to AI practitioners.
arXiv Detail & Related papers (2020-10-11T12:36:17Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - Wide and Deep Graph Neural Networks with Distributed Online Learning [175.96910854433574]
Graph neural networks (GNNs) learn representations from network data with naturally distributed architectures.
Online learning can be used to retrain GNNs at testing time, overcoming this issue.
This paper proposes the Wide and Deep GNN (WD-GNN), a novel architecture that can be easily updated with distributed online learning mechanisms.
arXiv Detail & Related papers (2020-06-11T12:48:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.