Graph Convolutional Memory for Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2106.14117v1
- Date: Sun, 27 Jun 2021 00:22:51 GMT
- Title: Graph Convolutional Memory for Deep Reinforcement Learning
- Authors: Steven D. Morad, Stephan Liwicki, Amanda Prorok
- Abstract summary: We present graph convolutional memory (GCM) for solving POMDPs using deep reinforcement learning.
Unlike recurrent neural networks (RNNs) or transformers, GCM embeds domain-specific priors into the memory recall process via a knowledge graph.
Using graph convolutions, GCM extracts hierarchical graph features, analogous to image features in a convolutional neural network (CNN).
- Score: 8.229775890542967
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Solving partially-observable Markov decision processes (POMDPs) is critical
when applying deep reinforcement learning (DRL) to real-world robotics
problems, where agents have an incomplete view of the world. We present graph
convolutional memory (GCM) for solving POMDPs using deep reinforcement
learning. Unlike recurrent neural networks (RNNs) or transformers, GCM embeds
domain-specific priors into the memory recall process via a knowledge graph. By
encapsulating priors in the graph, GCM adapts to specific tasks but remains
applicable to any DRL task. Using graph convolutions, GCM extracts hierarchical
graph features, analogous to image features in a convolutional neural network
(CNN). We show GCM outperforms long short-term memory (LSTM), gated
transformers for reinforcement learning (GTrXL), and differentiable neural
computers (DNCs) on control, long-term non-sequential recall, and 3D navigation
tasks while using significantly fewer parameters.
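To make the mechanism concrete, below is a minimal sketch of the GCM idea in PyTorch. This is not the authors' implementation: the names (GCMemory, temporal_prior) are illustrative, and the simple previous-observation prior stands in for whatever domain-specific knowledge-graph prior a task would supply.

    import torch
    import torch.nn as nn

    def temporal_prior(num_nodes: int) -> torch.Tensor:
        # Example domain prior (assumption): connect each observation to its
        # predecessor; real priors could encode spatial or semantic structure.
        adj = torch.eye(num_nodes)  # self-loops
        idx = torch.arange(1, num_nodes)
        adj[idx, idx - 1] = 1.0
        adj[idx - 1, idx] = 1.0
        return adj

    class GCMemory(nn.Module):
        def __init__(self, obs_dim: int, hidden_dim: int, num_layers: int = 2):
            super().__init__()
            dims = [obs_dim] + [hidden_dim] * num_layers
            self.weights = nn.ModuleList(
                nn.Linear(a, b, bias=False) for a, b in zip(dims[:-1], dims[1:])
            )

        def forward(self, obs_history: torch.Tensor) -> torch.Tensor:
            # obs_history: (T, obs_dim), one graph node per past observation.
            adj = temporal_prior(obs_history.shape[0])
            deg = adj.sum(dim=1)
            norm = adj / torch.sqrt(deg[:, None] * deg[None, :])  # D^-1/2 A D^-1/2
            h = obs_history
            for lin in self.weights:
                h = torch.relu(norm @ lin(h))  # one graph convolution
            return h[-1]  # readout: embedding of the newest observation

    memory = GCMemory(obs_dim=8, hidden_dim=32)
    belief = memory(torch.randn(10, 8))  # summary vector fed to the policy

Stacking graph convolutions lets distant-but-connected observations influence the readout, which is the sense in which the extracted features are hierarchical, much like growing receptive fields in a CNN.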
Related papers
- Can Graph Reordering Speed Up Graph Neural Network Training? An Experimental Study [13.354505458409957]
Graph neural networks (GNNs) are capable of learning on graph-structured data.
The sparsity of graphs results in suboptimal memory access patterns and longer training time.
We show that graph reordering is effective in reducing training time for CPU- and GPU-based training.
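The summary above describes a systems optimization rather than a model change. A hedged sketch of the general idea, using a classic bandwidth-reducing ordering (Reverse Cuthill-McKee) that may differ from the orderings the paper evaluates:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    # Toy sparse graph; a real GNN workload would load its adjacency matrix.
    adj = sp.random(1000, 1000, density=0.01, format="csr")
    adj = (adj + adj.T).tocsr()  # symmetrize

    # Permute node IDs so neighbors get nearby indices, improving cache
    # locality when neighbor features are gathered during training.
    perm = reverse_cuthill_mckee(adj, symmetric_mode=True)
    adj_reordered = adj[perm, :][:, perm]

    # Features must be permuted consistently so gathers stay aligned.
    feats = np.random.randn(1000, 64)[perm]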
arXiv Detail & Related papers (2024-09-17T12:28:02Z)
- Enhancing Length Extrapolation in Sequential Models with Pointer-Augmented Neural Memory [66.88278207591294]
We propose Pointer-Augmented Neural Memory (PANM) to help neural networks understand and apply symbol processing to new, longer sequences of data.
PANM integrates an external neural memory that uses novel physical addresses and pointer manipulation techniques to mimic human and computer symbol processing abilities.
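As a loose illustration of physical addressing (an assumed simplification, not PANM's actual architecture), the key contrast with content-based attention is that reads follow integer slot indices, so pointer arithmetic can generalize to sequence lengths never seen in training:

    import torch

    class SlotMemory:
        # Toy memory with physical (index-based) addressing.
        def __init__(self, num_slots: int, dim: int):
            self.slots = torch.zeros(num_slots, dim)

        def write(self, address: int, value: torch.Tensor) -> None:
            self.slots[address] = value  # the slot index, not content, decides "where"

        def read(self, pointer: int) -> torch.Tensor:
            return self.slots[pointer]   # dereference

    mem = SlotMemory(num_slots=16, dim=8)
    for t, token in enumerate(torch.randn(5, 8)):
        mem.write(t, token)              # store the sequence in address order
    first = mem.read(0)
    second = mem.read(0 + 1)             # pointer arithmetic mimics symbol processing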
arXiv Detail & Related papers (2024-04-18T03:03:46Z)
- Layer-wise training for self-supervised learning on graphs [0.0]
End-to-end training of graph neural networks (GNNs) on large graphs presents several memory and computational challenges.
We propose Layer-wise Regularized Graph Infomax, an algorithm to train GNNs layer by layer in a self-supervised manner.
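A minimal sketch of the layer-by-layer regime (assumed setup; the dropout-invariance loss below is only a placeholder for the paper's regularized Graph Infomax objective): each layer trains against its own self-supervised loss, then its outputs are detached and frozen as the next layer's input, so no full-depth backward pass is ever needed:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    x = torch.randn(100, 16)   # node features
    adj = torch.eye(100)       # stand-in for a normalized adjacency matrix
    layers = [nn.Linear(16, 32, bias=False), nn.Linear(32, 32, bias=False)]

    h = x
    for layer in layers:
        opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
        for _ in range(50):                       # train only this layer
            opt.zero_grad()
            # Placeholder objective (assumption): make the layer's output
            # invariant to input dropout noise.
            v1 = torch.relu(adj @ layer(F.dropout(h, 0.2)))
            v2 = torch.relu(adj @ layer(F.dropout(h, 0.2)))
            F.mse_loss(v1, v2).backward()
            opt.step()
        h = torch.relu(adj @ layer(h)).detach()   # freeze outputs for the next layer

Peak memory now scales with one layer's activations rather than the whole network's, which is the point for large graphs.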
arXiv Detail & Related papers (2023-09-04T10:23:39Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have driven major advances in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
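For reference, the graph-convolution step such analyses typically study is the standard GCN layer (generic notation, not necessarily the paper's):

    \[
      H^{(l+1)} = \sigma\!\left(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}\,H^{(l)}W^{(l)}\right),
      \qquad \hat{A} = A + I,\quad \hat{D}_{ii} = \sum\nolimits_j \hat{A}_{ij}.
    \]

Each layer averages neighbor features (after adding self-loops) before a learned linear map and nonlinearity; feature-learning analyses ask what signal this averaging preserves or amplifies relative to a graph-free network.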
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction (GLEAM), an efficient training strategy for high-dimensional imaging settings.
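A hedged sketch of greedy, block-wise training for an unrolled network (assumed structure, not GLEAM's code): each unrolled iteration gets its own loss and optimizer, and gradients never cross iteration boundaries, capping memory at one block instead of the full unroll:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in blocks for the unrolled iterations; a real network would
    # alternate a physics-based data-consistency step with a learned denoiser.
    blocks = nn.ModuleList(nn.Linear(64, 64) for _ in range(4))
    opts = [torch.optim.Adam(b.parameters(), lr=1e-4) for b in blocks]

    x, target = torch.randn(8, 64), torch.randn(8, 64)  # toy input / ground truth
    h = x
    for block, opt in zip(blocks, opts):
        opt.zero_grad()
        out = block(h)
        F.mse_loss(out, target).backward()  # per-block loss; no backprop across blocks
        opt.step()
        h = out.detach()                    # cut the graph between iterations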
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
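The paper compares several binarization strategies; the sketch below shows one common ingredient (a straight-through estimator, an assumed variant rather than the paper's exact recipe): weights are binarized in the forward pass while gradients flow to latent real-valued weights:

    import torch

    class BinarizeSTE(torch.autograd.Function):
        @staticmethod
        def forward(ctx, w):
            return torch.sign(w)        # {-1, +1} weights in the forward pass

        @staticmethod
        def backward(ctx, grad_out):
            return grad_out             # straight-through: identity gradient

    def binary_graph_conv(adj, x, w_real):
        w_bin = BinarizeSTE.apply(w_real)   # binarize on the fly
        return torch.relu(adj @ x @ w_bin)

    adj = torch.eye(50)                     # stand-in normalized adjacency
    x = torch.randn(50, 16)
    w = torch.randn(16, 32, requires_grad=True)
    binary_graph_conv(adj, x, w).sum().backward()  # gradients reach latent weights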
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
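As a rough simplification of the two-path idea (our assumption; the actual block aggregates several granularities per path, inception-style), one branch mixes features across skeleton joints via the graph while the other convolves each joint's features over time:

    import torch
    import torch.nn as nn

    class STBlock(nn.Module):
        def __init__(self, channels: int, num_joints: int):
            super().__init__()
            self.register_buffer("adj", torch.eye(num_joints))  # stand-in skeleton graph
            self.spatial = nn.Linear(channels, channels)
            self.temporal = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

        def forward(self, x):  # x: (frames, joints, channels)
            s = torch.einsum("jk,tkc->tjc", self.adj, self.spatial(x))  # spatial path
            t = self.temporal(x.permute(1, 2, 0)).permute(2, 0, 1)      # temporal path
            return torch.relu(s + t)

    block = STBlock(channels=16, num_joints=25)
    out = block(torch.randn(30, 25, 16))  # 30 frames of a 25-joint skeleton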
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences arising from its use.