Foresight of Graph Reinforcement Learning Latent Permutations Learnt by
Gumbel Sinkhorn Network
- URL: http://arxiv.org/abs/2110.12144v1
- Date: Sat, 23 Oct 2021 05:30:43 GMT
- Title: Foresight of Graph Reinforcement Learning Latent Permutations Learnt by
Gumbel Sinkhorn Network
- Authors: Tianqi Shen, Hong Zhang, Ding Yuan, Jiaping Xiao, Yifan Yang
- Abstract summary: We propose Gumbel Sinkhorn graph attention reinforcement learning, in which a graph attention network richly represents the underlying graph topology of the multi-agent environment.
We show that the proposed graph reinforcement learning methodology outperforms existing methods in the PettingZoo multi-agent environment by learning latent permutations.
- Score: 9.316409848022108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperation is of vital importance in multi-agent environments,
and as a result several reinforcement learning algorithms combined with graph
neural networks have been proposed to capture the mutual interplay between
agents. However, highly complicated and dynamic multi-agent environments
demand more ingenious graph neural networks that can comprehensively represent
not only the graph topology but also how that topology evolves as agents
emerge, disappear and move. To tackle these difficulties, we propose Gumbel
Sinkhorn graph attention reinforcement learning, in which a graph attention
network richly represents the underlying graph topology of the multi-agent
environment and, with the help of a Gumbel Sinkhorn network that learns latent
permutations, adapts better to the dynamic topology. Empirically, simulation
results show that the proposed graph reinforcement learning methodology
outperforms existing methods in the PettingZoo multi-agent environment.
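The latent-permutation mechanism named in the abstract can be sketched numerically. Below is a minimal NumPy sketch of the Gumbel-Sinkhorn operator, not the authors' released code: Gumbel noise is added to a learned score matrix, and alternating row/column normalization in log space pushes the result toward a doubly stochastic matrix that approximates a permutation as the temperature shrinks. The function name and parameters (`tau`, `n_iters`) are illustrative.

```python
import numpy as np

def gumbel_sinkhorn(log_scores, tau=1.0, n_iters=50, rng=None):
    """Approximate a latent permutation matrix from a score matrix.

    Gumbel(0, 1) noise makes the sampled matching stochastic; Sinkhorn
    normalization (alternating row/column normalization in log space)
    drives the result toward a doubly stochastic matrix. As tau -> 0
    the output concentrates on a hard permutation.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    gumbel = -np.log(-np.log(rng.uniform(1e-20, 1.0, log_scores.shape)))
    log_alpha = (log_scores + gumbel) / tau
    for _ in range(n_iters):
        # Row normalization in log space (log-sum-exp for stability).
        log_alpha -= np.logaddexp.reduce(log_alpha, axis=1, keepdims=True)
        # Column normalization.
        log_alpha -= np.logaddexp.reduce(log_alpha, axis=0, keepdims=True)
    return np.exp(log_alpha)

scores = np.log(np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.8, 0.1],
                          [0.2, 0.1, 0.7]]))
P = gumbel_sinkhorn(scores, tau=1.0)
```

In the paper's setting, `log_scores` would come from a learned network over agent features; here a fixed matrix stands in so the doubly stochastic property of the output can be checked directly.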
Related papers
- Online Learning Of Expanding Graphs [14.952056744888916]
This paper addresses the problem of online network inference for expanding graphs from a stream of signals.
We introduce a strategy that enables different types of updates for nodes that just joined the network and for previously existing nodes.
arXiv Detail & Related papers (2024-09-13T09:20:42Z)
- Graph Attention Inference of Network Topology in Multi-Agent Systems [0.0]
Our work introduces a novel machine learning-based solution that leverages the attention mechanism to predict future states of multi-agent systems.
The graph structure is then inferred from the strength of the attention values.
Our results demonstrate that the presented data-driven graph attention machine learning model can identify the network topology in multi-agent systems.
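A toy post-processing step consistent with the idea of reading topology off attention strengths (the function name and threshold are hypothetical, not taken from that paper): symmetrize the learned attention weights and threshold them to obtain a binary adjacency matrix.

```python
import numpy as np

def infer_topology(attention, threshold=0.2):
    """Infer a binary adjacency matrix from pairwise attention weights.

    Symmetrizes the (generally asymmetric) attention matrix, keeps
    entries above a threshold as edges, and removes self-loops.
    """
    sym = (attention + attention.T) / 2
    adjacency = (sym > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)  # no self-loops
    return adjacency

# Attention weights a trained model might produce for 3 agents.
att = np.array([[0.9, 0.5, 0.0],
                [0.4, 0.9, 0.1],
                [0.0, 0.0, 0.9]])
A = infer_topology(att)
```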
arXiv Detail & Related papers (2024-08-27T23:58:51Z)
- Topological Neural Networks: Mitigating the Bottlenecks of Graph Neural Networks via Higher-Order Interactions [1.994307489466967]
This work starts with a theoretical framework revealing the impact of a network's width, depth, and graph topology on over-squashing in message-passing neural networks.
It then moves to higher-order interactions and multi-relational inductive biases via Topological Neural Networks.
Inspired by Graph Attention Networks, two topological attention networks are proposed: Simplicial and Cell Attention Networks.
arXiv Detail & Related papers (2024-02-10T08:26:06Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
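The unrolling idea behind GDNs can be sketched as a plain (untrained) truncated proximal-gradient loop. This is a numerical illustration only, not the GDN architecture: the quadratic mixture model `S = A + A@A/2`, the step size, and the sparsity weight are all illustrative assumptions, and a trained GDN would replace the fixed steps with learned layer parameters.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: promotes sparse edge weights.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def graph_deconvolution(S, steps=50, alpha=0.1, lam=0.01):
    """Recover a latent graph A from an observed convolutional mixture.

    Assumes the toy mixture model S = A + A@A/2 and minimizes
    0.5 * ||A + A@A/2 - S||_F^2 + lam * ||A||_1 by truncated proximal
    gradient descent -- the iterations a GDN would unroll into layers.
    """
    A = np.zeros_like(S)
    for _ in range(steps):
        R = A + A @ A / 2 - S                 # mismatch with the mixture
        grad = R + (R @ A + A @ R) / 2        # gradient of the fit term
        A = soft_threshold(A - alpha * grad, lam)
    return A

# Mixture generated from a 3-node path graph under the toy model.
A_true = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
S = A_true + A_true @ A_true / 2
A_hat = graph_deconvolution(S)
```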
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, performance deteriorates as more graph convolution layers are stacked; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Ring Reservoir Neural Networks for Graphs [15.07984894938396]
Reservoir Computing models can play an important role in developing fruitful graph embeddings.
Our core proposal is based on shaping the organization of the hidden neurons to follow a ring topology.
Experimental results on graph classification tasks indicate that ring-reservoir architectures enable particularly effective network configurations.
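The ring topology for a reservoir's hidden neurons is simple to construct: each unit feeds only its successor on the ring. The sketch below (names and scale factor are illustrative, not from that paper) builds such a recurrent weight matrix and runs a standard echo-state update with it; scaling the single cycle by 0.9 fixes every eigenvalue's magnitude at 0.9, keeping the reservoir contractive.

```python
import numpy as np

def ring_reservoir(n_units, scale=0.9):
    """Recurrent weight matrix whose hidden neurons form a ring.

    Unit i connects only to unit (i + 1) % n_units, so W is `scale`
    times a cyclic permutation matrix and all eigenvalues have
    magnitude `scale`.
    """
    W = np.zeros((n_units, n_units))
    for i in range(n_units):
        W[(i + 1) % n_units, i] = scale
    return W

def run_reservoir(W, w_in, inputs):
    # Standard echo-state update: h_t = tanh(W @ h_{t-1} + w_in * u_t).
    h = np.zeros(W.shape[0])
    for u in inputs:
        h = np.tanh(W @ h + w_in * u)
    return h

W = ring_reservoir(5)
h = run_reservoir(W, np.full(5, 0.5), [1.0, -1.0, 0.5])
```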
arXiv Detail & Related papers (2020-05-11T17:51:40Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.