My Body is a Cage: the Role of Morphology in Graph-Based Incompatible
Control
- URL: http://arxiv.org/abs/2010.01856v2
- Date: Wed, 14 Apr 2021 09:48:02 GMT
- Title: My Body is a Cage: the Role of Morphology in Graph-Based Incompatible
Control
- Authors: Vitaly Kurin, Maximilian Igl, Tim Rocktäschel, Wendelin Boehmer,
Shimon Whiteson
- Abstract summary: We present a series of ablations on existing methods that show that morphological information encoded in the graph does not improve their performance.
Motivated by the hypothesis that any benefits GNNs extract from the graph structure are outweighed by difficulties they create for message passing, we also propose Amorpheus.
- Score: 65.77164390203396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multitask Reinforcement Learning is a promising way to obtain models with
better performance, generalisation, data efficiency, and robustness. Most
existing work is limited to compatible settings, where the state and action
space dimensions are the same across tasks. Graph Neural Networks (GNN) are one
way to address incompatible environments, because they can process graphs of
arbitrary size. They also allow practitioners to inject biases encoded in the
structure of the input graph. Existing work in graph-based continuous control
uses the physical morphology of the agent to construct the input graph, i.e.,
encoding limb features as node labels and using edges to connect the nodes if
their corresponding limbs are physically connected. In this work, we present a
series of ablations on existing methods that show that morphological
information encoded in the graph does not improve their performance. Motivated
by the hypothesis that any benefits GNNs extract from the graph structure are
outweighed by difficulties they create for message passing, we also propose
Amorpheus, a transformer-based approach. Further results show that, while
Amorpheus ignores the morphological information that GNNs encode, it
nonetheless substantially outperforms GNN-based methods that use the
morphological information to define the message-passing scheme.
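
As a concrete illustration of the two input treatments described in the abstract, the sketch below (an invented example, not the authors' code; `limb_feats`, `adjacency`, and the module names are hypothetical) contrasts a message-passing layer whose propagation is restricted to physically connected limbs with an Amorpheus-style attention layer that treats the limbs as an unordered set and ignores the morphology graph.

```python
import torch
import torch.nn as nn

# Hypothetical example: a 4-limb walker. Each limb is a node with a feature
# vector (e.g. joint angle, velocity, limb type); `adjacency` marks which
# limbs are physically attached to each other.
limb_feats = torch.randn(4, 16)                      # (num_limbs, feat_dim)
adjacency = torch.tensor([[0, 1, 0, 0],
                          [1, 0, 1, 1],
                          [0, 1, 0, 0],
                          [0, 1, 0, 0]], dtype=torch.float32)

# (a) Morphology-aware GNN layer: messages flow only along physical joints.
class MorphologyGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # Aggregate messages from physically connected limbs only.
        messages = adj @ self.msg(x)                  # (num_limbs, dim)
        return torch.relu(self.upd(torch.cat([x, messages], dim=-1)))

# (b) Amorpheus-style layer: self-attention over the set of limbs,
#     with no adjacency mask, so the morphology graph is ignored.
class LimbSetAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        x = x.unsqueeze(0)                            # add batch dimension
        out, _ = self.attn(x, x, x)                   # every limb attends to every limb
        return out.squeeze(0)

gnn_out = MorphologyGNNLayer(16)(limb_feats, adjacency)
attn_out = LimbSetAttention(16)(limb_feats)
print(gnn_out.shape, attn_out.shape)                  # both (4, 16): per-limb outputs
```

In both cases the per-limb outputs would still be decoded into per-joint actions; the only difference is whether the adjacency matrix constrains the information flow.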
Related papers
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
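
A minimal sketch of the stratification idea above, assuming nodes are grouped by degree with arbitrary thresholds (the grouping boundaries and class name below are invented for illustration, not taken from the paper):

```python
import torch
import torch.nn as nn

class DegreeStratifiedLayer(nn.Module):
    """Applies a separate linear transform to each degree group of nodes."""
    def __init__(self, in_dim, out_dim, boundaries=(2, 5)):
        super().__init__()
        # boundaries=(2, 5) gives three groups: deg < 2, 2 <= deg < 5, deg >= 5
        self.boundaries = torch.tensor(boundaries, dtype=torch.float32)
        self.weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(len(boundaries) + 1)
        )

    def forward(self, x, adj):
        # Mean-aggregate neighbour features (a plain GCN-style step).
        deg = adj.sum(dim=1)
        h = (adj @ x) / deg.clamp(min=1).unsqueeze(-1)
        # Route each node to the weight matrix of its degree group.
        group = torch.bucketize(deg, self.boundaries)
        out = torch.zeros(x.size(0), self.weights[0].out_features)
        for g, lin in enumerate(self.weights):
            mask = group == g
            if mask.any():
                out[mask] = lin(h[mask])
        return torch.relu(out)

adj = (torch.rand(10, 10) < 0.3).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
x = torch.randn(10, 8)
print(DegreeStratifiedLayer(8, 16)(x, adj).shape)      # torch.Size([10, 16])
```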
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
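
To illustrate how all-pair message passing can avoid an explicit N x N attention matrix, the sketch below linearises softmax attention with generic positive random features; this is a stand-in for intuition only, not NodeFormer's kernelized Gumbel-Softmax operator:

```python
import torch

def random_feature_attention(q, k, v, num_feats=64):
    """Approximate softmax attention with positive random features,
    so all N nodes exchange messages without forming an N x N matrix."""
    d = q.size(-1)
    q, k = q / d ** 0.25, k / d ** 0.25                # fold in the 1/sqrt(d) scaling
    w = torch.randn(d, num_feats)                      # shared random projection

    def phi(x):
        # Positive random features for the exponential (softmax) kernel.
        return torch.exp(x @ w - (x ** 2).sum(-1, keepdim=True) / 2) / num_feats ** 0.5

    q_f, k_f = phi(q), phi(k)                          # (N, num_feats)
    kv = k_f.t() @ v                                   # (num_feats, d): summarises all keys/values
    normaliser = q_f @ k_f.sum(dim=0)                  # (N,)
    return (q_f @ kv) / normaliser.unsqueeze(-1)       # (N, d)

n, d = 1000, 32
x = torch.randn(n, d)
print(random_feature_attention(x, x, x).shape)         # torch.Size([1000, 32])
```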
- GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets [19.71442902979904]
Graph neural networks (GNNs) are built on the assumption of a static set of features characterizing each node in a graph.
In this work, we address these limitations through a novel GNN framework called GRAFENNE.
We prove that GRAFENNE is at least as expressive as any of the existing message-passing GNNs in terms of Weisfeiler-Leman tests.
arXiv Detail & Related papers (2023-06-06T07:00:24Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
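
As a rough picture of why the fully connected graph is prohibitive and how restricting message passing helps, the sketch below lets each node attend only to its top-k most similar nodes; this is a generic sparsification for illustration, not the dynamic sampling scheme proposed in the paper:

```python
import torch

def topk_sparse_message_passing(x, k=8):
    """Each node aggregates from its k most similar nodes instead of all N,
    so only O(N * k) attention weights are kept per layer instead of O(N^2)."""
    # For clarity this still scores all pairs; a practical dynamic scheme
    # would sample candidate neighbours instead of forming the dense matrix.
    sim = x @ x.t()                                    # (N, N) similarity scores
    topk_val, topk_idx = sim.topk(k, dim=-1)           # keep k strongest links per node
    weights = torch.softmax(topk_val, dim=-1)          # (N, k)
    neighbours = x[topk_idx]                           # (N, k, d) gathered features
    return (weights.unsqueeze(-1) * neighbours).sum(dim=1)

x = torch.randn(4096, 64)                              # e.g. feature-map positions as nodes
print(topk_sparse_message_passing(x).shape)            # torch.Size([4096, 64])
```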
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have demonstrated a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as directed or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
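
One toy way to picture "adaptively learning the directionality" mentioned above: propagate along both the adjacency matrix and its transpose and let a learnable gate decide the mix (an illustrative simplification, not the model proposed in the paper):

```python
import torch
import torch.nn as nn

class DirectionGatedPropagation(nn.Module):
    """Propagates features along A and A^T, mixed by a learnable gate,
    so the model can decide how 'directed' to treat the edges."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Parameter(torch.tensor(0.0))    # sigmoid(0) = 0.5: start undirected
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        alpha = torch.sigmoid(self.gate)
        # Row-normalise each direction separately before mixing.
        fwd = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)
        bwd = adj.t() / adj.t().sum(dim=1, keepdim=True).clamp(min=1)
        prop = alpha * fwd + (1 - alpha) * bwd
        return torch.relu(self.lin(prop @ x))

adj = (torch.rand(20, 20) < 0.1).float()               # a random directed graph
x = torch.randn(20, 32)
print(DirectionGatedPropagation(32)(x, adj).shape)     # torch.Size([20, 32])
```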
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric models for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
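
The "feature smoothing" view above can be made concrete with an SGC/SIGN-style precomputation on a homogeneous graph (a generic sketch; the paper's contribution is about extending such methods to heterogeneous graphs):

```python
import torch
import torch.nn as nn

def smooth_features(x, adj, hops=2):
    """Precompute k-hop neighbourhood averages once, with no learned weights;
    a plain MLP on the smoothed features then replaces end-to-end GNN training."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    a_norm = adj / deg                                 # row-normalised adjacency
    smoothed = [x]
    for _ in range(hops):
        smoothed.append(a_norm @ smoothed[-1])         # one more hop of averaging
    return torch.cat(smoothed, dim=-1)                 # concatenate 0..k hop views

adj = (torch.rand(100, 100) < 0.05).float()
x = torch.randn(100, 16)
features = smooth_features(x, adj)                     # (100, 48): the graph is no longer needed
clf = nn.Sequential(nn.Linear(features.size(-1), 64), nn.ReLU(), nn.Linear(64, 7))
print(clf(features).shape)                             # torch.Size([100, 7])
```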
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
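
The permutation-equivariance property mentioned above can be checked numerically for a polynomial graph filter y = sum_k h_k S^k x, where S is the graph shift operator (a small self-contained check, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_filter(S, x, h):
    """Polynomial graph filter: y = sum_k h[k] * S^k @ x."""
    y, Sk_x = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * Sk_x
        Sk_x = S @ Sk_x
    return y

n = 6
S = rng.random((n, n)) * (rng.random((n, n)) < 0.4)    # graph shift operator (weighted adjacency)
S = (S + S.T) / 2
x = rng.standard_normal(n)
h = [0.5, 0.3, 0.2]                                    # filter taps

P = np.eye(n)[rng.permutation(n)]                      # a permutation matrix
lhs = graph_filter(P @ S @ P.T, P @ x, h)              # filter the relabelled graph and signal
rhs = P @ graph_filter(S, x, h)                        # relabel the filtered output
print(np.allclose(lhs, rhs))                           # True: permutation equivariance
```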