Learning More Expressive General Policies for Classical Planning Domains
- URL: http://arxiv.org/abs/2403.11734v2
- Date: Tue, 18 Feb 2025 14:42:02 GMT
- Title: Learning More Expressive General Policies for Classical Planning Domains
- Authors: Simon Ståhlberg, Blai Bonet, Hector Geffner
- Abstract summary: GNN-based approaches for learning general policies across planning domains are limited by the expressive power of $C_2$.
This limitation can be overcome by transitioning to $k$-GNNs, for $k=3$, wherein object embeddings are substituted with triplet embeddings.
Experiments show clear gains of R-GNN[$1$] over plain R-GNNs, and also over Edge Transformers, which likewise approximate $3$-GNNs.
- Score: 15.574717738100727
- Abstract: GNN-based approaches for learning general policies across planning domains are limited by the expressive power of $C_2$, namely, first-order logic with two variables and counting. This limitation can be overcome by transitioning to $k$-GNNs, for $k=3$, wherein object embeddings are substituted with triplet embeddings. Yet, while $3$-GNNs have the expressive power of $C_3$, unlike $1$- and $2$-GNNs that are confined to $C_2$, they require quartic time for message exchange and cubic space to store embeddings, rendering them infeasible in practice. In this work, we introduce a parameterized version R-GNN[$t$] (with parameter $t$) of Relational GNNs. Unlike GNNs, which are designed to perform computation on graphs, Relational GNNs are designed to perform computation on relational structures. When $t=\infty$, R-GNN[$t$] approximates $3$-GNNs over graphs, but using only quadratic space for embeddings. For lower values of $t$, such as $t=1$ and $t=2$, R-GNN[$t$] achieves a weaker approximation by exchanging fewer messages, yet, interestingly, often yields the expressivity required in several planning domains. Furthermore, the new R-GNN[$t$] architecture is the original R-GNN architecture with a suitable transformation applied to the inputs only. Experimental results illustrate clear performance gains of R-GNN[$1$] over plain R-GNNs, and also over Edge Transformers, which likewise approximate $3$-GNNs.
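To make the message-passing idea concrete, here is a minimal sketch of one layer of a relational GNN operating directly on a relational structure (objects plus ground atoms). This is an illustration only, not the authors' R-GNN[$t$]: the class name, the restriction to binary predicates, and the sum aggregation are simplifying assumptions.
```python
# Illustrative sketch only: one message-passing step over a relational
# structure. Not the authors' R-GNN[t]; the names, the binary-predicate
# restriction, and the sum aggregation are assumptions for brevity.
import torch
import torch.nn as nn

class RelationalLayer(nn.Module):
    def __init__(self, num_predicates: int, dim: int):
        super().__init__()
        # One message network per predicate symbol; each ground atom
        # p(o_i, o_j) produces one message per argument position.
        self.msg = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * dim, 2 * dim), nn.ReLU())
            for _ in range(num_predicates)
        )
        self.update = nn.Linear(2 * dim, dim)
        self.dim = dim

    def forward(self, x: torch.Tensor, atoms: list) -> torch.Tensor:
        # x: (num_objects, dim) object embeddings
        # atoms: ground atoms as (predicate_id, obj_i, obj_j) triples
        agg = torch.zeros_like(x)
        for pred, i, j in atoms:
            m = self.msg[pred](torch.cat([x[i], x[j]]))
            mi, mj = m[: self.dim], m[self.dim :]
            # accumulate (sum) the per-argument messages at each object
            agg = agg.index_add(0, torch.tensor([i, j]), torch.stack([mi, mj]))
        return self.update(torch.cat([x, agg], dim=-1))

# Toy usage: 3 objects, one binary predicate with atoms on(0,1) and on(1,2)
layer = RelationalLayer(num_predicates=1, dim=8)
x = torch.randn(3, 8)
x = layer(x, [(0, 0, 1), (0, 1, 2)])  # updated embeddings, shape (3, 8)
```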
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational and Temporal Graphs [8.095679736030146]
We investigate $\mathcal{FOC}_2$, a fragment of first-order logic with two variables and counting quantifiers.
We propose a simple graph transformation technique, akin to a preprocessing step, which can be executed in linear time.
Our results consistently demonstrate that R$^2$-GNN with the graph transformation outperforms the baseline methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-11-03T00:33:24Z) - On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks [59.85314067235965]
We extend the theoretical foundation for the 2nd-order recurrent network (2nd RNN).
We prove that there exists a class of 2nd RNNs that is Turing-complete with bounded time.
We also demonstrate that 2nd-order RNNs, without memory, outperform modern-day models such as vanilla RNNs and gated recurrent units in recognizing regular grammars.
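For reference, a 2nd-order RNN couples hidden and input units through a three-way weight tensor. Below is a minimal sketch of the textbook cell; the exact variant analyzed in the paper may differ, so treat the names and the sigmoid nonlinearity as assumptions.
```python
# Sketch of the classical second-order RNN cell: multiplicative weights
# W[i, j, k] couple hidden unit j and input unit k to produce hidden
# unit i. This is the textbook definition, not necessarily the exact
# variant studied in the paper.
import torch

def second_order_step(h, x, W, b):
    # h: (H,) previous hidden state, x: (X,) input, W: (H, H, X), b: (H,)
    return torch.sigmoid(torch.einsum('ijk,j,k->i', W, h, x) + b)

H, X = 4, 3
W, b = torch.randn(H, H, X), torch.zeros(H)
h = torch.zeros(H)
for x in torch.eye(X):          # feed a toy one-hot input sequence
    h = second_order_step(h, x, W, b)
```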
arXiv Detail & Related papers (2023-09-26T06:06:47Z) - Distance-Restricted Folklore Weisfeiler-Leman GNNs with Provable Cycle Counting Power [27.929167547127207]
The ability of graph neural networks (GNNs) to count certain graph substructures, especially cycles, is important.
We propose a novel class of GNNs: $d$-Distance-Restricted FWL(2) GNNs.
Our model is the most efficient GNN model to date that can count up to 6-cycles.
arXiv Detail & Related papers (2023-09-10T06:13:29Z) - Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman [66.23316415757456]
Message passing neural networks (MPNNs) have emerged as the most popular framework of graph neural networks (GNNs) in recent years.
However, their expressive power is limited by the 1-dimensional Weisfeiler-Lehman (1-WL) test.
We propose an extension, $(k,t)$-FWL, which considers any equivariant set as neighbors instead of all nodes.
N$^2$-GNN achieves record-breaking results on ZINC-Subset (0.059), outperforming previous SOTA results by 10.6%.
arXiv Detail & Related papers (2023-06-05T21:35:32Z) - Densely Connected $G$-invariant Deep Neural Networks with Signed Permutation Representations [6.200483285433661]
We introduce and investigate, for finite groups $G$, $G$-invariant deep neural network ($G$-DNN) architectures with ReLU activation.
The preactivations of the $G$-DNNs are able to transform by signed permutation representations (signed perm-reps) of $G$.
We show that there are far more admissible $G$-DNN architectures than those accessible with the "concatenated ReLU" activation function from the literature.
arXiv Detail & Related papers (2023-03-08T14:35:03Z) - Elastic Graph Neural Networks [32.805937635335255]
We introduce a family of GNNs (Elastic GNNs) built on $\ell_1$- and $\ell_2$-based graph smoothing.
In particular, we incorporate a novel and general message passing scheme into GNNs.
Experiments on semi-supervised learning tasks demonstrate that the proposed Elastic GNNs obtain better adaptivity on benchmark datasets.
arXiv Detail & Related papers (2021-07-05T01:36:01Z) - Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Networks (GNNs) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
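As one concrete instance of this idea, the snippet below augments node features with a simple local graph parameter (per-node triangle counts) before any GNN is applied. The paper treats general pattern counts, so this is an illustrative special case, and the feature dimensions here are arbitrary.
```python
# Hedged sketch: prepend a local graph parameter (triangle counts) to the
# node features of any GNN pipeline. One simple instance of the idea; the
# paper covers more general local parameters.
import networkx as nx
import torch

g = nx.karate_club_graph()
tri = nx.triangles(g)                      # node -> #triangles through it
extra = torch.tensor([[tri[v]] for v in g.nodes()], dtype=torch.float)
x = torch.ones(g.number_of_nodes(), 8)     # placeholder node features
x_aug = torch.cat([x, extra], dim=1)       # feed x_aug into any GNN
```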
arXiv Detail & Related papers (2021-06-12T07:43:51Z) - Identity-aware Graph Neural Networks [63.6952975763946]
We develop a class of message passing Graph Neural Networks (ID-GNNs) with greater expressive power than the 1-WL test.
ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing.
We show that transforming existing GNNs to ID-GNNs yields on average 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
arXiv Detail & Related papers (2021-01-25T18:59:01Z) - RNNs can generate bounded hierarchical languages with optimal memory [113.73133308478612]
We show that RNNs can efficiently generate bounded hierarchical languages that reflect the scaffolding of natural language syntax.
We introduce Dyck-($k$,$m$), the language of well-nested brackets (of $k$ types) and $m$-bounded nesting depth.
We prove, by an explicit construction, that an RNN with $O(m \log k)$ hidden units suffices, an exponential reduction in memory; a minimal membership checker for the language is sketched below.
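As a concrete companion to the definition of Dyck-($k$,$m$) above, here is a small membership checker; the encoding of brackets as (type, is_open) pairs is an illustrative assumption, not the paper's.
```python
# Minimal sketch: membership test for Dyck-(k, m), the language of
# well-nested brackets of k types with nesting depth bounded by m.
# The (bracket_type, is_open) encoding is an illustrative assumption.
def in_dyck_km(word, k: int, m: int) -> bool:
    stack = []
    for typ, is_open in word:
        if not 0 <= typ < k:
            return False                 # unknown bracket type
        if is_open:
            stack.append(typ)
            if len(stack) > m:
                return False             # depth bound m exceeded
        elif not stack or stack.pop() != typ:
            return False                 # mismatched or unopened bracket
    return not stack                     # every bracket must be closed

# "[ ( ) ]" with [ = type 0 and ( = type 1: depth 2, so in Dyck-(2, 2)
word = [(0, True), (1, True), (1, False), (0, False)]
assert in_dyck_km(word, k=2, m=2)
assert not in_dyck_km(word, k=2, m=1)    # depth bound 1 is violated
```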
arXiv Detail & Related papers (2020-10-15T04:42:29Z) - Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)