Choosing a Classical Planner with Graph Neural Networks
- URL: http://arxiv.org/abs/2402.04874v1
- Date: Thu, 25 Jan 2024 13:04:27 GMT
- Title: Choosing a Classical Planner with Graph Neural Networks
- Authors: Jana Vatter, Ruben Mayer, Hans-Arno Jacobsen, Horst Samulowitz,
Michael Katz
- Abstract summary: We show the effectiveness of a variety of GNN-based online planner selection methods.
We propose using the graph representation obtained by a GNN as an input to the Extreme Gradient Boosting (XGBoost) model.
- Score: 23.36706049948896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online planner selection is the task of choosing a solver out of a predefined
set for a given planning problem. As planning is computationally hard, the
performance of solvers varies greatly on planning problems. Thus, the ability
to predict their performance on a given problem is of great importance. While a
variety of learning methods have been employed, for classical cost-optimal
planning the prevailing approach uses Graph Neural Networks (GNNs). In this
work, we continue the line of work on using GNNs for online planner selection.
We perform a thorough investigation of the impact of the chosen GNN model,
graph representation and node features, as well as prediction task. Going
further, we propose using the graph representation obtained by a GNN as an
input to the Extreme Gradient Boosting (XGBoost) model, resulting in a more
resource-efficient yet accurate approach. We show the effectiveness of a
variety of GNN-based online planner selection methods, opening up new exciting
avenues for research on online planner selection.
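The core pipeline the abstract describes can be sketched in a few lines: run a message-passing layer over the planning problem's graph, pool node embeddings into one fixed-size vector, and hand that vector to a gradient-boosted-trees model. The layer, pooling choice, and dimensions below are illustrative assumptions, not the paper's actual architecture; numpy stands in for a real GNN library.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_embed(adj, x, w):
    """One round of mean-neighbor message passing followed by a ReLU.
    adj: (n, n) adjacency matrix, x: (n, d) node features, w: (d, h) weights."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid division by zero
    h = (adj @ x) / deg          # aggregate neighbor features
    return np.maximum(h @ w, 0)  # linear transform + ReLU

def graph_representation(adj, x, w):
    """Mean-pool node embeddings into one fixed-size vector per graph."""
    return gnn_embed(adj, x, w).mean(axis=0)

# Toy planning-problem graph: 4 nodes, 3-dim node features.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
x = rng.normal(size=(4, 3))
w = rng.normal(size=(3, 8))

vec = graph_representation(adj, x, w)  # 8-dim graph embedding
# In the approach the abstract proposes, vectors like `vec` (one per
# planning problem) would be fed to xgboost.XGBClassifier or a similar
# gradient-boosted-trees model to predict the best planner.
```

The appeal of this split is resource efficiency: once embeddings are extracted, training and re-training the XGBoost head is far cheaper than end-to-end GNN training.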
Related papers
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z) - Search to Fine-tune Pre-trained Graph Neural Networks for Graph-level
Tasks [22.446655655309854]
Graph neural networks (GNNs) have shown their unprecedented success in many graph-related tasks.
Recent efforts try to pre-train GNNs on a large-scale unlabeled graph and adapt the knowledge from the unlabeled graph to the target downstream task.
Despite the importance of fine-tuning, current GNN pre-training works often overlook the design of a good fine-tuning strategy.
arXiv Detail & Related papers (2023-08-14T06:32:02Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
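The graphon idea above rests on a simple sampling procedure: draw latent node positions uniformly from [0, 1], then realize each edge as a Bernoulli trial with probability given by the graphon. A minimal sketch, with an example graphon chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_from_graphon(w, n):
    """Sample an n-node simple graph from a graphon w: [0,1]^2 -> [0,1].
    Latent positions are uniform; each edge is Bernoulli(w(u_i, u_j))."""
    u = rng.uniform(size=n)                  # latent node positions
    p = w(u[:, None], u[None, :])            # pairwise edge probabilities
    draws = rng.uniform(size=(n, n)) < p     # Bernoulli draws
    adj = np.triu(draws, k=1)                # keep upper triangle, no self-loops
    return (adj | adj.T).astype(float)       # symmetrize

# Example graphon: denser edges between nodes with similar latent positions.
w = lambda x, y: 0.9 * np.exp(-3.0 * np.abs(x - y))

adj = sample_from_graphon(w, 50)
```

Training on a sequence of such samples with growing n is what lets the GNN's parameters transfer toward the limit object.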
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - GNNInterpreter: A Probabilistic Generative Model-Level Explanation for
Graph Neural Networks [25.94529851210956]
We propose a model-agnostic model-level explanation method for different Graph Neural Networks (GNNs) that follow the message passing scheme, GNNInterpreter.
GNNInterpreter learns a probabilistic generative graph distribution that produces the most discriminative graph pattern the GNN tries to detect.
Compared to existing works, GNNInterpreter is more flexible and computationally efficient in generating explanation graphs with different types of node and edge features.
arXiv Detail & Related papers (2022-09-15T07:45:35Z) - Robust Reinforcement Learning on Graphs for Logistics optimization [0.0]
We have analyzed the most recent results in both fields and selected SOTA algorithms from graph neural networks and reinforcement learning.
Our team compared three algorithms - GAT, Pro-CNN and PTDNet.
We achieved SOTA results on the AMOD system optimization problem by employing PTDNet with a GNN and training it in a reinforcement learning fashion.
arXiv Detail & Related papers (2022-05-25T16:16:28Z) - GPN: A Joint Structural Learning Framework for Graph Neural Networks [36.38529113603987]
We propose a GNN-based joint learning framework that simultaneously learns the graph structure and the downstream task.
Our method is the first GNN-based bilevel optimization framework for resolving this task.
arXiv Detail & Related papers (2022-05-12T09:06:04Z) - Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Networks (GNNs) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
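One concrete instance of a local graph parameter is a per-node triangle count, which can be appended to the feature matrix before any GNN layer runs. A sketch under that assumption (the paper covers a broader family of parameters):

```python
import numpy as np

def add_triangle_counts(adj, x):
    """Append a local graph parameter -- the number of triangles each node
    participates in -- as an extra node-feature column.
    adj: (n, n) 0/1 adjacency matrix of a simple graph, x: (n, d) features."""
    # diag(A^3) counts closed 3-walks; each triangle through a node yields 2.
    triangles = np.diag(np.linalg.matrix_power(adj, 3)) / 2.0
    return np.hstack([x, triangles[:, None]])

# Triangle on nodes {0, 1, 2} plus a pendant node 3 attached to node 0.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]], dtype=float)
x = np.ones((4, 2))

xt = add_triangle_counts(adj, x)
# last column: [1., 1., 1., 0.] -- triangle counts per node
```

Because message passing alone cannot distinguish some structures that triangle counts separate, augmenting features this way strictly increases the expressive power of a standard GNN.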
arXiv Detail & Related papers (2021-06-12T07:43:51Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on growing graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - Learning to Execute Programs with Instruction Pointer Attention Graph
Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z) - Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum and a six DoF robot arm show GNNs improve on traditional analytic methods as well as learning approaches using fully-connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.