Continuous Neural Algorithmic Planners
- URL: http://arxiv.org/abs/2211.15839v1
- Date: Tue, 29 Nov 2022 00:19:35 GMT
- Title: Continuous Neural Algorithmic Planners
- Authors: Yu He, Petar Veličković, Pietro Liò, Andreea Deac
- Abstract summary: XLVIN is a graph neural network that simulates the value iteration algorithm in deep reinforcement learning agents.
It allows model-free planning without access to privileged information about the environment.
We show how neural algorithmic reasoning can make a measurable impact in higher-dimensional continuous control settings.
- Score: 3.9715120586766584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural algorithmic reasoning studies the problem of learning algorithms with
neural networks, especially with graph architectures. A recent proposal, XLVIN,
reaps the benefits of using a graph neural network that simulates the value
iteration algorithm in deep reinforcement learning agents. It allows model-free
planning without access to privileged information about the environment, which
is usually unavailable. However, XLVIN only supports discrete action spaces,
and is hence not directly applicable to most tasks of real-world interest. We
expand XLVIN to continuous action spaces by discretization, and evaluate
several selective expansion policies to deal with the large planning graphs.
Our proposal, CNAP, demonstrates how neural algorithmic reasoning can make a
measurable impact in higher-dimensional continuous control settings, such as
MuJoCo, bringing gains in low-data settings and outperforming model-free
baselines.
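A key step in extending XLVIN to continuous control is mapping a continuous action space onto a finite action set so that value-iteration-style planning can run over a graph. Below is a minimal sketch of per-dimension grid discretization; the bin count, bin placement, and the helper name are illustrative assumptions, not the paper's exact scheme:

```python
import itertools
import numpy as np

def discretize_action_space(low, high, bins_per_dim):
    """Discretize a continuous box action space into a finite action set.

    Each dimension is split into `bins_per_dim` evenly spaced values; the
    discrete action set is the Cartesian product over dimensions. This is a
    hypothetical sketch of the discretization step, not CNAP's exact scheme.
    """
    grids = [np.linspace(l, h, bins_per_dim) for l, h in zip(low, high)]
    return [np.array(a) for a in itertools.product(*grids)]

# A 2-D action space in [-1, 1]^2 with 3 bins per dimension.
actions = discretize_action_space(low=[-1.0, -1.0], high=[1.0, 1.0], bins_per_dim=3)
print(len(actions))  # 3^2 = 9 discrete actions
```

Note that the discrete action count grows as bins^dims, which is why selective expansion policies over the resulting planning graph become necessary in higher-dimensional settings.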
Related papers
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy for tackling combinatorial optimization (CO) problems involves adopting graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of both GNNs and traditional algorithmic solvers in the realm of CO, there is limited research on their integrated use and on the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the inductive biases of attention and the auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- Proximal Mean Field Learning in Shallow Neural Networks [0.4972323953932129]
We propose a custom learning algorithm for shallow neural networks with a single hidden layer of infinite width.
We realize mean field learning as a computational algorithm, rather than as an analytical tool.
Our algorithm performs gradient descent of the free energy associated with the risk functional.
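The free-energy descent described in this abstract can be illustrated, under simplifying assumptions, with a generic mean-field Langevin particle update: minimizing risk plus an entropy term over the neuron distribution corresponds, at the particle level, to noisy gradient descent. The paper's actual method uses proximal (Wasserstein) steps; the function below is a hypothetical stand-in, and all names in it are assumptions:

```python
import numpy as np

def mean_field_langevin_step(particles, grad_risk, lr=1e-2, beta=100.0, rng=None):
    """One noisy-gradient step on a particle ensemble.

    Descending the free energy F(mu) = risk(mu) + (1/beta) * entropy penalty
    over the neuron distribution mu amounts, per particle, to a gradient step
    on the risk plus Gaussian noise of scale sqrt(2 * lr / beta). This is a
    generic mean-field Langevin sketch, not the paper's proximal algorithm.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=particles.shape)
    return particles - lr * grad_risk(particles) + np.sqrt(2 * lr / beta) * noise

# Example: quadratic risk, so grad_risk is the identity; particles drift toward 0.
p = np.ones((5, 3))
out = mean_field_langevin_step(p, lambda x: x, lr=0.1, beta=1e6,
                               rng=np.random.default_rng(0))
```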
arXiv Detail & Related papers (2022-10-25T10:06:26Z)
- Neural Algorithmic Reasoners are Implicit Planners [17.6650448492151]
We study the class of implicit planners inspired by value iteration.
Our method performs all planning computations in a high-dimensional latent space.
We empirically verify that XLVINs can closely align with value iteration.
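For reference, this is the classical value iteration algorithm that XLVIN is verified to align with, written out on an explicit tabular model. The latent, model-free version in the paper replaces this exact computation with a learned GNN; the code below is the textbook target, not the paper's architecture:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=100):
    """Tabular value iteration: V(s) <- max_a [ R[a,s] + gamma * sum_s' P[a,s,s'] V(s') ].

    P: (A, S, S) transition tensor, R: (A, S) reward table. Repeated Bellman
    backups converge to the optimal state values for a discounted MDP.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V  # (A, S): expected return of taking a in s
        V = Q.max(axis=0)      # greedy backup over actions
    return V

# Two-state chain: state 0 deterministically moves to state 1 (reward 0);
# state 1 self-loops with reward 1. With gamma = 0.5, V = [1, 2].
P = np.zeros((1, 2, 2))
P[0, 0, 1] = 1.0
P[0, 1, 1] = 1.0
R = np.array([[0.0, 1.0]])
V = value_iteration(P, R, gamma=0.5, iters=200)
```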
arXiv Detail & Related papers (2021-10-11T17:29:20Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
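The Bernoulli sampling model referred to above can be sketched as follows: draw a latent position in [0, 1] for each node uniformly at random, then include each edge independently with probability given by the graphon evaluated at the endpoints' positions. The concrete graphon and the helper name below are illustrative assumptions:

```python
import numpy as np

def sample_graph_from_graphon(W, n, rng):
    """Bernoulli-sample an n-node simple graph from a graphon W: [0,1]^2 -> [0,1].

    Latent positions u_i ~ Uniform[0, 1]; edge (i, j), i < j, is included
    independently with probability W(u_i, u_j). Returns a symmetric 0/1
    adjacency matrix with no self-loops.
    """
    u = rng.uniform(size=n)
    probs = W(u[:, None], u[None, :])          # (n, n) edge probabilities
    upper = rng.uniform(size=(n, n)) < probs   # one coin flip per entry
    A = np.triu(upper, 1)                      # keep strict upper triangle
    return (A | A.T).astype(int)               # symmetrize

# Example graphon: W(x, y) = (x + y) / 2.
A = sample_graph_from_graphon(lambda x, y: (x + y) / 2, 20,
                              np.random.default_rng(0))
```

Training on such samples of growing size n is what lets the learned GNN transfer to the large graphs drawn from the same graphon.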
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- XLVIN: eXecuted Latent Value Iteration Nets [17.535799331279417]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2020-10-25T16:04:30Z)
- Graph neural induction of value iteration [22.582832003418826]
Value Iteration Networks (VINs) have emerged as a popular method to incorporate planning algorithms within deep reinforcement learning.
We propose XLVINs, which combine recent developments across contrastive self-supervised learning, graph representation learning and neural algorithmic reasoning.
arXiv Detail & Related papers (2020-09-26T14:09:16Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
We propose a graph neural network (GNN) that executes the value iteration (VI) algorithm, across arbitrary environment models, with direct supervision on the intermediate steps of VI.
The results indicate that GNNs are able to model value iteration accurately, recovering favourable metrics and policies across a variety of out-of-distribution tests.
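The value iteration update that such a GNN learns to execute can be written as max-aggregation message passing over the environment graph. The hand-coded step below (deterministic-style per-edge backups, with hypothetical names) is the supervision target; the paper's contribution is a learned network that imitates it:

```python
import numpy as np

def vi_message_passing_step(V, edges, gamma=0.9):
    """One value-iteration step as max-aggregation message passing.

    `edges` is a list of (src, dst, reward, prob) tuples over the environment
    graph; each node's new value is the max over its outgoing edges of
    reward + gamma * prob * V[dst]. This per-edge backup is a simplification
    of the general Bellman update, shown here as an illustrative target.
    """
    new_V = np.full_like(V, -np.inf)
    for s, sp, r, p in edges:
        new_V[s] = max(new_V[s], r + gamma * p * V[sp])
    return new_V

# Three-node graph: 0 -> 1 (reward 1), 1 -> 2 (reward 2), 2 -> 2 (reward 0).
edges = [(0, 1, 1.0, 1.0), (1, 2, 2.0, 1.0), (2, 2, 0.0, 1.0)]
step1 = vi_message_passing_step(np.zeros(3), edges)
```

Supervising the GNN on each intermediate V (rather than only the converged values) is what the abstract means by direct supervision on the intermediate steps of VI.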
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, some graph applications, such as graph compression and edge partitioning, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.