Learning to plan with uncertain topological maps
- URL: http://arxiv.org/abs/2007.05270v1
- Date: Fri, 10 Jul 2020 09:28:30 GMT
- Title: Learning to plan with uncertain topological maps
- Authors: Edward Beeching and Jilles Dibangoye and Olivier Simonin and Christian
Wolf
- Abstract summary: We train an agent to navigate in 3D environments using a hierarchical strategy including a graph based planner and a local policy.
By performing an empirical analysis of our method in simulated photo-realistic 3D environments, we demonstrate that the inclusion of visual features in the learned neural planner outperforms classical symbolic solutions for graph based planning.
- Score: 14.774543742438176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We train an agent to navigate in 3D environments using a hierarchical
strategy including a high-level graph based planner and a local policy. Our
main contribution is a data driven learning based approach for planning under
uncertainty in topological maps, requiring an estimate of shortest paths in
valued graphs with a probabilistic structure. Whereas classical symbolic
algorithms achieve optimal results on noise-less topologies, or optimal results
in a probabilistic sense on graphs with probabilistic structure, we aim to show
that machine learning can overcome missing information in the graph by taking
into account rich high-dimensional node features, for instance visual
information available at each location of the map. Compared to purely learned
neural white box algorithms, we structure our neural model with an inductive
bias for dynamic programming based shortest path algorithms, and we show that a
particular parameterization of our neural model corresponds to the Bellman-Ford
algorithm. By performing an empirical analysis of our method in simulated
photo-realistic 3D environments, we demonstrate that the inclusion of visual
features in the learned neural planner outperforms classical symbolic solutions
for graph based planning.
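As context for the Bellman-Ford correspondence claimed in the abstract, the snippet below is a minimal sketch (not the authors' implementation) of the classical (min, +) relaxation that serves as the planner's inductive bias; the paper's neural model replaces these hand-coded updates with learned functions that can also consume high-dimensional node features such as visual embeddings. The graph used here is an illustrative toy example.

```python
# Classical Bellman-Ford relaxation over a valued graph: the symbolic
# baseline whose structure the learned neural planner mirrors.

def bellman_ford(num_nodes, edges, source):
    """Shortest-path distances from `source` in a graph given as
    (u, v, weight) triples. Runs at most |V| - 1 rounds of edge relaxation."""
    dist = [float("inf")] * num_nodes
    dist[source] = 0.0
    for _ in range(num_nodes - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:  # early exit once estimates stop changing
            break
    return dist

# Example: a tiny 4-node topological map with made-up edge costs.
edges = [(0, 1, 1.0), (1, 2, 2.5), (0, 2, 4.0), (2, 3, 1.0)]
print(bellman_ford(4, edges, source=0))  # [0.0, 1.0, 3.5, 4.5]
```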
Related papers
- The Map Equation Goes Neural: Mapping Network Flows with Graph Neural Networks [0.716879432974126]
Community detection is an essential tool for unsupervised data exploration and revealing the organisational structure of networked systems.
We consider the map equation, a popular information-theoretic objective function for unsupervised community detection, and express it in differentiable tensor form for optimisation through gradient descent.
Our formulation makes the map equation compatible with any neural network architecture, enables end-to-end learning, incorporates node features, and chooses the optimal number of clusters automatically.
arXiv Detail & Related papers (2023-10-02T12:32:18Z) - GraphGLOW: Universal and Generalizable Structure Learning for Graph
Neural Networks [72.01829954658889]
This paper introduces a mathematical definition of this novel problem setting, in which a structure learner must generalise across graphs.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z) - Joint Feature and Differentiable $ k $-NN Graph Learning using Dirichlet
Energy [103.74640329539389]
We propose a deep FS method that simultaneously conducts feature selection and differentiable $ k $-NN graph learning.
We employ Optimal Transport theory to address the non-differentiability issue of learning $ k $-NN graphs in neural networks.
We validate the effectiveness of our model with extensive experiments on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-21T08:15:55Z) - Learning Graph Search Heuristics [48.83557172525969]
We present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data.
Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph size, and can easily be incorporated into an algorithm such as A* at test time (a minimal sketch of this kind of integration appears after this list).
Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 58.5% on average.
arXiv Detail & Related papers (2022-12-07T22:28:00Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Effective and efficient structure learning with pruning and model
averaging strategies [9.023722579074734]
This paper describes an approximate BN structure learning algorithm that combines two novel strategies with hill-climbing search.
The algorithm starts by pruning the search space of graphs, where the pruning strategy can be viewed as an aggressive version of existing pruning strategies.
It then performs model averaging in the hill-climbing search process and moves to the neighbouring graph that maximises the objective function.
arXiv Detail & Related papers (2021-12-01T10:35:34Z) - Learning to Learn Graph Topologies [27.782971146122218]
We learn a mapping from node data to the graph structure based on the idea of learning to optimise (L2O).
The model is trained in an end-to-end fashion with pairs of node data and graph samples.
Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties.
arXiv Detail & Related papers (2021-10-19T08:42:38Z) - Goal Agnostic Planning using Maximum Likelihood Paths in Hypergraph
World Models [1.370633147306388]
We present a hypergraph-based machine learning algorithm, a data-structure-driven maintenance method, and a planning algorithm based on a probabilistic application of Dijkstra's algorithm.
We prove that the algorithm determines optimal solutions within the problem space, derive mathematical bounds on learning performance, and supply a mathematical model of system state progression through time.
arXiv Detail & Related papers (2021-10-18T16:22:33Z) - Hallucinative Topological Memory for Zero-Shot Visual Planning [86.20780756832502]
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline.
Most previous works on VP approached the problem by planning in a learned latent space, resulting in low-quality visual plans.
Here, we propose a simple VP method that plans directly in image space and displays competitive performance.
arXiv Detail & Related papers (2020-02-27T18:54:42Z) - Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partition, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
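As referenced in the PHIL entry above, the sketch below illustrates how a learned distance estimator could be plugged into A* as a heuristic at test time. This is an assumption-laden illustration, not PHIL's code: `learned_heuristic` is a hypothetical placeholder for the paper's embedding-based estimator, and any callable mapping (node, goal) to a non-negative estimate would fit the same slot.

```python
# Minimal A* with a pluggable heuristic; a learned estimator such as PHIL's
# would replace `learned_heuristic` below.

import heapq

def a_star(graph, start, goal, heuristic):
    """`graph` maps each node to a list of (neighbor, cost) pairs."""
    frontier = [(heuristic(start, goal), 0.0, start)]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best_cost.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, step in graph[node]:
            new_cost = cost + step
            if new_cost < best_cost.get(nbr, float("inf")):
                best_cost[nbr] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nbr, goal), new_cost, nbr))
    return float("inf")

# Placeholder heuristic; a learned one would compare node embeddings instead.
def learned_heuristic(node, goal):
    return 0.0  # admissible fallback, which reduces A* to Dijkstra

graph = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 2.5)], 2: [(3, 1.0)], 3: []}
print(a_star(graph, 0, 3, learned_heuristic))  # 4.5
```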
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.