Multi-Robot Active Mapping via Neural Bipartite Graph Matching
- URL: http://arxiv.org/abs/2203.16319v2
- Date: Fri, 1 Apr 2022 11:03:23 GMT
- Title: Multi-Robot Active Mapping via Neural Bipartite Graph Matching
- Authors: Kai Ye, Siyan Dong, Qingnan Fan, He Wang, Li Yi, Fei Xia, Jue Wang,
Baoquan Chen
- Abstract summary: We study the problem of multi-robot active mapping, which aims for complete scene map construction in minimum time steps.
The key to this problem lies in the goal position estimation to enable more efficient robot movements.
We propose a novel algorithm, namely NeuralCoMapping, which takes advantage of both approaches.
- Score: 49.72892929603187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of multi-robot active mapping, which aims for complete
scene map construction in minimum time steps. The key to this problem lies in
the goal position estimation to enable more efficient robot movements. Previous
approaches either choose the frontier as the goal position via a myopic
solution that hinders time efficiency, or maximize the long-term value via
reinforcement learning to directly regress the goal position, but do not
guarantee complete map construction. In this paper, we propose a novel
algorithm, namely NeuralCoMapping, which takes advantage of both approaches. We
reduce the problem to bipartite graph matching, which establishes the node
correspondences between two graphs, denoting robots and frontiers. We introduce
a multiplex graph neural network (mGNN) that learns the neural distance to fill
the affinity matrix for more effective graph matching. We optimize the mGNN
with a differentiable linear assignment layer by maximizing the long-term
values that favor time efficiency and map completeness via reinforcement
learning. We compare our algorithm with several state-of-the-art multi-robot
active mapping approaches and adapted reinforcement-learning baselines.
Experimental results demonstrate the superior performance and exceptional
generalization ability of our algorithm on various indoor scenes and unseen
numbers of robots, when trained on only 9 indoor scenes.
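To make the bipartite-matching reduction concrete, below is a minimal sketch (not the authors' implementation) of assigning robots to frontier goals by solving a linear assignment over a cost matrix. A plain Euclidean cost stands in for the learned neural distance that the mGNN would produce, and `assign_frontiers` is a hypothetical helper.

```python
# Minimal sketch (not the authors' code): assign robots to frontiers by
# solving a bipartite linear assignment over a cost matrix. A Euclidean
# cost stands in for the learned neural distance from the mGNN.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_frontiers(robot_xy: np.ndarray, frontier_xy: np.ndarray) -> dict:
    """Return {robot_index: frontier_index} for (R, 2) robots and (F, 2) frontiers."""
    cost = np.linalg.norm(robot_xy[:, None, :] - frontier_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian-style optimal matching
    return dict(zip(rows.tolist(), cols.tolist()))

# Toy example: 2 robots choosing among 3 frontiers.
robots = np.array([[0.0, 0.0], [5.0, 5.0]])
frontiers = np.array([[1.0, 0.0], [4.0, 6.0], [10.0, 10.0]])
print(assign_frontiers(robots, frontiers))  # {0: 0, 1: 1}
```

In the paper's method, this assignment step is a differentiable linear assignment layer, so the mGNN that fills the affinity matrix can be optimized with reinforcement learning on long-term values.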
Related papers
- Bigraph Matching Weighted with Learnt Incentive Function for Multi-Robot
Task Allocation [5.248564173595024]
This paper develops a Graph Reinforcement Learning framework to learn the robustness or incentives for a bipartite graph matching approach to Multi-Robot Task Allocation.
The performance of this new bigraph matching approach, augmented with a GRL-derived incentive, is found to be on par with the original bigraph matching approach.
arXiv Detail & Related papers (2024-03-11T19:55:08Z)
- Learning the Geodesic Embedding with Graph Neural Networks [22.49236293942187]
We present GeGnn, a learning-based method for computing the approximate geodesic distance between two arbitrary points on discrete polyhedral surfaces.
Our key idea is to train a graph neural network to embed an input mesh into a high-dimensional embedding space.
We verify the efficiency and effectiveness of our method on ShapeNet and demonstrate that our method is faster than existing methods by orders of magnitude.
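As a rough illustration of the embedding idea (a sketch under assumptions, not GeGnn's actual decoder): once a trained GNN has mapped each mesh vertex to an embedding vector, a geodesic-distance query reduces to a cheap per-pair vector operation, independent of mesh resolution.

```python
# Hypothetical sketch: approximate the geodesic distance between two mesh
# vertices as a distance between their learned embeddings. GeGnn's real
# decoder may combine the pair of embeddings differently.
import numpy as np

def approx_geodesic(embeddings: np.ndarray, i: int, j: int) -> float:
    """embeddings: (V, D) per-vertex features produced by a trained GNN."""
    return float(np.linalg.norm(embeddings[i] - embeddings[j]))

emb = np.random.rand(1000, 64)  # stand-in for learned per-vertex embeddings
print(approx_geodesic(emb, 3, 517))  # O(D) per query, independent of mesh size
```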
arXiv Detail & Related papers (2023-09-11T16:54:34Z)
- ReVoLT: Relational Reasoning and Voronoi Local Graph Planning for Target-driven Navigation [1.0896567381206714]
Embodied AI is an inevitable trend that emphasizes the interaction between intelligent entities and the real world.
Recent works focus on exploiting layout relationships via graph neural networks (GNNs).
We decouple this task and propose ReVoLT, a hierarchical framework.
arXiv Detail & Related papers (2023-01-06T05:19:56Z)
- Learning Graph Search Heuristics [48.83557172525969]
We present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data.
Our function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph sizes, and can be easily incorporated in an algorithm such as A* at test time.
Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 58.5% on average.
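For context on how such a learned heuristic is consumed, here is a minimal A* sketch with a pluggable heuristic; `learned_h` is a hypothetical stand-in for a PHIL-style network that estimates node-to-goal distance, not the paper's implementation.

```python
# Minimal A* with a pluggable (possibly learned) heuristic.
import heapq

def a_star(adj, start, goal, learned_h):
    """adj: {node: [(neighbor, edge_cost), ...]}; learned_h(node, goal) -> estimated distance."""
    frontier = [(learned_h(start, goal), 0.0, start)]  # (f = g + h, g, node)
    best_g = {start: 0.0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nxt, cost in adj.get(node, []):
            g_next = g + cost
            if g_next < best_g.get(nxt, float("inf")):
                best_g[nxt] = g_next
                heapq.heappush(frontier, (g_next + learned_h(nxt, goal), g_next, nxt))
    return float("inf")

# Toy usage with a zero heuristic in place of the learned one.
graph = {"a": [("b", 1.0)], "b": [("c", 2.0)], "c": []}
print(a_star(graph, "a", "c", lambda n, g: 0.0))  # 3.0
```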
arXiv Detail & Related papers (2022-12-07T22:28:00Z)
- A Unified Framework for Implicit Sinkhorn Differentiation [58.56866763433335]
We propose an algorithm that obtains analytical gradients of a Sinkhorn layer via implicit differentiation.
We show that it is computationally more efficient, particularly when resources like GPU memory are scarce.
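For orientation, the forward pass of a Sinkhorn layer is just alternating row/column normalisation of an affinity matrix; the sketch below shows only that iteration, not the paper's implicit differentiation of it, and the names are illustrative.

```python
# Forward Sinkhorn iteration (illustrative only): turn an affinity matrix
# into an approximately doubly-stochastic soft assignment. The cited paper
# concerns differentiating this layer implicitly, which is not shown here.
import numpy as np

def sinkhorn(affinity: np.ndarray, n_iters: int = 50, tau: float = 0.1) -> np.ndarray:
    P = np.exp(affinity / tau)
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # normalise rows
        P = P / P.sum(axis=0, keepdims=True)  # normalise columns
    return P

P = sinkhorn(np.random.rand(4, 4))
print(P.sum(axis=1), P.sum(axis=0))  # both approach vectors of ones
```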
arXiv Detail & Related papers (2022-05-13T14:45:31Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Neural Weighted A*: Learning Graph Costs and Heuristics with Differentiable Anytime A* [12.117737635879037]
Recent works related to data-driven planning aim at learning either cost functions or planner functions, but not both.
We propose Neural Weighted A*, a differentiable anytime planner able to produce improved representations of planar maps as graph costs and heuristics.
We experimentally show the validity of our claims by testing Neural Weighted A* against several baselines, introducing a novel, tile-based navigation dataset.
arXiv Detail & Related papers (2021-05-04T13:17:30Z)
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of the Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network as the predictive model.
Our method requires a much smaller number of communication rounds in theory.
Experiments on several benchmark datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.