Motion Planning Transformers: One Model to Plan Them All
- URL: http://arxiv.org/abs/2106.02791v1
- Date: Sat, 5 Jun 2021 04:29:16 GMT
- Title: Motion Planning Transformers: One Model to Plan Them All
- Authors: Jacob J. Johnson, Linjun Li, Ahmed H. Qureshi, and Michael C. Yip
- Abstract summary: We propose a transformer-based approach for efficiently solving complex motion planning problems.
Our approach first identifies regions on the map using transformers to provide attention to map areas likely to include the best path, and then applies local planners to generate the final collision-free path.
- Score: 15.82728888674882
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformers have become the powerhouse of natural language processing and
recently found use in computer vision tasks. Their attention mechanism can be
applied effectively in other contexts as well, and in this paper, we propose a
transformer-based approach for efficiently solving complex motion planning
problems. Traditional neural network-based motion planning uses convolutional
networks to encode the planning space, but these methods are limited to fixed
map sizes, which is often unrealistic in real-world settings. Our approach first
identifies regions on the map using transformers to provide attention to map
areas likely to include the best path, and then applies local planners to
generate the final collision-free path. We validate our method on a variety of
randomly generated environments with different map sizes, demonstrating a
reduction in planning complexity while achieving accuracy comparable to
traditional planners.
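The abstract describes a two-stage pipeline: a transformer attends over the occupancy map to flag regions likely to contain the best path, and a local planner then searches only within those regions to produce the collision-free path. The sketch below illustrates that idea under assumptions of our own; the class name `RegionProposalTransformer`, the patch size and model widths, and the grid A* stand-in for the local planner are illustrative and are not the authors' released implementation.

```python
# Minimal sketch of the two-stage idea described above: a transformer scores
# map patches likely to contain the optimal path, and a classical local planner
# searches only inside the selected region. Class names, patch size, model
# widths, and the A* search are illustrative assumptions, not the paper's code.
import heapq
import torch
import torch.nn as nn

class RegionProposalTransformer(nn.Module):
    """Scores fixed-size map patches, so any map whose sides divide by `patch` fits."""
    def __init__(self, patch=8, d_model=128, nhead=4, layers=3):
        super().__init__()
        self.embed = nn.Conv2d(1, d_model, kernel_size=patch, stride=patch)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.score = nn.Linear(d_model, 1)

    def forward(self, occupancy):                     # occupancy: (B, 1, H, W) in {0, 1}
        tokens = self.embed(occupancy)                # (B, d_model, H/p, W/p)
        b, d, h, w = tokens.shape
        tokens = tokens.flatten(2).transpose(1, 2)    # (B, h*w, d_model)
        logits = self.score(self.encoder(tokens))     # (B, h*w, 1)
        return torch.sigmoid(logits).view(b, h, w)    # per-patch "on the best path" score

def astar_in_region(free, region, start, goal):
    """Plain A* on a 4-connected grid, restricted to cells inside the proposed region."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set, came, g = [(h(start), start)], {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                               # reconstruct the path backwards
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < free.shape[0] and 0 <= nxt[1] < free.shape[1]):
                continue
            if not free[nxt] or not region[nxt]:
                continue
            if g[cur] + 1 < g.get(nxt, float("inf")):
                g[nxt], came[nxt] = g[cur] + 1, cur
                heapq.heappush(open_set, (g[nxt] + h(nxt), nxt))
    return None  # no path inside the proposed region; a real system would fall back

# Usage: score patches, upsample the mask to cell resolution, then plan locally.
model = RegionProposalTransformer()
occ = torch.zeros(1, 1, 64, 64)                       # toy 64x64 map, all free space
mask = (model(occ) > 0.5).float().unsqueeze(1)        # (1, 1, 8, 8) patch mask
mask = nn.functional.interpolate(mask, size=(64, 64), mode="nearest")
path = astar_in_region(occ[0, 0] == 0, mask[0, 0].bool(), (0, 0), (63, 63))
```

In a full system the patch scores would presumably be supervised with paths from a classical planner, and a sampling-based local planner would replace the toy grid search; the point here is only the division of labor between the attention stage and the local planner.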
Related papers
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task and Motion Planning (TAMP) is the problem of finding a solution to an automated planning problem that combines discrete task planning with continuous motion planning.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- Potential Based Diffusion Motion Planning [73.593988351275]
We propose a new approach towards learning potential based motion planning.
We train a neural network to capture and learn easily optimizable potentials over motion planning trajectories.
We demonstrate its inherent composability, enabling us to generalize to a multitude of different motion constraints.
arXiv Detail & Related papers (2024-07-08T17:48:39Z)
- Leveraging Scene Embeddings for Gradient-Based Motion Planning in Latent Space [24.95320093765214]
AMP-LS is able to plan in novel, complex scenes while outperforming traditional planning baselines in terms of speed by an order of magnitude.
We show that the resulting system is fast enough to enable closed-loop planning in real-world dynamic scenes.
arXiv Detail & Related papers (2023-03-06T18:49:39Z)
- PlanT: Explainable Planning Transformers via Object-Level Representations [64.93938686101309]
PlanT is a novel approach for planning in the context of self-driving.
PlanT is based on imitation learning with a compact object-level input representation.
Our results indicate that PlanT can focus on the most relevant object in the scene, even when this object is geometrically distant.
arXiv Detail & Related papers (2022-10-25T17:59:46Z)
- Differentiable Spatial Planning using Transformers [87.90709874369192]
We propose Spatial Planning Transformers (SPT), which, given an obstacle map, learn to generate actions by planning over long-range spatial dependencies.
In the setting where the ground truth map is not known to the agent, we leverage pre-trained SPTs in an end-to-end framework.
SPTs outperform prior state-of-the-art differentiable planners across all the setups for both manipulation and navigation tasks.
arXiv Detail & Related papers (2021-12-02T06:48:16Z)
- Neural Motion Planning for Autonomous Parking [6.1805402105389895]
This paper presents a hybrid motion planning strategy that combines a deep generative network with a conventional motion planning method.
The proposed method effectively learns representations of a given state and improves algorithm performance.
arXiv Detail & Related papers (2021-11-12T14:29:38Z)
- Image Stitching Based on Planar Region Consensus [22.303750435673752]
We propose a new image stitching method which stitches images by aligning a set of matched dominant planar regions.
We use rich semantic information directly from RGB images to extract planar image regions with a deep Convolutional Neural Network (CNN).
Our method can deal with different situations and outperforms the state of the art on challenging scenes.
arXiv Detail & Related papers (2020-07-06T13:07:20Z)
- Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation [74.88956115580388]
Planning is performed in a low-dimensional latent state space that embeds images.
Our framework consists of two main components: a Visual Foresight Module (VFM) that generates a visual plan as a sequence of images, and an Action Proposal Network (APN) that predicts the actions between them.
arXiv Detail & Related papers (2020-03-19T18:43:26Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.