Neural Motion Planning for Autonomous Parking
- URL: http://arxiv.org/abs/2111.06739v2
- Date: Tue, 16 Nov 2021 06:46:22 GMT
- Title: Neural Motion Planning for Autonomous Parking
- Authors: Dongchan Kim and Kunsoo Huh
- Abstract summary: This paper presents a hybrid motion planning strategy that combines a deep generative network with a conventional motion planning method.
The proposed method effectively learns the representations of a given state, and shows improvement in terms of algorithm performance.
- Score: 6.1805402105389895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a hybrid motion planning strategy that combines a deep
generative network with a conventional motion planning method. Existing
planning methods such as A* and Hybrid A* are widely used in path planning
tasks because of their ability to determine feasible paths even in complex
environments; however, they have limitations in terms of efficiency. To
overcome these limitations, a path planning algorithm based on a neural
network, namely the neural Hybrid A*, is introduced. This paper proposes using
a conditional variational autoencoder (CVAE) to guide the search algorithm by
exploiting the ability of CVAE to learn information about the planning space
given the information of the parking environment. A non-uniform expansion
strategy is utilized based on a distribution of feasible trajectories learned
in the demonstrations. The proposed method effectively learns the
representations of a given state, and shows improvement in terms of algorithm
performance.
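The guided-search idea in the abstract can be sketched in miniature. The sketch below is an assumption-laden illustration, not the paper's implementation: a hand-coded directional prior (`toward_goal_prior`) stands in for the CVAE decoder, and a toy grid A* expands only a few successors sampled from that prior rather than all neighbours.

```python
import heapq
import math
import random

def guided_astar(start, goal, obstacles, prior, n_samples=3, seed=0):
    """Toy A* on a 4-connected grid with non-uniform expansion: each node
    expands a few successors sampled from `prior` (a stand-in for a CVAE
    decoder conditioned on the environment). The highest-weight move is
    always kept so the toy search still makes progress."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    frontier = [(0, start)]
    g = {start: 0}
    came = {}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        # Non-uniform expansion: sample moves in proportion to the prior.
        weights = [prior(cur, m, goal) for m in moves]
        sampled = set(rng.choices(moves, weights=weights, k=n_samples))
        sampled.add(max(moves, key=lambda m: prior(cur, m, goal)))
        for dx, dy in sampled:
            nxt = (cur[0] + dx, cur[1] + dy)
            if nxt in obstacles:
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, math.inf):
                g[nxt] = ng
                came[nxt] = cur
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(frontier, (ng + h, nxt))
    return None

def toward_goal_prior(cur, move, goal):
    # Hand-coded stand-in for the learned distribution: favour moves
    # that reduce the Manhattan distance to the goal.
    dx, dy = goal[0] - cur[0], goal[1] - cur[1]
    return 3.0 if (move[0] * dx > 0 or move[1] * dy > 0) else 1.0

path = guided_astar((0, 0), (5, 5), set(), toward_goal_prior)
```

In the paper the expansion bias would come from decoding latent samples conditioned on the parking environment; here the sampler merely favours goal-ward moves.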
Related papers
- Potential Based Diffusion Motion Planning [73.593988351275]
We propose a new approach toward learning potential-based motion planning.
We train a neural network to capture and learn easily optimizable potentials over motion planning trajectories.
We demonstrate its inherent composability, enabling us to generalize to a multitude of different motion constraints.
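As a minimal sketch of optimizing a trajectory against a potential, the code below substitutes a hand-coded potential (obstacle repulsion plus smoothness) for the learned network; the function names and all constants are illustrative assumptions.

```python
def potential(traj, obstacle=(0.6, 0.4)):
    """Hand-coded potential: obstacle repulsion plus smoothness.
    In the paper this would be a learned, easily optimizable network."""
    ox, oy = obstacle
    u = 0.0
    for i, (x, y) in enumerate(traj):
        u += 1.0 / (0.1 + (x - ox) ** 2 + (y - oy) ** 2)   # repulsion
        if i > 0:
            px, py = traj[i - 1]
            u += 10.0 * ((x - px) ** 2 + (y - py) ** 2)    # smoothness
    return u

def optimize(traj, steps=200, lr=1e-3, eps=1e-5):
    """Numeric gradient descent on the interior waypoints only;
    the start and goal endpoints stay fixed."""
    traj = [list(p) for p in traj]
    for _ in range(steps):
        for i in range(1, len(traj) - 1):
            for d in (0, 1):
                orig = traj[i][d]
                traj[i][d] = orig + eps
                up = potential(traj)
                traj[i][d] = orig - eps
                down = potential(traj)
                # Central-difference gradient step.
                traj[i][d] = orig - lr * (up - down) / (2 * eps)
    return [tuple(p) for p in traj]

line = [(i / 4, i / 4) for i in range(5)]   # straight line (0,0) -> (1,1)
better = optimize(line)
```

The descent bends the interior waypoints away from the obstacle while keeping the trajectory smooth, lowering the potential relative to the straight line.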
arXiv Detail & Related papers (2024-07-08T17:48:39Z)
- LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning [91.95362946266577]
Path planning is a fundamental scientific problem in robotics and autonomous navigation.
Traditional algorithms like A* and its variants are capable of ensuring path validity but suffer from significant computational and memory inefficiencies as the state space grows.
We propose a new LLM based route planning method that synergistically combines the precise pathfinding capabilities of A* with the global reasoning capability of LLMs.
This hybrid approach aims to enhance pathfinding efficiency in terms of time and space complexity while maintaining the integrity of path validity, especially in large-scale scenarios.
arXiv Detail & Related papers (2024-06-20T01:24:30Z) - Planning as In-Painting: A Diffusion-Based Embodied Task Planning
Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performances in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z) - Learning Coverage Paths in Unknown Environments with Deep Reinforcement Learning [17.69984142788365]
Coverage path planning (CPP) is the problem of finding a path that covers the entire free space of a confined area.
We investigate how suitable reinforcement learning is for this challenging problem.
We propose a computationally feasible egocentric map representation based on frontiers, and a novel reward term based on total variation.
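The summary does not spell out the reward term, but a total-variation penalty over a binary coverage map can be sketched as follows; the function names and the shaping weight are illustrative assumptions. A compact covered region has lower TV (a shorter boundary) than a scattered one.

```python
def total_variation(grid):
    """Sum of absolute differences between 4-neighbours of a binary
    coverage map; low TV means the covered region is compact, without
    ragged edges or holes."""
    tv = 0
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows:
                tv += abs(grid[r][c] - grid[r + 1][c])
            if c + 1 < cols:
                tv += abs(grid[r][c] - grid[r][c + 1])
    return tv

def tv_reward(before, after, weight=0.1):
    # Hypothetical shaping term: reward reductions in boundary length
    # between consecutive coverage maps.
    return weight * (total_variation(before) - total_variation(after))

# A compact 2x2 block versus four isolated covered cells.
compact = [[1, 1, 0, 0],
           [1, 1, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
scattered = [[1, 0, 1, 0],
             [0, 0, 0, 0],
             [1, 0, 1, 0],
             [0, 0, 0, 0]]
```

Moving from the scattered map to the compact one would yield a positive shaping reward under this term.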
arXiv Detail & Related papers (2023-06-29T14:32:06Z) - Integration of Reinforcement Learning Based Behavior Planning With
Sampling Based Motion Planning for Automated Driving [0.5801044612920815]
We propose a method to employ a trained deep reinforcement learning policy for dedicated high-level behavior planning.
To the best of our knowledge, this work is the first to apply deep reinforcement learning in this manner.
arXiv Detail & Related papers (2023-04-17T13:49:55Z) - Optimal Solving of Constrained Path-Planning Problems with Graph
Convolutional Networks and Optimized Tree Search [12.457788665461312]
We propose a hybrid solving planner that combines machine learning models and an optimal solver.
We conduct experiments on realistic scenarios and show that GCN support enables substantial speedup and smoother scaling to harder problems.
arXiv Detail & Related papers (2021-08-02T16:53:21Z) - Trajectory Design for UAV-Based Internet-of-Things Data Collection: A
Deep Reinforcement Learning Approach [93.67588414950656]
In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted Internet-of-Things (IoT) system in a 3D environment.
We present a TD3-based trajectory design for completion time minimization (TD3-TDCTM) algorithm.
Our simulation results show the superiority of the proposed TD3-TDCTM algorithm over three conventional non-learning based baseline methods.
arXiv Detail & Related papers (2021-07-23T03:33:29Z) - Planning for Novelty: Width-Based Algorithms for Common Problems in
Control, Planning and Reinforcement Learning [6.053629733936546]
Width-based algorithms search for solutions through a general definition of state novelty.
These algorithms have been shown to result in state-of-the-art performance in classical planning.
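The novelty test at the core of width-based search can be sketched as a width-1 pruned breadth-first search (IW(1)): a state is kept only if it contributes at least one unseen boolean atom. The state encoding and function names below are illustrative assumptions.

```python
from collections import deque

def iw1(start, successors, atoms, is_goal):
    """IW(1): breadth-first search that prunes any state containing no
    novel atom. `atoms(s)` returns the set of boolean features of state
    s; a state is novel iff it contributes an atom not seen before."""
    seen_atoms = set(atoms(start))
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        s = path[-1]
        if is_goal(s):
            return path
        for nxt in successors(s):
            new = set(atoms(nxt)) - seen_atoms
            if new:  # novel state: keep it and record its atoms
                seen_atoms |= new
                queue.append(path + [nxt])
    return None

# Toy counter domain: states are integers, atoms record the value.
plan = iw1(0,
           lambda s: [s + 1, s - 1],
           lambda s: {('v', s)},
           lambda s: s == 3)
```

On this toy domain the pruning discards every revisit of an already-seen value, so the search stays linear in the number of distinct states.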
arXiv Detail & Related papers (2021-06-09T07:46:19Z)
- Waypoint Planning Networks [66.72790309889432]
We propose a hybrid algorithm based on LSTMs with a local kernel - a classic algorithm such as A*, and a global kernel using a learned algorithm.
We compare WPN against A*, as well as related works including motion planning networks (MPNet) and value iteration networks (VIN).
It is shown that WPN's search space is considerably smaller than that of A*, while still generating near-optimal results.
arXiv Detail & Related papers (2021-05-01T18:02:01Z)
- Experience-Based Heuristic Search: Robust Motion Planning with Deep Q-Learning [0.0]
We show how experiences in the form of a Deep Q-Network can be integrated as an optimal policy in a search algorithm.
Our method may encourage further investigation of the applicability of reinforcement-learning-based planning in the field of self-driving vehicles.
arXiv Detail & Related papers (2021-02-05T12:08:11Z)
- Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.