ParkingE2E: Camera-based End-to-end Parking Network, from Images to Planning
- URL: http://arxiv.org/abs/2408.02061v1
- Date: Sun, 4 Aug 2024 15:20:39 GMT
- Title: ParkingE2E: Camera-based End-to-end Parking Network, from Images to Planning
- Authors: Changze Li, Ziheng Ji, Zhe Chen, Tong Qin, Ming Yang
- Abstract summary: Traditional parking algorithms are usually implemented using rule-based schemes.
Neural-network-based methods tend to be more intuitive and versatile than the rule-based methods.
In this paper, we employ imitation learning to perform end-to-end path planning directly from RGB images by imitating human driving trajectories.
The proposed method achieved an average parking success rate of 87.8% across four different real-world garages.
- Score: 7.034120265476802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous parking is a crucial task in the intelligent driving field. Traditional parking algorithms are usually implemented using rule-based schemes. However, these methods are less effective in complex parking scenarios due to the intricate design of the algorithms. In contrast, neural-network-based methods tend to be more intuitive and versatile than the rule-based methods. By collecting a large amount of expert parking trajectory data and emulating human strategy via learning-based methods, the parking task can be effectively addressed. In this paper, we employ imitation learning to perform end-to-end path planning directly from RGB images by imitating human driving trajectories. The proposed end-to-end approach utilizes a target query encoder to fuse images and target features, and a transformer-based decoder to autoregressively predict future waypoints. We conducted extensive experiments in real-world scenarios, and the results demonstrate that the proposed method achieved an average parking success rate of 87.8% across four different real-world garages. Real-vehicle experiments further validate the feasibility and effectiveness of the method proposed in this paper.
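The abstract describes a pipeline in which camera features and a target-slot query are fused into a shared representation, and a transformer decoder then predicts waypoints autoregressively. The sketch below is not the authors' implementation; it is a minimal PyTorch-style illustration of that decoding loop, with all module names, dimensions, and the continuous (x, y) waypoint head assumed for the example.

```python
# Illustrative sketch only: a transformer decoder that autoregressively regresses
# (x, y) waypoints from fused camera features and a target-slot query. Module names,
# sizes, and the continuous waypoint head are assumptions, not the paper's code.
import torch
import torch.nn as nn


class WaypointDecoder(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4, horizon=30):
        super().__init__()
        self.horizon = horizon
        self.waypoint_embed = nn.Linear(2, d_model)    # embed previously predicted (x, y)
        self.target_embed = nn.Linear(2, d_model)      # embed the target parking-slot position
        self.pos_embed = nn.Embedding(horizon + 1, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)              # regress the next (x, y) waypoint

    def forward(self, image_feats, target_xy):
        """image_feats: (B, N, d_model) fused camera features; target_xy: (B, 2)."""
        bsz = image_feats.size(0)
        # Target query fused with image features forms the decoder memory.
        memory = torch.cat([image_feats, self.target_embed(target_xy).unsqueeze(1)], dim=1)
        waypoints = [torch.zeros(bsz, 2, device=image_feats.device)]  # start token at ego origin
        for _ in range(self.horizon):
            prev = torch.stack(waypoints, dim=1)                      # (B, t, 2)
            steps = torch.arange(prev.size(1), device=prev.device)
            tgt = self.waypoint_embed(prev) + self.pos_embed(steps)
            causal = torch.triu(torch.full((prev.size(1), prev.size(1)), float("-inf"),
                                           device=prev.device), diagonal=1)
            out = self.decoder(tgt, memory, tgt_mask=causal)
            waypoints.append(self.head(out[:, -1]))                   # next waypoint
        return torch.stack(waypoints[1:], dim=1)                      # (B, horizon, 2)


# Usage with random tensors standing in for the camera encoder output.
decoder = WaypointDecoder()
path = decoder(torch.randn(1, 64, 256), torch.tensor([[5.0, -2.0]]))
print(path.shape)  # torch.Size([1, 30, 2])
```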
Related papers
- HOPE: A Reinforcement Learning-based Hybrid Policy Path Planner for Diverse Parking Scenarios [24.25807334214834]
We introduce Hybrid pOlicy Path plannEr (HOPE) to handle diverse and complex parking scenarios.
HOPE integrates a reinforcement learning agent with Reeds-Shepp curves, enabling effective planning across diverse scenarios.
We propose a criterion for categorizing the difficulty level of parking scenarios based on space and obstacle distribution.
arXiv Detail & Related papers (2024-05-31T02:17:51Z)
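The HOPE entry above pairs a reinforcement learning agent with Reeds-Shepp curves. One common way to combine the two (assumed here for illustration, not necessarily HOPE's exact integration) is to attempt an analytic Reeds-Shepp shot to the goal and fall back to the learned policy when no valid curve exists; `reeds_shepp_shot` and `rl_policy` below are hypothetical placeholders.

```python
# Hypothetical pairing of a learned policy with Reeds-Shepp curves: attempt an analytic
# shot to the goal, otherwise apply the RL action. Both functions are placeholders.
import math
import random


def reeds_shepp_shot(pose, goal):
    """Stand-in for a Reeds-Shepp solver: return a path if an analytic, collision-free
    curve to the goal exists, else None. A real solver would enumerate the RS word
    families and collision-check the shortest one."""
    if math.hypot(goal[0] - pose[0], goal[1] - pose[1]) < 3.0:  # pretend a curve exists when close
        return [pose, goal]
    return None


def rl_policy(pose, goal):
    """Stand-in for a trained policy network; returns (steering [rad], velocity [m/s])."""
    return random.uniform(-0.5, 0.5), 1.0


def hybrid_step(pose, goal):
    path = reeds_shepp_shot(pose, goal)
    if path is not None:
        return ("follow_rs_path", path)                 # analytic curve found: track it
    return ("apply_rl_action", rl_policy(pose, goal))   # otherwise act with the learned policy


print(hybrid_step((0.0, 0.0, 0.0), (10.0, 4.0, math.pi / 2)))
print(hybrid_step((8.5, 3.0, 0.0), (10.0, 4.0, math.pi / 2)))
```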
- Automated Parking Planning with Vision-Based BEV Approach [10.936433798200907]
This paper proposes an improved automated parking algorithm based on the A* algorithm, integrating vehicle kinematic models, function optimization, bidirectional search, and Bezier curve optimization.
Compared to traditional algorithms, this approach reduces computation time on more challenging collision-risk test cases and improves performance on comfort metrics.
arXiv Detail & Related papers (2024-05-24T15:26:09Z)
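The vision-based BEV entry above layers Bezier-curve optimization on top of an A*-based search. As a small illustration of the smoothing stage only (not the paper's full method), the following samples a cubic Bezier segment through assumed control points taken from a coarse path.

```python
# Smoothing-stage illustration only: sample a cubic Bezier segment through assumed
# control points that could be taken from a coarse grid path.
def cubic_bezier(p0, p1, p2, p3, n=20):
    """Sample a cubic Bezier curve defined by four 2-D control points."""
    pts = []
    for i in range(n + 1):
        t = i / n
        b0, b1 = (1 - t) ** 3, 3 * t * (1 - t) ** 2
        b2, b3 = 3 * t ** 2 * (1 - t), t ** 3
        pts.append((b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0],
                    b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]))
    return pts


# Overlapping windows of coarse waypoints could each be smoothed like this one segment.
print(cubic_bezier((0, 0), (2, 0), (3, 2), (5, 3))[:3])
```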
- Integration of Reinforcement Learning Based Behavior Planning With Sampling Based Motion Planning for Automated Driving [0.5801044612920815]
We propose a method to employ a trained deep reinforcement learning policy for dedicated high-level behavior planning.
To the best of our knowledge, this work is the first to apply deep reinforcement learning in this manner.
arXiv Detail & Related papers (2023-04-17T13:49:55Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
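The planner above trains a network that directly outputs acceleration and steering angle. A generic actor sketch with an assumed state size and action bounds (not the paper's architecture) looks like this:

```python
# Generic actor sketch (assumed state size and action bounds; not the paper's network):
# a small MLP whose two tanh-squashed outputs are scaled to acceleration and steering.
import torch
import torch.nn as nn


class DrivingPolicy(nn.Module):
    def __init__(self, state_dim=64, max_accel=3.0, max_steer=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Tanh(),          # squash both outputs to [-1, 1]
        )
        self.scale = torch.tensor([max_accel, max_steer])

    def forward(self, state):
        return self.net(state) * self.scale        # (acceleration [m/s^2], steering [rad])


policy = DrivingPolicy()
print(policy(torch.randn(1, 64)))                  # one action per state in the batch
```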
- Model-based Decision Making with Imagination for Autonomous Parking [50.41076449007115]
The proposed algorithm consists of three parts: an imaginative model for anticipating results before parking, an improved rapidly-exploring random tree (RRT), and a path smoothing module.
Our algorithm is based on a real kinematic vehicle model, which makes it more suitable for application on real autonomous cars.
To evaluate the algorithm's effectiveness, we compared it with traditional RRT in three different parking scenarios.
arXiv Detail & Related papers (2021-08-25T18:24:34Z)
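The parking planner above improves on RRT with an imaginative model, a kinematic vehicle model, and path smoothing. For context, here is only the bare tree-growing loop of a plain 2-D RRT with a made-up obstacle layout; none of the paper's improvements are included.

```python
# Bare-bones 2-D RRT loop with a made-up obstacle; no kinematic model, imagination
# module, or smoothing, i.e. only the baseline the paper improves upon.
import math
import random

random.seed(0)
OBSTACLES = [(4.0, 4.0, 1.5)]                      # circles as (x, y, radius)
START, GOAL, STEP, GOAL_TOL = (0.0, 0.0), (8.0, 8.0), 0.8, 1.0


def collision_free(p):
    return all(math.hypot(p[0] - ox, p[1] - oy) > r for ox, oy, r in OBSTACLES)


def steer(a, b):
    d = math.hypot(b[0] - a[0], b[1] - a[1])
    if d <= STEP:
        return b
    return (a[0] + STEP * (b[0] - a[0]) / d, a[1] + STEP * (b[1] - a[1]) / d)


nodes, parent = [START], {START: None}
for _ in range(2000):
    sample = GOAL if random.random() < 0.1 else (random.uniform(0, 10), random.uniform(0, 10))
    nearest = min(nodes, key=lambda n: math.hypot(n[0] - sample[0], n[1] - sample[1]))
    new = steer(nearest, sample)
    if collision_free(new):
        nodes.append(new)
        parent[new] = nearest
        if math.hypot(new[0] - GOAL[0], new[1] - GOAL[1]) < GOAL_TOL:
            break

# Walk the tree back from the last added node (near the goal if the loop broke early).
path, node = [], nodes[-1]
while node is not None:
    path.append(node)
    node = parent[node]
print(f"{len(path)} tree nodes from goal region back to start")
```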
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Real-world Ride-hailing Vehicle Repositioning using Deep Reinforcement Learning [52.2663102239029]
We present a new practical framework based on deep reinforcement learning and decision-time planning for real-world vehicle repositioning on ride-hailing platforms.
Our approach learns the ride-based state-value function using a batch training algorithm with deep value networks.
We benchmark our algorithm against baselines in a ride-hailing simulation environment to demonstrate its superiority in improving income efficiency.
arXiv Detail & Related papers (2021-03-08T05:34:05Z)
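The repositioning framework above learns a state-value function offline and queries it at decision time. A toy tabular analogue (zones, rewards, and the TD(0) update are invented for illustration; the paper uses deep value networks and batch training at scale) is sketched below.

```python
# Toy tabular analogue: fit zone values with TD(0) over a logged batch, then reposition
# an idle vehicle toward the highest-value zone. Zones, rewards, and transitions are
# invented; the paper uses deep value networks and batch training at a much larger scale.
ZONES = ["airport", "downtown", "suburb"]
value = {z: 0.0 for z in ZONES}
GAMMA, ALPHA = 0.9, 0.1

# Logged (zone, trip reward, next zone) transitions from historical trips.
batch = [("suburb", 1.0, "downtown"), ("downtown", 5.0, "airport"), ("airport", 8.0, "downtown")]
for _ in range(200):                               # replay the batch until V(zone) settles
    for z, r, nz in batch:
        value[z] += ALPHA * (r + GAMMA * value[nz] - value[z])

# Decision-time repositioning: send the idle vehicle to the most valuable zone.
print(max(ZONES, key=value.get), {z: round(v, 1) for z, v in value.items()})
```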
- Experience-Based Heuristic Search: Robust Motion Planning with Deep Q-Learning [0.0]
We show how experiences in the form of a Deep Q-Network can be integrated as an optimal policy into a search algorithm.
Our method may encourage further investigation of the applicability of reinforcement-learning-based planning in the field of self-driving vehicles.
arXiv Detail & Related papers (2021-02-05T12:08:11Z)
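The entry above integrates a Deep Q-Network into a search algorithm. A minimal analogue on a small grid is a best-first search whose priority comes from a learned state-value estimate; here `learned_value` is a hand-written stand-in for where a trained DQN would be queried.

```python
# Best-first grid search whose priority comes from a "learned" value estimate; the
# `learned_value` stand-in marks where a trained Deep Q-Network would be queried.
import heapq

GOAL = (5, 5)


def learned_value(state):
    """Placeholder for max_a Q(state, a) from a trained DQN."""
    return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))


def search(start):
    frontier = [(-learned_value(start), start)]    # higher value => expanded sooner
    came_from = {start: None}
    while frontier:
        _, s = heapq.heappop(frontier)
        if s == GOAL:
            break
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (s[0] + dx, s[1] + dy)
            if 0 <= nxt[0] <= 6 and 0 <= nxt[1] <= 6 and nxt not in came_from:
                came_from[nxt] = s
                heapq.heappush(frontier, (-learned_value(nxt), nxt))
    path, node = [], GOAL
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]


print(search((0, 0)))
```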
- The Importance of Prior Knowledge in Precise Multimodal Prediction [71.74884391209955]
Roads have well defined geometries, topologies, and traffic rules.
In this paper we propose to incorporate structured priors as a loss function.
We demonstrate the effectiveness of our approach on real-world self-driving datasets.
arXiv Detail & Related papers (2020-06-04T03:56:11Z)
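The last entry incorporates structured priors such as road geometry and traffic rules as a loss term. A toy version, assuming a drivable corridor of |y| <= 2 m and a simple penalty weight (the paper's priors are richer), could look like:

```python
# Toy structured-prior loss: ordinary waypoint regression plus a penalty when predicted
# points leave an assumed drivable corridor |y| <= 2 m. Corridor and weight are made up;
# the paper's geometric and rule-based priors are richer.
import torch


def prior_loss(pred, target, corridor_halfwidth=2.0, w_prior=0.5):
    reg = torch.nn.functional.mse_loss(pred, target)                  # imitation / regression term
    off_road = torch.relu(pred[..., 1].abs() - corridor_halfwidth)    # lateral corridor violation
    return reg + w_prior * off_road.mean()


pred = torch.tensor([[[1.0, 0.5], [2.0, 2.8]]])     # second waypoint drifts out of the corridor
target = torch.tensor([[[1.0, 0.2], [2.0, 1.0]]])
print(prior_loss(pred, target))
```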
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.