Adaptive Decision Making at the Intersection for Autonomous Vehicles Based on Skill Discovery
- URL: http://arxiv.org/abs/2207.11724v1
- Date: Sun, 24 Jul 2022 11:56:45 GMT
- Title: Adaptive Decision Making at the Intersection for Autonomous Vehicles Based on Skill Discovery
- Authors: Xianqi He, Lin Yang, Chao Lu, Zirui Li, Jianwei Gong
- Abstract summary: In urban environments, the complex and uncertain intersection scenarios are challenging for autonomous driving.
To ensure safety, it is crucial to develop an adaptive decision making system that can handle the interaction with other vehicles.
We propose a hierarchical framework that can autonomously accumulate and reuse knowledge.
- Score: 13.134487965031667
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In urban environments, the complex and uncertain intersection scenarios are
challenging for autonomous driving. To ensure safety, it is crucial to develop
an adaptive decision making system that can handle the interaction with other
vehicles. Manually designed model-based methods are reliable in common
scenarios but become unreliable in uncertain environments, which has motivated
learning-based methods, especially reinforcement learning (RL). However,
current RL methods need retraining when the scenarios change: they cannot
reuse accumulated knowledge and forget what they have learned when new
scenarios are given. To solve this problem,
we propose a hierarchical framework that can autonomously accumulate and reuse
knowledge. The proposed method combines the idea of motion primitives (MPs)
with hierarchical reinforcement learning (HRL). It decomposes complex problems
into multiple basic subtasks to reduce the difficulty. The proposed method and
other baseline methods are tested in a challenging intersection scenario based
on the CARLA simulator. The intersection scenario contains three different
subtasks that can reflect the complexity and uncertainty of real traffic flow.
After offline learning and testing, the proposed method is shown to achieve the
best performance among all the tested methods.
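The core idea of combining motion primitives with hierarchical RL can be sketched as a high-level policy that selects a skill (motion primitive), which then produces low-level controls for a fixed horizon. The following is a minimal illustrative sketch; the primitive names, the toy Q-table, and the control tuples are assumptions for illustration, not the paper's actual implementation.

```python
import random

# Each motion primitive (skill) maps a low-level timestep to a control tuple
# (throttle, steer). Real MPs would be learned or parameterized controllers.
MOTION_PRIMITIVES = {
    "go_straight": lambda t: (0.6, 0.0),
    "yield":       lambda t: (0.0, 0.0),
    "turn_left":   lambda t: (0.4, -0.3),
}

class HighLevelPolicy:
    """Chooses a skill per decision step; Q-values would be learned with HRL."""
    def __init__(self):
        self.q = {("approaching", mp): 0.0 for mp in MOTION_PRIMITIVES}

    def select(self, state, epsilon=0.0):
        if random.random() < epsilon:  # occasional exploration
            return random.choice(list(MOTION_PRIMITIVES))
        # Greedy choice over learned skill values for this state
        return max(MOTION_PRIMITIVES, key=lambda mp: self.q[(state, mp)])

def rollout_skill(name, horizon=5):
    """Execute the chosen motion primitive for a fixed low-level horizon."""
    primitive = MOTION_PRIMITIVES[name]
    return [primitive(t) for t in range(horizon)]

policy = HighLevelPolicy()
policy.q[("approaching", "yield")] = 1.0  # pretend learning preferred yielding
skill = policy.select("approaching")
controls = rollout_skill(skill)
print(skill, controls[0])
```

Decomposing the intersection task this way means the high-level policy only reasons over a small discrete set of subtask skills, while each skill can be reused unchanged when the scenario changes.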
Related papers
- CIMRL: Combining IMitation and Reinforcement Learning for Safe Autonomous Driving [45.05135725542318]
The Combining IMitation and Reinforcement Learning (CIMRL) approach enables training driving policies in simulation by leveraging imitative motion priors and safety constraints.
By combining RL and imitation, we demonstrate our method achieves state-of-the-art results in closed loop simulation and real world driving benchmarks.
arXiv Detail & Related papers (2024-06-13T07:31:29Z)
- HOPE: A Reinforcement Learning-based Hybrid Policy Path Planner for Diverse Parking Scenarios [24.25807334214834]
We introduce Hybrid pOlicy Path plannEr (HOPE) to handle diverse and complex parking scenarios.
HOPE integrates a reinforcement learning agent with Reeds-Shepp curves, enabling effective planning across diverse scenarios.
We propose a criterion for categorizing the difficulty level of parking scenarios based on space and obstacle distribution.
arXiv Detail & Related papers (2024-05-31T02:17:51Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- NeurIPS 2022 Competition: Driving SMARTS [60.948652154552136]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Behavior Planning at Urban Intersections through Hierarchical Reinforcement Learning [25.50973559614565]
In this work, we propose a behavior planning structure based on reinforcement learning (RL) which is capable of performing autonomous vehicle behavior planning with a hierarchical structure in simulated urban environments.
Our algorithms outperform rule-based methods on elective decisions, such as when to turn left between vehicles approaching from the opposite direction, or whether to change lanes when approaching an intersection that is blocked or delayed in front of the ego car.
Results also show that the proposed method converges to an optimal policy faster than traditional RL methods.
arXiv Detail & Related papers (2020-11-09T19:23:26Z)
- An Online Method for A Class of Distributionally Robust Optimization with Non-Convex Objectives [54.29001037565384]
We propose a practical online method for solving a class of online distributionally robust optimization (DRO) problems.
Our studies demonstrate important applications in machine learning for improving the robustness of networks.
arXiv Detail & Related papers (2020-06-17T20:19:25Z)
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal describing the task, but they are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
- Lane-Merging Using Policy-based Reinforcement Learning and Post-Optimization [0.0]
We combine policy-based reinforcement learning with local optimization to foster and synthesize the best of the two methodologies.
We evaluate the proposed method using lane-change scenarios with a varying number of vehicles.
arXiv Detail & Related papers (2020-03-06T12:57:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.