CADRE: A Cascade Deep Reinforcement Learning Framework for Vision-based
Autonomous Urban Driving
- URL: http://arxiv.org/abs/2202.08557v2
- Date: Wed, 19 Apr 2023 15:24:35 GMT
- Authors: Yinuo Zhao, Kun Wu, Zhiyuan Xu, Zhengping Che, Qi Lu, Jian Tang, Chi
Harold Liu
- Abstract summary: Vision-based autonomous urban driving in dense traffic is quite challenging due to the complicated urban environment and the dynamics of the driving behaviors.
We present a novel CAscade Deep REinforcement learning framework, CADRE, to achieve model-free vision-based autonomous urban driving.
- Score: 43.269130988225605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision-based autonomous urban driving in dense traffic is quite challenging
due to the complicated urban environment and the dynamics of the driving
behaviors. Widely-applied methods either heavily rely on hand-crafted rules or
learn from limited human experience, which makes them hard to generalize to
rare but critical scenarios. In this paper, we present a novel CAscade Deep
REinforcement learning framework, CADRE, to achieve model-free vision-based
autonomous urban driving. In CADRE, to derive representative latent features
from raw observations, we first train a Co-attention Perception Module (CoPM)
offline; it leverages the co-attention mechanism to learn the
inter-relationships between the visual and control information from a
pre-collected driving dataset. Cascaded with the frozen CoPM, we then present
an efficient distributed proximal policy optimization (PPO) framework to learn
the driving policy online under the guidance of specially designed reward
functions. We perform a comprehensive empirical study with the CARLA NoCrash
benchmark as well as specific obstacle avoidance scenarios in autonomous urban
driving tasks. The experimental results demonstrate the effectiveness of CADRE
and its superiority over the state of the art by a wide margin.
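The cascade described above, a frozen, offline-trained perception module feeding latent features to a policy trained online with PPO's clipped surrogate objective, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the class and parameter names (FrozenCoPM, latent_dim) are hypothetical, the encoder is a stand-in for the co-attention module, and only the standard PPO clipping term is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

class FrozenCoPM:
    """Stand-in for the offline-trained Co-attention Perception Module.
    Its weights are fixed after pre-training; during policy learning it
    only maps raw observations to latent features (no gradients flow back)."""
    def __init__(self, obs_dim, latent_dim):
        self.W = rng.standard_normal((obs_dim, latent_dim)) * 0.1  # frozen
    def encode(self, obs):
        return np.tanh(obs @ self.W)

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from standard PPO: the probability
    ratio pi_new/pi_old is clipped to [1-eps, 1+eps] before weighting
    the advantage, which bounds the size of each policy update."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1 - eps, 1 + eps) * advantage)

copm = FrozenCoPM(obs_dim=8, latent_dim=4)
obs = rng.standard_normal(8)
z = copm.encode(obs)                  # latent features consumed by the policy
ratio = np.array([0.5, 1.0, 1.5])     # example pi_new / pi_old ratios
adv = np.ones(3)
print(ppo_clip_objective(ratio, adv)) # clipping caps the 1.5 ratio at 1.2
```

In CADRE the distributed aspect would amount to many actors collecting rollouts through the shared frozen CoPM in parallel; the clipping shown here is what keeps each online policy update stable.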
Related papers
- Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board a real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- GINK: Graph-based Interaction-aware Kinodynamic Planning via Reinforcement Learning for Autonomous Driving [10.782043595405831]
There are many challenges in applying deep reinforcement learning (DRL) to autonomous driving in a structured environment such as an urban area.
In this paper, we suggest a new framework that effectively combines graph-based intention representation and reinforcement learning for dynamic planning.
The experiments show the state-of-the-art performance of our approach compared to the existing baselines.
arXiv Detail & Related papers (2022-06-03T10:37:25Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
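A control barrier function certifies safety by keeping a barrier value h(x) non-negative along the trajectory; a differentiable version lets this constraint sit inside an end-to-end trained network. The following 1-D sketch is purely illustrative and is not the paper's dCBF layer: it shows the basic discrete-time CBF idea of clamping a nominal command so the barrier decays no faster than a chosen rate. All names and constants (x_max, alpha, dt) are assumptions for the example.

```python
def cbf_filter(x, u_nominal, x_max=1.0, alpha=0.5, dt=0.1):
    """Safety filter for x_next = x + u*dt with barrier h(x) = x_max - x.
    Enforces the discrete CBF condition h(x_next) >= (1 - alpha) * h(x),
    i.e. u*dt <= alpha*h, by clamping the nominal command when needed."""
    h = x_max - x                # barrier value; h >= 0 means "safe"
    u_max = alpha * h / dt       # largest command satisfying the condition
    return min(u_nominal, u_max)

# Near the boundary the aggressive nominal command gets clamped:
print(cbf_filter(0.9, 1.0))   # clamped to 0.5
# Far from the boundary the nominal command passes through unchanged:
print(cbf_filter(0.0, 0.2))   # stays 0.2
```

In the paper's end-to-end setting the analogous filter is differentiable, so the gradient of the driving loss can flow through the safety layer back into the vision network; here the clamp is just the simplest non-learned instance of the same condition.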
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- Affordance-based Reinforcement Learning for Urban Driving [3.507764811554557]
We propose a deep reinforcement learning framework to learn optimal control policy using waypoints and low-dimensional visual representations.
We demonstrate that our agents, when trained from scratch, learn the tasks of lane following, driving around intersections, and stopping in front of other actors or traffic lights, even in the dense traffic setting.
arXiv Detail & Related papers (2021-01-15T05:21:25Z)
- Reinforcement Learning for Autonomous Driving with Latent State Inference and Spatial-Temporal Relationships [46.965260791099986]
We show that explicitly inferring the latent state and encoding spatial-temporal relationships in a reinforcement learning framework can help address this difficulty.
We encode prior knowledge on the latent states of other drivers through a framework that combines the reinforcement learner with a supervised learner.
The proposed framework significantly improves performance in the context of navigating T-intersections compared with state-of-the-art baseline approaches.
arXiv Detail & Related papers (2020-11-09T08:55:12Z)
- Behavioral decision-making for urban autonomous driving in the presence of pedestrians using Deep Recurrent Q-Network [0.0]
Decision making for autonomous driving in urban environments is challenging due to the complexity of the road structure and the uncertainty in the behavior of diverse road users.
In this work, a deep reinforcement learning based decision-making approach for high-level driving behavior is proposed for urban environments in the presence of pedestrians.
The proposed method is evaluated in dense urban scenarios and compared with a rule-based approach; the results show that the proposed DRQN-based driving behavior decision maker outperforms the rule-based approach.
arXiv Detail & Related papers (2020-10-26T08:08:06Z)
- Interpretable End-to-end Urban Autonomous Driving with Latent Deep Reinforcement Learning [32.97789225998642]
We propose an interpretable deep reinforcement learning method for end-to-end autonomous driving.
A sequential latent environment model is introduced and learned jointly with the reinforcement learning process.
Our method is able to provide a better explanation of how the car reasons about the driving environment.
arXiv Detail & Related papers (2020-01-23T18:36:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.