Multi-task Learning with Attention for End-to-end Autonomous Driving
- URL: http://arxiv.org/abs/2104.10753v1
- Date: Wed, 21 Apr 2021 20:34:57 GMT
- Title: Multi-task Learning with Attention for End-to-end Autonomous Driving
- Authors: Keishi Ishihara, Anssi Kanervisto, Jun Miura, Ville Hautamäki
- Abstract summary: We propose a novel multi-task attention-aware network in the conditional imitation learning framework.
This not only improves the success rate on standard benchmarks, but also the ability to react to traffic lights.
- Score: 5.612688040565424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving systems need to handle complex scenarios such as lane
following, avoiding collisions, taking turns, and responding to traffic
signals. In recent years, approaches based on end-to-end behavioral cloning
have demonstrated remarkable performance in point-to-point navigational
scenarios, using a realistic simulator and standard benchmarks. Offline
imitation learning is attractive, as it does not require expensive hand
annotation or interaction with the target environment, but it is difficult to
obtain a reliable system from offline data alone. In addition, existing methods
have not specifically addressed learning to react to traffic lights, which are
a rare occurrence in the training datasets. Inspired by previous work on
multi-task learning and attention modeling, we propose a novel multi-task
attention-aware network in the conditional imitation learning (CIL) framework.
This not only improves the success rate on standard benchmarks, but also the
ability to react to traffic lights, which we demonstrate on those benchmarks.
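The abstract describes a command-conditioned (CIL) network extended with attention and auxiliary tasks, but only at a high level. As a rough illustration of how such a network could be wired, here is a minimal PyTorch-style sketch; the layer sizes, the spatial-attention module, and the choice of a traffic-light head are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class MultiTaskAttentionCIL(nn.Module):
    """Hypothetical CIL-style network: shared image encoder, spatial attention,
    command-conditioned control branches, and an auxiliary task head."""

    def __init__(self, num_commands=4, num_light_states=3):
        super().__init__()
        # Shared convolutional image encoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2), nn.ReLU(),
        )
        # Spatial attention: 1x1 conv producing a per-pixel weight map.
        self.attention = nn.Sequential(nn.Conv2d(128, 1, 1), nn.Sigmoid())
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Measurement (e.g. speed) encoder, as in standard CIL.
        self.speed_fc = nn.Sequential(nn.Linear(1, 64), nn.ReLU())
        # One control branch per high-level command (follow/left/right/straight).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(128 + 64, 128), nn.ReLU(), nn.Linear(128, 3))
            for _ in range(num_commands)
        ])
        # Auxiliary multi-task head, e.g. traffic-light state classification.
        self.light_head = nn.Linear(128, num_light_states)

    def forward(self, image, speed, command):
        feat = self.encoder(image)                   # (B, 128, H, W)
        attn = self.attention(feat)                  # (B, 1, H, W)
        feat = self.pool(feat * attn).flatten(1)     # (B, 128)
        joint = torch.cat([feat, self.speed_fc(speed)], dim=1)
        controls = torch.stack([b(joint) for b in self.branches], dim=1)
        # Select the branch matching each sample's high-level command.
        idx = command.view(-1, 1, 1).expand(-1, 1, 3)
        control = controls.gather(1, idx).squeeze(1)  # steer, throttle, brake
        return control, self.light_head(feat)

# Example forward pass (input resolution chosen arbitrarily for the sketch).
net = MultiTaskAttentionCIL()
ctrl, light_logits = net(torch.randn(2, 3, 88, 200),
                         torch.rand(2, 1),
                         torch.tensor([0, 2]))
```

The point of the sketch is only the combination the abstract names: one shared encoder feeding command-conditioned control branches plus auxiliary heads, with attention applied to the image features; the paper's actual auxiliary tasks and attention design may differ.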
Related papers
- End-to-End Steering for Autonomous Vehicles via Conditional Imitation Co-Learning [1.5020330976600735]
This work introduces the conditional imitation co-learning (CIC) approach to address limitations of the conditional imitation learning (CIL) method.
We propose posing the steering regression problem as classification and use a classification-regression hybrid loss to bridge the gap between the two formulations.
Our model is demonstrated to improve the autonomous driving success rate in unseen environments by 62% on average compared to the CIL method.
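For the classification-regression hybrid idea mentioned above, a minimal sketch could look like the following; the discretized steering bins, the soft-argmax regression term, and the weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_steering_loss(logits, target_steer, bins, alpha=0.5):
    """Illustrative classification-regression hybrid loss for steering.

    logits:       (B, K) scores over K discrete steering bins
    target_steer: (B,)   continuous ground-truth steering angles
    bins:         (K,)   centre value of each steering bin
    """
    # Classification term: cross-entropy against the nearest bin.
    target_bin = torch.argmin((target_steer.unsqueeze(1) - bins).abs(), dim=1)
    ce = F.cross_entropy(logits, target_bin)
    # Regression term: L1 between the expected (soft-argmax) steering under
    # the predicted distribution and the continuous target.
    expected = (logits.softmax(dim=1) * bins).sum(dim=1)
    l1 = F.l1_loss(expected, target_steer)
    return alpha * ce + (1.0 - alpha) * l1

# Example: 21 bins covering steering values in [-1, 1].
bins = torch.linspace(-1.0, 1.0, 21)
loss = hybrid_steering_loss(torch.randn(8, 21), torch.rand(8) * 2 - 1, bins)
```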
arXiv Detail & Related papers (2024-11-25T06:37:48Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, and approach the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving [100.3848723827869]
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
arXiv Detail & Related papers (2023-03-03T08:54:06Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy efficiency of networks with varying traffic conditions by 15%, using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- DQ-GAT: Towards Safe and Efficient Autonomous Driving with Deep Q-Learning and Graph Attention Networks [12.714551756377265]
Traditional planning methods are largely rule-based and scale poorly in complex dynamic scenarios.
We propose DQ-GAT to achieve scalable and proactive autonomous driving.
Our method can better trade off safety and efficiency in both seen and unseen scenarios.
arXiv Detail & Related papers (2021-08-11T04:55:23Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network to predict both the acceleration and the steering angle at each time step.
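As a purely illustrative sketch of a policy network that outputs continuous acceleration and steering at each time step (the state encoding, layer sizes, and tanh squashing are assumptions, not the paper's design):

```python
import torch
import torch.nn as nn

class AccelSteerPolicy(nn.Module):
    """Illustrative continuous-action policy head for intersection driving."""

    def __init__(self, state_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),       # [acceleration, steering]
        )

    def forward(self, state):
        # tanh squashes both actions into [-1, 1]; the environment would then
        # rescale them to physical acceleration and steering-angle ranges.
        return torch.tanh(self.net(state))

policy = AccelSteerPolicy()
action = policy(torch.randn(1, 64))  # fed to a continuous-action RL trainer
```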
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Affordance-based Reinforcement Learning for Urban Driving [3.507764811554557]
We propose a deep reinforcement learning framework to learn optimal control policy using waypoints and low-dimensional visual representations.
We demonstrate that our agents, when trained from scratch, learn the tasks of lane following, driving around intersections, and stopping in front of other actors or traffic lights, even in dense traffic.
arXiv Detail & Related papers (2021-01-15T05:21:25Z)
- Deep Surrogate Q-Learning for Autonomous Driving [17.30342128504405]
We propose Surrogate Q-learning for learning lane-change behavior for autonomous driving.
We show that the architecture leads to a novel replay sampling technique we call Scene-centric Experience Replay.
We also show that our methods enhance real-world applicability of RL systems by learning policies on the real highD dataset.
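Under the assumption that transitions are grouped by the recorded traffic scene they came from, the scene-centric replay idea mentioned above could be sketched roughly as follows; this is a hypothetical structure, not the paper's exact algorithm.

```python
import random
from collections import defaultdict

class SceneCentricReplay:
    """Illustrative scene-centric replay buffer: transitions are grouped by
    scene, and sampling draws whole scenes rather than single transitions."""

    def __init__(self):
        self.scenes = defaultdict(list)   # scene_id -> list of transitions

    def add(self, scene_id, transition):
        # transition: (state, action, reward, next_state, done) for one vehicle
        self.scenes[scene_id].append(transition)

    def sample(self, num_scenes):
        chosen = random.sample(list(self.scenes.keys()),
                               min(num_scenes, len(self.scenes)))
        # One draw yields every vehicle's transition from each sampled scene,
        # i.e. many agent experiences per recorded traffic scene.
        return [t for sid in chosen for t in self.scenes[sid]]
```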
arXiv Detail & Related papers (2020-10-21T19:49:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.