Online Adaptation of Parameters using GRU-based Neural Network with BO
for Accurate Driving Model
- URL: http://arxiv.org/abs/2109.11720v1
- Date: Fri, 24 Sep 2021 03:07:12 GMT
- Title: Online Adaptation of Parameters using GRU-based Neural Network with BO
for Accurate Driving Model
- Authors: Zhanhong Yang, Satoshi Masuda, Michiaki Tatsubori
- Abstract summary: Calibrating a driving model (DM) makes the simulated driving behavior closer to human-driving behavior.
Conventional DM-calibrating methods do not take into account that the parameters in a DM vary while driving.
We propose a DM-calibration method for measuring human driving styles to reproduce real car-following behavior more accurately.
- Score: 0.8433000039153409
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Testing self-driving cars in different areas requires surrounding cars with
correspondingly different driving styles, such as aggressive or conservative ones.
A method of numerically measuring and differentiating human driving styles to
create a virtual driver with a certain driving style is in demand. However,
most methods for measuring human driving styles require thresholds or labels to
classify the driving styles, and some require additional questionnaires for
drivers about their driving attitude. These limitations make such methods
unsuitable for creating a large virtual testing environment. Driving models (DMs) simulate
human driving styles. Calibrating a DM brings the simulated driving behavior
closer to human driving behavior and enables the simulation of human-driven
cars. Conventional DM-calibrating methods do not take into account that the
parameters in a DM vary while driving. These "fixed" calibrating methods cannot
reflect an actual interactive driving scenario. In this paper, we propose a
DM-calibration method for measuring human driving styles to reproduce real
car-following behavior more accurately. The method includes 1) an objective
entropy weight method for measuring and clustering human driving styles, and 2)
online adaptation of DM parameters based on deep learning, combining Bayesian
optimization (BO) and a gated recurrent unit (GRU) neural network. We conducted
experiments to evaluate the proposed method, and the results indicate that it
can be easily used to measure human driving styles. The experiments also showed
that we can calibrate a corresponding DM in a virtual testing environment with
up to 26% more accuracy than with fixed calibration methods.
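
As a rough illustration of the first component, here is a minimal sketch of the objective entropy weight method in its usual formulation, followed by a clustering step. The indicator names and the use of K-means are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def entropy_weights(X):
    """Objective entropy weights for an (n_drivers, n_indicators) matrix.

    Indicators whose values vary more across drivers receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Min-max normalize each indicator column to [0, 1].
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)
    # Column-wise proportions.
    P = X / (X.sum(axis=0) + 1e-12)
    # Shannon entropy per indicator, scaled to [0, 1].
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(n)
    d = 1.0 - E                      # degree of diversification
    return d / d.sum()

# Hypothetical per-driver indicators, e.g. [mean headway, speed std, mean |accel|].
X = np.random.rand(100, 3)
w = entropy_weights(X)
X_norm = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)
score = X_norm @ w                   # weighted driving-style score per driver
style = KMeans(n_clusters=3, n_init=10).fit_predict(score.reshape(-1, 1))
```

The cluster labels could correspond to, say, conservative, normal, and aggressive styles; the number of clusters and the indicator set are choices the paper itself would have to specify.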
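For the second component, the sketch below shows one plausible shape of the GRU part: a recurrent network that maps a short window of car-following features to time-varying DM parameters, with the Intelligent Driver Model (IDM) assumed as the DM. In this reading, BO would calibrate parameters on recorded segments to provide training targets for the GRU; that pipeline, the feature set, and the parameter names are all assumptions, not details from the abstract.

```python
import torch
import torch.nn as nn

class ParamGRU(nn.Module):
    """GRU that maps a window of car-following features to DM parameters."""

    def __init__(self, n_features=4, hidden=64, n_params=3):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_params)

    def forward(self, x):              # x: (batch, time, n_features)
        _, h = self.gru(x)             # h: (num_layers, batch, hidden)
        # Softplus keeps the predicted parameters positive.
        return nn.functional.softplus(self.head(h[-1]))

def idm_accel(v, dv, gap, a_max, b, T, v0=30.0, s0=2.0, delta=4.0):
    """Intelligent Driver Model acceleration (assumed as the DM here).

    v: follower speed, dv: approach rate (v - v_leader), gap: spacing to the leader.
    """
    s_star = s0 + v * T + v * dv / (2.0 * torch.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Hypothetical usage: features per step = [speed, gap, approach rate, acceleration].
model = ParamGRU()
window = torch.randn(8, 50, 4)                     # 5 s windows at 10 Hz, batch of 8
a_max, b_comf, T_headway = model(window).unbind(dim=1)
accel = idm_accel(v=torch.full((8,), 15.0), dv=torch.zeros(8),
                  gap=torch.full((8,), 25.0), a_max=a_max, b=b_comf, T=T_headway)
```

Training would then regress the GRU output against the BO-calibrated parameters (e.g., with an MSE loss), so that at test time the network adapts the DM online without rerunning the optimizer.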
Related papers
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various CF events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Decision Making for Autonomous Driving in Interactive Merge Scenarios via Learning-based Prediction [39.48631437946568]
This paper focuses on the complex task of merging into moving traffic where uncertainty emanates from the behavior of other drivers.
We frame the problem as a partially observable Markov decision process (POMDP) and solve it online with Monte Carlo tree search.
The solution to the POMDP is a policy that performs high-level driving maneuvers, such as giving way to an approaching car, keeping a safe distance from the vehicle in front or merging into traffic.
arXiv Detail & Related papers (2023-03-29T16:12:45Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Exploring the trade off between human driving imitation and safety for traffic simulation [0.34410212782758043]
We show that a trade-off exists between imitating human driving and maintaining safety when learning driving policies.
We propose a multi-objective learning algorithm (MOPPO) that improves both objectives together.
arXiv Detail & Related papers (2022-08-09T14:30:19Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- A Hybrid Rule-Based and Data-Driven Approach to Driver Modeling through Particle Filtering [6.9485501711137525]
We propose a methodology that combines rule-based modeling with data-driven learning.
Our results show that driver models based on our hybrid rule-based and data-driven approach can accurately capture real-world driving behavior.
arXiv Detail & Related papers (2021-08-29T11:07:14Z)
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with these maps.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
- Deep Reinforcement Learning for Human-Like Driving Policies in Collision Avoidance Tasks of Self-Driving Cars [1.160208922584163]
We introduce a model-free, deep reinforcement learning approach to generate automated human-like driving policies.
We study a static obstacle avoidance task on a two-lane highway road in simulation.
We demonstrate that our approach leads to human-like driving policies.
arXiv Detail & Related papers (2020-06-07T18:20:33Z)