Building Safer Autonomous Agents by Leveraging Risky Driving Behavior Knowledge
- URL: http://arxiv.org/abs/2103.10245v1
- Date: Tue, 16 Mar 2021 23:39:33 GMT
- Title: Building Safer Autonomous Agents by Leveraging Risky Driving Behavior Knowledge
- Authors: Ashish Rana, Avleen Malhi
- Abstract summary: This study focuses on systematically creating risk-prone scenarios with heavy traffic and unexpected random behavior in order to train better model-free learning agents.
We generate multiple autonomous driving scenarios by creating new custom Markov Decision Process (MDP) environment iterations in the highway-env simulation package.
We train model-free learning agents with supplementary information from risk-prone driving scenarios and compare their performance with baseline agents.
- Score: 1.52292571922932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulation environments are well suited for learning different driving tasks, such as lane changing, parking, or handling intersections, in an abstract manner. However, these simulation environments often restrict themselves to conservative interaction behavior among vehicles, whereas real driving tasks frequently involve high-risk scenarios in which other drivers do not behave as expected, for reasons such as fatigue or inexperience. Simulation environments typically do not take this information into account when training the navigation agent. In this study, we therefore focus on systematically creating such risk-prone scenarios, with heavy traffic and unexpected random behavior, in order to train better model-free learning agents. We generate multiple autonomous driving scenarios by creating new custom Markov Decision Process (MDP) environment iterations in the highway-env simulation package. The behavior policy is learnt by agents trained with deep reinforcement learning models and is designed to handle collisions and risky randomized driver behavior. We train model-free learning agents with supplementary information from risk-prone driving scenarios and compare their performance with baseline agents. Finally, we causally measure the impact of adding these perturbations to the training process, to precisely account for the performance improvement attained from utilizing the learnings from these scenarios.
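The sketch below illustrates the kind of setup the abstract describes: a highway-env environment reconfigured for heavy traffic and more erratic surrounding drivers, with a model-free DQN agent trained on it. It assumes the highway-env and stable-baselines3 packages; the specific config values and the choice of the AggressiveVehicle behavior class are illustrative assumptions, not the authors' exact settings.
```python
import gymnasium as gym
import highway_env  # noqa: F401 -- importing registers the highway-v0 environment

from stable_baselines3 import DQN

# Heavy-traffic, risk-prone variant of the default highway scenario.
env = gym.make("highway-v0")
env.unwrapped.configure({
    "vehicles_count": 50,     # heavy traffic (assumed value)
    "vehicles_density": 1.5,  # pack surrounding vehicles more tightly (assumed value)
    # Swap the default IDM drivers for a more erratic behavior class
    # shipped with highway-env to create risk-prone interactions:
    "other_vehicles_type": "highway_env.vehicle.behavior.AggressiveVehicle",
    "duration": 40,           # episode length in simulation steps
})
env.reset()  # the new configuration takes effect on reset

# A standard model-free baseline agent.
model = DQN("MlpPolicy", env, learning_rate=5e-4, buffer_size=15_000, verbose=1)
model.learn(total_timesteps=20_000)
model.save("dqn_risky_highway")
```
A baseline agent trained on the unmodified default config can then be compared against this one on the same evaluation episodes.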
Related papers
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- Comprehensive Training and Evaluation on Deep Reinforcement Learning for Automated Driving in Various Simulated Driving Maneuvers [0.4241054493737716]
This study implements, evaluates, and compares two DRL algorithms, Deep Q-Networks (DQN) and Trust Region Policy Optimization (TRPO).
Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance.
arXiv Detail & Related papers (2023-06-20T11:41:01Z)
- Risk-Aware Reward Shaping of Reinforcement Learning Agents for Autonomous Driving [6.613838702441967]
This paper investigates how to use risk-aware reward shaping to improve the training and test performance of RL agents in autonomous driving.
We propose additional reshaped reward terms that encourage exploration and penalize risky driving behaviors (see the sketch below).
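As a rough illustration of this idea, the following wrapper adds shaping terms on top of an environment's native reward. The "crashed" and "speed" info keys match what highway-env reports, but the penalty and bonus terms are assumptions for illustration, not the paper's actual reward design.
```python
import gymnasium as gym


class RiskShapedReward(gym.Wrapper):
    """Hypothetical risk-aware shaping wrapper; not the paper's exact terms."""

    def __init__(self, env, crash_penalty=5.0, speed_bonus=0.1):
        super().__init__(env)
        self.crash_penalty = crash_penalty
        self.speed_bonus = speed_bonus

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # Penalize risky outcomes: highway-env reports collisions via info["crashed"].
        if info.get("crashed", False):
            reward -= self.crash_penalty
        # Mildly encourage progress/exploration via the ego vehicle's speed.
        reward += self.speed_bonus * info.get("speed", 0.0) / 30.0
        return obs, reward, terminated, truncated, info
```
Wrapping is then just `env = RiskShapedReward(gym.make("highway-v0"))` before handing the environment to the learner.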
arXiv Detail & Related papers (2023-06-05T20:10:36Z)
- Discrete Control in Real-World Driving Environments using Deep Reinforcement Learning [2.467408627377504]
We introduce a framework (perception, planning, and control) that transfers real-world driving environments into gaming environments.
We propose variations of existing Reinforcement Learning (RL) algorithms in a multi-agent setting to learn and execute the discrete control in real-world environments.
arXiv Detail & Related papers (2022-11-29T04:24:03Z)
- Exploring the trade off between human driving imitation and safety for traffic simulation [0.34410212782758043]
We show that a trade-off exists between imitating human driving and maintaining safety when learning driving policies.
We propose a multi-objective learning algorithm (MOPPO) that improves both objectives together.
arXiv Detail & Related papers (2022-08-09T14:30:19Z)
- Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation [45.09881984441893]
We propose Symphony, which greatly improves realism by combining conventional policies with a parallel beam search.
Because this refinement alone can harm diversity, Symphony also takes a hierarchical approach, factoring agent behaviour into goal generation and goal conditioning.
Experiments confirm that Symphony agents learn more realistic and diverse behaviour than several baselines.
arXiv Detail & Related papers (2022-05-06T13:21:40Z)
- Causal Imitative Model for Autonomous Driving [85.78593682732836]
We propose Causal Imitative Model (CIM) to address inertia and collision problems.
CIM explicitly discovers the causal model and utilizes it to train the policy.
Our experiments show that our method outperforms previous work in terms of inertia and collision rates.
arXiv Detail & Related papers (2021-12-07T18:59:15Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- Safe Reinforcement Learning via Curriculum Induction [94.67835258431202]
In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly.
Existing safe reinforcement learning methods make an agent rely on priors that let it avoid dangerous situations.
This paper presents an alternative approach inspired by human teaching, where an agent learns under the supervision of an automatic instructor (a minimal curriculum loop is sketched after this entry).
arXiv Detail & Related papers (2020-06-22T10:48:17Z)
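In the spirit of the curriculum-induction entry above, the minimal sketch below trains an agent in stages of increasing traffic density, so that early mistakes occur in easier conditions. The stage values and training budget are illustrative assumptions; the paper's automatic instructor is more sophisticated than a fixed density schedule.
```python
import gymnasium as gym
import highway_env  # noqa: F401 -- importing registers highway-v0

from stable_baselines3 import DQN

env = gym.make("highway-v0")
model = DQN("MlpPolicy", env, verbose=0)

# Easy-to-hard curriculum over traffic density (illustrative stage values).
for density in (0.5, 1.0, 1.5):
    env.unwrapped.configure({"vehicles_density": density})
    # The new density takes effect when the environment next resets,
    # which stable-baselines3 does automatically between episodes.
    model.learn(total_timesteps=10_000, reset_num_timesteps=False)
```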
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above (including all content) and is not responsible for any consequences of its use.