Neurosymbolic Meta-Reinforcement Lookahead Learning Achieves Safe
Self-Driving in Non-Stationary Environments
- URL: http://arxiv.org/abs/2309.02328v1
- Date: Tue, 5 Sep 2023 15:47:40 GMT
- Title: Neurosymbolic Meta-Reinforcement Lookahead Learning Achieves Safe
Self-Driving in Non-Stationary Environments
- Authors: Haozhe Lei and Quanyan Zhu
- Abstract summary: This study introduces an algorithm for online meta-reinforcement learning, employing lookahead symbolic constraints based on Neurosymbolic Meta-Reinforcement Lookahead Learning (NUMERLA).
Experimental results demonstrate that NUMERLA confers on the self-driving agent the capacity for real-time adaptability, leading to safe and self-adaptive driving under non-stationary urban human-vehicle interaction scenarios.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the area of learning-driven artificial intelligence advancement, the
integration of machine learning (ML) into self-driving (SD) technology stands
as an impressive engineering feat. Yet, in real-world applications outside the
confines of controlled laboratory scenarios, the deployment of self-driving
technology assumes a life-critical role, necessitating heightened attention
from researchers towards both safety and efficiency. To illustrate, when a
self-driving model encounters an unfamiliar environment in real-time execution,
the focus must not solely revolve around enhancing its anticipated performance;
equal consideration must be given to ensuring its execution or real-time
adaptation maintains a requisite level of safety. This study introduces an
algorithm for online meta-reinforcement learning, employing lookahead symbolic
constraints based on Neurosymbolic Meta-Reinforcement Lookahead Learning
(NUMERLA). NUMERLA proposes a lookahead updating mechanism that harmonizes the
efficiency of online adaptations with the overarching goal of ensuring
long-term safety. Experimental results demonstrate NUMERLA confers the
self-driving agent with the capacity for real-time adaptability, leading to
safe and self-adaptive driving under non-stationary urban human-vehicle
interaction scenarios.
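To make the lookahead idea concrete, here is a minimal sketch of online adaptation gated by a symbolic safety check over a predicted horizon. It is not the paper's implementation; the `adapt` and `rollout` callables and the gap-based safety predicate are illustrative assumptions.

```python
# Minimal sketch of lookahead-constrained online adaptation (illustrative only,
# not the NUMERLA implementation). `adapt` and `rollout` are hypothetical callables.
import copy

SAFE_GAP_M = 2.0       # assumed symbolic constraint: minimum gap to other road users (metres)
LOOKAHEAD_STEPS = 20   # assumed length of the lookahead horizon

def satisfies_safety(trajectory):
    """Symbolic check: every predicted state keeps at least the minimum gap."""
    return all(state["gap_to_nearest_vehicle"] >= SAFE_GAP_M for state in trajectory)

def lookahead_update(policy, adapt, rollout, recent_experience):
    """Adapt the policy online, but commit the update only if a lookahead
    rollout of the candidate policy satisfies the symbolic safety constraint."""
    candidate = adapt(copy.deepcopy(policy), recent_experience)  # fast online adaptation
    predicted = rollout(candidate, steps=LOOKAHEAD_STEPS)        # model-based lookahead
    return candidate if satisfies_safety(predicted) else policy  # fall back if unsafe
```

The design point this illustrates is the separation of concerns: the adaptation step chases short-term performance, while the commit decision is made only against the lookahead safety predicate.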
Related papers
- A Safe and Efficient Self-evolving Algorithm for Decision-making and Control of Autonomous Driving Systems [19.99282698119699]
Self-evolving autonomous vehicles are expected to cope with unknown scenarios in real-world environments.
Reinforcement learning can self-evolve by learning the optimal policy.
This paper proposes a hybrid Mechanism-Experience-Learning augmented approach.
arXiv Detail & Related papers (2024-08-22T08:05:03Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
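As a loose illustration of the risk-sensitive ingredient mentioned in the entry above, one common way to be pessimistic about epistemic uncertainty is to score actions by a low quantile of an ensemble of value estimates rather than their mean. This is a hedged sketch under that assumption, not the RACER paper's exact formulation:

```python
# Hedged sketch: pessimistic action scoring over an ensemble of Q-estimates.
# The ensemble and quantile level are assumptions, not taken from the paper.
import numpy as np

def risk_sensitive_value(q_ensemble, state, action, quantile=0.1):
    """Score (state, action) by a low quantile across ensemble members,
    so disagreement (epistemic uncertainty) lowers the value."""
    estimates = np.array([q(state, action) for q in q_ensemble])
    return np.quantile(estimates, quantile)

def pick_action(q_ensemble, state, candidate_actions):
    """Greedy choice under the pessimistic score."""
    return max(candidate_actions,
               key=lambda a: risk_sensitive_value(q_ensemble, state, a))
```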
- RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [51.244807332133696]
This paper introduces RACER, a cutting-edge deep learning car-following model to predict Adaptive Cruise Control (ACC) driving behavior.
Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving.
RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
arXiv Detail & Related papers (2023-12-12T06:21:30Z)
- Analyze Drivers' Intervention Behavior During Autonomous Driving -- A VR-incorporated Approach [2.7532019227694344]
This work sheds light on understanding human drivers' intervention behavior involved in the operation of autonomous vehicles.
Experiment environments were implemented in which virtual reality (VR) and traffic micro-simulation are integrated.
Performance indicators such as the probability of intervention and accident rates are defined and used to quantify and compare risk levels.
arXiv Detail & Related papers (2023-12-04T06:36:57Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Deception Game: Closing the Safety-Learning Loop in Interactive Robot Autonomy [7.915956857741506]
Existing safety methods often neglect the robot's ability to learn and adapt at runtime, leading to overly conservative behavior.
This paper proposes a new closed-loop paradigm for synthesizing safe control policies that explicitly account for the robot's evolving uncertainty.
arXiv Detail & Related papers (2023-09-03T20:34:01Z)
- Self-Aware Trajectory Prediction for Safe Autonomous Driving [9.868681330733764]
Trajectory prediction is one of the key components of the autonomous driving software stack.
In this paper, a self-aware trajectory prediction method is proposed.
The proposed method performed well in terms of self-awareness, memory footprint, and real-time performance.
arXiv Detail & Related papers (2023-05-16T03:53:23Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free deep reinforcement learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
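As a rough illustration of the control-barrier-function idea in the entry above, a differentiable penalty can encourage a learned controller to respect a discrete-time barrier condition h(x_next) >= (1 - alpha) * h(x). The following is a minimal sketch under an assumed state layout and barrier choice, not the paper's dCBF layer:

```python
# Illustrative sketch of a differentiable control-barrier-style penalty (assumptions:
# PyTorch, a clearance-based barrier, and that state[..., 0] is the distance to the
# nearest obstacle). Not the paper's dCBF formulation.
import torch

def clearance_barrier(state, margin=2.0):
    """h(x) >= 0 when the vehicle keeps at least `margin` metres of clearance."""
    return state[..., 0] - margin

def cbf_penalty(state, next_state, alpha=0.1):
    """Penalise violations of the discrete-time barrier condition
    h(x_next) - (1 - alpha) * h(x) >= 0; the penalty is zero when it holds."""
    residual = clearance_barrier(next_state) - (1.0 - alpha) * clearance_barrier(state)
    return torch.relu(-residual).mean()
```

Because the penalty is differentiable, it can be added to an end-to-end training loss and minimised by gradient descent alongside the driving objective.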
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances in deep reinforcement learning to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)