Symbolic Imitation Learning: From Black-Box to Explainable Driving Policies
- URL: http://arxiv.org/abs/2309.16025v1
- Date: Wed, 27 Sep 2023 21:03:45 GMT
- Title: Symbolic Imitation Learning: From Black-Box to Explainable Driving Policies
- Authors: Iman Sharifi and Saber Fallah
- Abstract summary: We introduce Symbolic Imitation Learning (SIL) to learn driving policies which are transparent, explainable and generalisable from available datasets.
Our results demonstrate that SIL not only enhances the interpretability of driving policies but also significantly improves their applicability across varied driving situations.
- Score: 5.977871949434069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current methods of imitation learning (IL), primarily based on deep neural
networks, offer efficient means for obtaining driving policies from real-world
data but suffer from significant limitations in interpretability and
generalizability. These shortcomings are particularly concerning in
safety-critical applications like autonomous driving. In this paper, we address
these limitations by introducing Symbolic Imitation Learning (SIL), a
groundbreaking method that employs Inductive Logic Programming (ILP) to learn
driving policies which are transparent, explainable and generalisable from
available datasets. Utilizing the real-world highD dataset, we subject our
method to a rigorous comparative analysis against prevailing
neural-network-based IL methods. Our results demonstrate that SIL not only
enhances the interpretability of driving policies but also significantly
improves their applicability across varied driving situations. Hence, this work
offers a novel pathway to more reliable and safer autonomous driving systems,
underscoring the potential of integrating ILP into the domain of IL.
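To make the contrast with black-box policies concrete, the sketch below shows what an ILP-induced driving rule could look like at runtime. It is a minimal illustration: the predicates, thresholds, and state fields are hypothetical stand-ins, not the rules learned in the paper.

```python
# Minimal sketch: an ILP-style lane-change rule evaluated at runtime.
# All predicates, thresholds, and state fields are hypothetical
# illustrations, not the rules reported in the paper.
from dataclasses import dataclass

@dataclass
class VehicleState:
    front_gap: float       # distance to lead vehicle in current lane (m)
    left_front_gap: float  # distance to lead vehicle in left lane (m)
    left_rear_gap: float   # distance to rear vehicle in left lane (m)

def slow_leader(s: VehicleState, gap: float = 30.0) -> bool:
    """Body literal: the current lane is blocked."""
    return s.front_gap < gap

def left_lane_free(s: VehicleState, safe_gap: float = 20.0) -> bool:
    """Body literal: the target lane offers safe gaps."""
    return s.left_front_gap > safe_gap and s.left_rear_gap > safe_gap

def change_left(s: VehicleState) -> bool:
    """Head of a clause ILP might induce from demonstrations:
       change_left(S) :- slow_leader(S), left_lane_free(S)."""
    return slow_leader(s) and left_lane_free(s)

state = VehicleState(front_gap=18.0, left_front_gap=60.0, left_rear_gap=35.0)
print(change_left(state))  # True: blocked ahead, left lane clear
```

The appeal of the symbolic form is that each decision decomposes into named, human-auditable conditions, which a weight matrix cannot offer.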
Related papers
- Distilling Reinforcement Learning Policies for Interpretable Robot Locomotion: Gradient Boosting Machines and Symbolic Regression [53.33734159983431]
This paper introduces a novel approach to distill neural RL policies into more interpretable forms.
We train expert neural network policies using RL and distill them into (i) GBMs, (ii) EBMs, and (iii) symbolic policies.
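A minimal sketch of the distillation step follows, assuming a trained expert policy is available to label sampled states; the expert below is a stand-in function, not a policy from the paper.

```python
# Minimal sketch of distilling a neural policy into a GBM. The expert
# below is a stand-in function; in practice it would be the trained
# RL policy queried on sampled or logged states.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def expert_policy(states: np.ndarray) -> np.ndarray:
    """Stand-in for a trained neural RL policy with a 1-D action."""
    return np.tanh(states @ np.array([0.5, -0.3, 0.1, 0.2]))

states = np.random.uniform(-1.0, 1.0, size=(5000, 4))  # sampled states
actions = expert_policy(states)                        # expert labels

# Distillation is plain supervised regression from states to actions;
# the fitted GBM is a more interpretable surrogate of the black box.
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3)
gbm.fit(states, actions)
print("distillation R^2:", gbm.score(states, actions))
```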
arXiv Detail & Related papers (2024-03-21T11:54:45Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
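The verifier-shield pattern can be sketched as follows; the LLM call is stubbed out, and the time-gap rule is a hypothetical safety check, not the paper's verifier.

```python
# Minimal sketch of the safety-verifier-shield pattern. The LLM call is
# stubbed and the time-gap rule is a hypothetical check, not the
# paper's verifier.
def llm_propose_maneuver(scene: str) -> str:
    """Stand-in for an LLM behavioral planner."""
    return "change_lane_left"

def verifier(maneuver: str, target_gap_s: float) -> bool:
    """Reject lane changes when the target-lane time gap is too small."""
    if maneuver in ("change_lane_left", "change_lane_right"):
        return target_gap_s > 2.0
    return True  # keep_lane / slow_down treated as always safe here

def shielded_plan(scene: str, target_gap_s: float) -> str:
    proposal = llm_propose_maneuver(scene)
    return proposal if verifier(proposal, target_gap_s) else "keep_lane"

print(shielded_plan("dense traffic, slow leader", target_gap_s=1.2))
# -> "keep_lane": the unsafe LLM proposal is overridden by the shield
```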
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review the line of research on Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z)
- Towards Safe Autonomous Driving Policies using a Neuro-Symbolic Deep Reinforcement Learning Approach [6.961253535504979]
This paper introduces a novel neuro-symbolic model-free DRL approach called DRL with Symbolic Logics (DRLSL).
It combines the strengths of DRL (learning from experience) and symbolic first-order logics (knowledge-driven reasoning) to enable safe learning in real-time interactions of autonomous driving within real environments.
We have implemented the DRLSL framework in autonomous driving using the highD dataset and demonstrated that our method successfully avoids unsafe actions during both the training and testing phases.
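The core neuro-symbolic mechanism can be sketched as symbolic rules pruning the action set before the RL agent chooses greedily; the rules and Q-values below are illustrative stand-ins, not the DRLSL implementation.

```python
# Minimal sketch: symbolic safety rules prune the action set before the
# (deep) RL agent picks greedily. Rules and Q-values are illustrative.
import numpy as np

ACTIONS = ["keep_lane", "change_left", "change_right", "brake"]

def symbolically_safe(action: str, obs: dict) -> bool:
    """First-order-logic-style safety constraints, written as plain checks."""
    if action == "change_left" and not obs["left_lane_free"]:
        return False
    if action == "change_right" and not obs["right_lane_free"]:
        return False
    return True

def safe_greedy_action(q_values: np.ndarray, obs: dict) -> str:
    safe = [i for i, a in enumerate(ACTIONS) if symbolically_safe(a, obs)]
    return ACTIONS[safe[int(np.argmax(q_values[safe]))]]

obs = {"left_lane_free": False, "right_lane_free": True}
q = np.array([0.2, 0.9, 0.1, 0.0])  # the net prefers an unsafe change_left
print(safe_greedy_action(q, obs))   # -> "keep_lane"
```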
arXiv Detail & Related papers (2023-07-03T19:43:21Z)
- Learning from Demonstrations of Critical Driving Behaviours Using Driver's Risk Field [4.272601420525791]
Imitation learning (IL) has been widely used in industry as the core of autonomous vehicle (AV) planning modules.
Previous IL works suffer from sample inefficiency and poor generalisation in safety-critical scenarios, on which they are rarely tested.
We present an IL model using the spline coefficient parameterisation and offline expert queries to enhance safety and training efficiency.
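A minimal sketch of spline-style coefficient parameterisation: rather than regressing raw waypoints, the model outputs a few coefficients that define a smooth trajectory. The cubic form and horizon below are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of coefficient parameterisation: the IL model's head
# regresses 4 numbers; smooth waypoints follow by evaluation. The cubic
# form and horizon are illustrative choices.
import numpy as np

def trajectory_from_coeffs(coeffs: np.ndarray, horizon_s: float = 3.0,
                           n_points: int = 30) -> np.ndarray:
    """Evaluate lateral offset y(t) = c0 + c1*t + c2*t^2 + c3*t^3."""
    t = np.linspace(0.0, horizon_s, n_points)
    return np.polyval(coeffs[::-1], t)  # polyval wants highest power first

coeffs = np.array([0.0, 0.1, 0.4, -0.1])  # hypothetical model output
waypoints = trajectory_from_coeffs(coeffs)
print(waypoints[:5])
```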
arXiv Detail & Related papers (2022-10-04T17:07:35Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module represented by a tiny neural network.
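As a rough illustration of such a planner head, the toy network below maps an observation vector to an (acceleration, steering) pair; the sizes and weights are arbitrary stand-ins, not the paper's architecture.

```python
# Minimal sketch of a planner head mapping observations to
# (acceleration, steering angle); sizes and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)) * 0.1, np.zeros(8)  # obs dim 16 -> 8
W2, b2 = rng.normal(size=(8, 2)) * 0.1, np.zeros(2)   # 8 -> (accel, steer)

def planner(obs: np.ndarray) -> tuple[float, float]:
    h = np.tanh(obs @ W1 + b1)
    accel, steer = np.tanh(h @ W2 + b2)  # both squashed to [-1, 1]
    return float(accel), float(steer)

print(planner(rng.normal(size=16)))  # normalised control outputs
```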
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Approximating a deep reinforcement learning docking agent using linear model trees [0.0]
A linear model tree (LMT) approximates a DNN policy for an autonomous surface vehicle with five control inputs performing a docking operation.
LMTs are transparent, which makes it possible to directly associate the outputs (control actions) with specific values of the input features.
In our simulations, the opaque DNN policy controls the vehicle and the LMT runs in parallel to provide explanations in the form of feature attributions.
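The parallel-surrogate pattern can be sketched with an ordinary regression tree standing in for the LMT (scikit-learn has no linear model tree); the docking policy below is a stub, not the paper's controller.

```python
# Minimal sketch of the parallel-surrogate pattern: a transparent tree
# fitted to the opaque policy's actions explains each decision. A plain
# regression tree stands in for the LMT; the policy is a stub.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

def dnn_policy(states: np.ndarray) -> np.ndarray:
    """Stand-in for the opaque docking controller (one control input)."""
    return np.tanh(2.0 * states[:, 0] - states[:, 1])

states = np.random.uniform(-1, 1, size=(2000, 2))
tree = DecisionTreeRegressor(max_depth=3).fit(states, dnn_policy(states))

# The printed rules associate control actions with input-feature ranges.
print(export_text(tree, feature_names=["dist_to_dock", "heading_err"]))
```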
arXiv Detail & Related papers (2022-03-01T11:32:07Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- UMBRELLA: Uncertainty-Aware Model-Based Offline Reinforcement Learning Leveraging Planning [1.1339580074756188]
Offline reinforcement learning (RL) provides a framework for learning decision-making from offline data.
Self-driving vehicles (SDVs) learn a policy that can potentially even outperform the behavior in the sub-optimal dataset.
This motivates the use of model-based offline RL approaches, which leverage planning.
arXiv Detail & Related papers (2021-11-22T10:37:52Z)
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.