Brain-Inspired Deep Imitation Learning for Autonomous Driving Systems
- URL: http://arxiv.org/abs/2107.14654v1
- Date: Fri, 30 Jul 2021 14:21:46 GMT
- Title: Brain-Inspired Deep Imitation Learning for Autonomous Driving Systems
- Authors: Hasan Bayarov Ahmedov, Dewei Yi, Jie Sui
- Abstract summary: Humans have a strong generalisation ability that benefits from the structural and functional asymmetry of the two hemispheres of the brain.
Here, we design dual Neural Circuit Policy (NCP) architectures in deep neural networks based on the asymmetry of human neural networks.
Experimental results demonstrate that our brain-inspired method outperforms existing methods regarding generalisation when dealing with unseen data.
- Score: 0.38673630752805443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving has attracted great attention from both academics and
industries. To realise autonomous driving, Deep Imitation Learning (DIL) is
treated as one of the most promising solutions, because it automatically learns
a complex driving policy from human driving data, rather than requiring the
policy to be designed manually. However, existing DIL
methods cannot generalise well across domains, that is, a network trained on
the data of source domain gives rise to poor generalisation on the data of
target domain. In the present study, we propose a novel brain-inspired deep
imitation method that builds on the evidence from human brain functions, to
improve the generalisation ability of deep neural networks so that autonomous
driving systems can perform well in various scenarios. Specifically, humans
have a strong generalisation ability that benefits from the structural and
functional asymmetry of the two hemispheres of the brain. Here, we design dual
Neural Circuit Policy (NCP) architectures in deep neural networks based on the
asymmetry of human neural networks. Experimental results demonstrate that our
brain-inspired method outperforms existing methods regarding generalisation
when dealing with unseen data. Our source codes and pretrained models are
available at
https://github.com/Intenzo21/Brain-Inspired-Deep-Imitation-Learning-for-Autonomous-Driving-Systems.
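The core idea above is a policy built from two deliberately asymmetric neural branches, echoing the hemispheric asymmetry of the brain. The following is a minimal NumPy sketch of that dual-branch pattern only; the branch sizes, the simple Elman-style recurrent cells, the concatenation fusion, and all names here are illustrative assumptions, not the authors' NCP implementation (see their repository for the real code).

```python
import numpy as np

def init_branch(rng, input_dim, hidden_dim):
    """Initialise one recurrent branch (a simple Elman-style RNN cell)."""
    return {
        "W_in": rng.standard_normal((hidden_dim, input_dim)) * 0.1,
        "W_rec": rng.standard_normal((hidden_dim, hidden_dim)) * 0.1,
        "b": np.zeros(hidden_dim),
    }

def run_branch(params, inputs):
    """Unroll the branch over a sequence and return its final hidden state."""
    h = np.zeros(params["b"].shape[0])
    for x in inputs:
        h = np.tanh(params["W_in"] @ x + params["W_rec"] @ h + params["b"])
    return h

def dual_branch_policy(rng, inputs, left_dim=19, right_dim=32):
    """Fuse two deliberately asymmetric branches into one steering output."""
    left = init_branch(rng, inputs.shape[1], left_dim)
    right = init_branch(rng, inputs.shape[1], right_dim)
    fused = np.concatenate([run_branch(left, inputs), run_branch(right, inputs)])
    w_out = rng.standard_normal(fused.shape[0]) * 0.1
    return float(np.tanh(w_out @ fused))  # steering command in [-1, 1]

rng = np.random.default_rng(0)
frames = rng.standard_normal((10, 8))  # a toy sequence of 10 feature vectors
steer = dual_branch_policy(rng, frames)
```

The asymmetry here is only in the hidden sizes of the two branches; the paper's architecture realises it through structurally different NCP wirings.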
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
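As background for the claim that Hebbian and anti-Hebbian learning can extract a principal subspace, here is a hedged sketch of the classic mechanism in its simplest form: Oja's rule, where a Hebbian update plus a stabilising decay term drives a weight vector toward the principal direction of the data. This illustrates the principle only, not the paper's spiking implementation or its orthogonal-projection method.

```python
import numpy as np

def oja_principal_direction(data, lr=0.01, epochs=200, seed=0):
    """Estimate the principal direction of `data` with Oja's rule:
    w <- w + lr * y * (x - y * w), where y = w @ x.
    The -y^2 * w decay plays the stabilising, anti-Hebbian-like role."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(data.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in data:
            y = w @ x
            w += lr * y * (x - y * w)
    return w / np.linalg.norm(w)

# Toy data whose principal axis is the first coordinate (variance 9 vs 1 vs 0.25).
rng = np.random.default_rng(1)
data = rng.standard_normal((500, 3)) * np.array([3.0, 1.0, 0.5])
w = oja_principal_direction(data)
```

Extracting several such directions with mutually decorrelating (anti-Hebbian) lateral connections yields the principal subspace the paper builds on.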
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - A Language Agent for Autonomous Driving [31.359413767191608]
We propose a paradigm shift to integrate human-like intelligence into autonomous driving systems.
Our approach, termed Agent-Driver, transforms the traditional autonomous driving pipeline by introducing a versatile tool library.
Powered by Large Language Models (LLMs), our Agent-Driver is endowed with intuitive common sense and robust reasoning capabilities.
arXiv Detail & Related papers (2023-11-17T18:59:56Z) - LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review the research line of Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z) - Decentralized Motor Skill Learning for Complex Robotic Systems [5.669790037378093]
We propose a Decentralized motor skill (DEMOS) learning algorithm to automatically discover motor groups that can be decoupled from each other.
Our method improves the robustness and generalization of the policy without sacrificing performance.
Experiments on quadruped and humanoid robots demonstrate that the learned policy is robust against local motor malfunctions and can be transferred to new tasks.
arXiv Detail & Related papers (2023-06-30T05:55:34Z) - KARNet: Kalman Filter Augmented Recurrent Neural Network for Learning
World Models in Autonomous Driving Tasks [11.489187712465325]
We present a Kalman filter augmented recurrent neural network architecture to learn the latent representation of the traffic flow using front camera images only.
Results show that incorporating an explicit model of the vehicle (states estimated using Kalman filtering) in the end-to-end learning significantly increases performance.
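KARNet couples a Kalman filter with a recurrent network; as background on the filtering half, here is a minimal scalar Kalman filter tracking a constant value from noisy measurements. This is a textbook sketch under simple assumptions (static state, fixed noise variances), not the paper's vehicle model or architecture.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5):
    """Minimal scalar Kalman filter: estimate a (nearly) constant value.
    q = process-noise variance, r = measurement-noise variance."""
    x, p = 0.0, 1.0           # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                # predict: state unchanged, uncertainty grows
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update with the measurement residual
        p *= (1.0 - k)        # posterior variance shrinks
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_value = 4.0
noisy = true_value + rng.standard_normal(200) * np.sqrt(0.5)
est = kalman_1d(noisy)
```

In KARNet the analogous filter runs over estimated vehicle states, and its output augments the latent representation learned from camera images.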
arXiv Detail & Related papers (2023-05-24T02:27:34Z) - Reinforcement Learning in an Adaptable Chess Environment for Detecting
Human-understandable Concepts [0.0]
We show a method for probing which concepts self-learning agents internalise in the course of their training.
For demonstration, we use a chess playing agent in a fast and light environment developed specifically to be suitable for research groups.
arXiv Detail & Related papers (2022-11-10T11:48:10Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement
Learning [13.699336307578488]
Deep imitative reinforcement learning approach (DIRL) achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z) - Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
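NDPs move prediction from raw action spaces into trajectory space by embedding a dynamical system in the policy's output. As a hedged illustration of why that helps, here is the unforced transform system of a dynamic movement primitive: a critically damped point attractor whose rollout is always a smooth trajectory to the goal. This sketches the underlying dynamical-system idea, not the authors' network.

```python
import numpy as np

def rollout_attractor(start, goal, alpha=25.0, beta=6.25, dt=0.01, steps=500):
    """Roll out a critically damped second-order system toward `goal`:
    v' = alpha * (beta * (goal - x) - v).  With beta = alpha / 4 the
    system is critically damped, so it converges without oscillation."""
    x, v = float(start), 0.0
    traj = []
    for _ in range(steps):
        a = alpha * (beta * (goal - x) - v)
        v += a * dt           # semi-implicit Euler integration
        x += v * dt
        traj.append(x)
    return np.array(traj)

traj = rollout_attractor(start=0.0, goal=1.0)
```

Because the policy predicts parameters of such a system rather than raw actions, every output is a physically smooth trajectory by construction.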
arXiv Detail & Related papers (2020-12-04T18:59:32Z) - LaND: Learning to Navigate from Disengagements [158.6392333480079]
We present a reinforcement learning approach for learning to navigate from disengagements, or LaND.
LaND learns a neural network model that predicts which actions lead to disengagements given the current sensory observation, and then at test time plans and executes actions that avoid disengagements.
Our results demonstrate LaND can successfully learn to navigate in diverse, real world sidewalk environments, outperforming both imitation learning and reinforcement learning approaches.
arXiv Detail & Related papers (2020-10-09T17:21:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.