Imitation Learning for Autonomous Driving: Insights from Real-World Testing
- URL: http://arxiv.org/abs/2504.18847v1
- Date: Sat, 26 Apr 2025 08:21:12 GMT
- Title: Imitation Learning for Autonomous Driving: Insights from Real-World Testing
- Authors: Hidayet Ersin Dursun, Yusuf Güven, Tufan Kumbasar
- Abstract summary: This work focuses on the design of a deep learning-based autonomous driving system deployed and tested on the real-world MIT Racecar. The Deep Neural Network (DNN) translates raw image inputs into real-time steering commands in an end-to-end learning fashion.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work focuses on the design of a deep learning-based autonomous driving system deployed and tested on the real-world MIT Racecar to assess its effectiveness in driving scenarios. The Deep Neural Network (DNN) translates raw image inputs into real-time steering commands in an end-to-end learning fashion, following the imitation learning framework. The key design challenge is to ensure that DNN predictions are accurate and fast enough at a high sampling frequency, and result in smooth vehicle operation under different operating conditions. In this study, we design and compare various DNNs to identify the most effective approach for real-time autonomous driving. In designing the DNNs, we adopted an incremental design approach that involved enhancing the model capacity and dataset to address the challenges of real-world driving scenarios. We designed a PD system, a CNN, a CNN-LSTM, and a CNN-NODE, and evaluated their performance on the real-world MIT Racecar. While the PD system handled basic lane following, it struggled with sharp turns and lighting variations. The CNN improved steering but lacked temporal awareness, which the CNN-LSTM addressed, resulting in smooth driving performance. The CNN-NODE handled driving dynamics similarly to the CNN-LSTM, yet with slightly better driving performance. The findings of this research highlight the importance of iterative design processes in developing robust DNNs for autonomous driving applications. The experimental video is available at https://www.youtube.com/watch?v=FNNYgU--iaY.
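The PD baseline the abstract compares against can be sketched in a few lines. The following is a minimal, hypothetical lane-following PD controller, not the paper's actual implementation: the lateral offset of the detected lane center from the image center serves as the error, and the steering command is a weighted sum of the error and its rate of change. The gains `kp`, `kd` and the pixel-offset error signal are illustrative assumptions.

```python
class PDSteering:
    """Minimal PD lane-following controller (illustrative sketch only;
    gains and units are assumptions, not the paper's tuning).

    error: lateral offset of the lane center from the image center, in pixels.
    A positive error means the lane center is to the right of the camera;
    the returned command is negative to steer back toward it.
    """

    def __init__(self, kp=0.01, kd=0.05, dt=0.05):
        self.kp = kp          # proportional gain
        self.kd = kd          # derivative gain
        self.dt = dt          # control period in seconds (20 Hz here)
        self.prev_error = 0.0

    def step(self, error):
        # Finite-difference estimate of the error's rate of change.
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Steer against the offset: P term corrects, D term damps.
        return -(self.kp * error + self.kd * derivative)
```

A controller like this reacts only to the instantaneous lane geometry, which is consistent with the abstract's observation that the PD baseline struggles with sharp turns and lighting variations: any failure of the lane-center detection feeds directly into the command.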
Related papers
- Enhancing End-to-End Autonomous Driving with Latent World Model [78.22157677787239]
We propose a novel self-supervised learning approach using the LAtent World model (LAW) for end-to-end driving. LAW predicts future scene features based on current features and ego trajectories. This self-supervised task can be seamlessly integrated into perception-free and perception-based frameworks.
arXiv Detail & Related papers (2024-06-12T17:59:21Z)
- Autonomous Driving using Spiking Neural Networks on Dynamic Vision Sensor Data: A Case Study of Traffic Light Change Detection [0.0]
Spiking neural networks (SNNs) provide an alternative computational model to process information and make decisions. Recent work using SNNs for autonomous driving mostly focused on simple tasks like lane keeping in simplified simulation environments. This paper studies SNNs on photo-realistic driving scenes in the CARLA simulator, which is an important step toward using SNNs on real vehicles.
arXiv Detail & Related papers (2023-09-27T23:31:30Z)
- KARNet: Kalman Filter Augmented Recurrent Neural Network for Learning World Models in Autonomous Driving Tasks [11.489187712465325]
We present a Kalman filter augmented recurrent neural network architecture to learn the latent representation of the traffic flow using front camera images only.
Results show that incorporating an explicit model of the vehicle (states estimated using Kalman filtering) in the end-to-end learning significantly increases performance.
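The idea of folding a Kalman-filtered state estimate into a learning pipeline can be illustrated with the simplest possible case. Below is a scalar Kalman filter over a random-walk state; the noise variances `q` and `r` and the one-dimensional state are illustrative assumptions and do not reflect KARNet's actual architecture.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter over a random-walk state (illustrative only).

    q: process-noise variance (how much the true state drifts per step).
    r: measurement-noise variance (how noisy each observation is).
    Returns the list of filtered state estimates, one per measurement.
    """
    x = measurements[0]  # initial state estimate
    p = 1.0              # initial estimate covariance
    estimates = []
    for z in measurements:
        # Predict: random-walk model leaves x unchanged, uncertainty grows.
        p += q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

In a KARNet-style setting, the filtered vehicle states (rather than raw noisy readings) would be the signal concatenated with the learned image features, which is the sense in which an explicit vehicle model augments the end-to-end learner.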
arXiv Detail & Related papers (2023-05-24T02:27:34Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Multi-task UNet architecture for end-to-end autonomous driving [0.0]
We propose an end-to-end driving model that integrates a multi-task UNet (MTUNet) architecture and control algorithms in a pipeline of data flow from a front camera through this model to driving decisions.
It provides quantitative measures to evaluate the holistic, dynamic, and real-time performance of end-to-end driving systems and thus the safety and interpretability of MTUNet.
arXiv Detail & Related papers (2021-12-16T15:35:15Z) - Towards Optimal Strategies for Training Self-Driving Perception Models
in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z) - Bayesian Optimization and Deep Learning forsteering wheel angle
prediction [58.720142291102135]
This work aims to obtain an accurate model for the prediction of the steering angle in an automated driving system.
BO was able to identify, within a limited number of trials, a model -- namely BOST-LSTM -- which proved the most accurate when compared to classical end-to-end driving models.
arXiv Detail & Related papers (2021-10-22T15:25:14Z) - Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement
Learning [13.699336307578488]
The deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z) - Driving Style Representation in Convolutional Recurrent Neural Network
Model of Driver Identification [8.007800530105191]
We present a deep-neural-network architecture, which we term D-CRNN, for building high-fidelity representations for driving style.
Using CNN, we capture semantic patterns of driver behavior from trajectories.
We then find temporal dependencies between these semantic patterns using RNN to encode driving style.
arXiv Detail & Related papers (2021-02-11T04:33:43Z) - End-to-End Deep Learning of Lane Detection and Path Prediction for
Real-Time Autonomous Driving [0.0]
We propose an end-to-end three-task convolutional neural network (3TCNN) for lane detection and road recognition.
Based on 3TCNN, we then propose lateral offset and path prediction (PP) algorithms to form an integrated model (3TCNN-PP).
We also develop a CNN-PP simulator that can be used to train a CNN by real or artificial traffic images, test it by artificial images, quantify its dynamic errors, and visualize its qualitative performance.
arXiv Detail & Related papers (2021-02-09T10:04:39Z) - Temporal Pulses Driven Spiking Neural Network for Fast Object
Recognition in Autonomous Driving [65.36115045035903]
We propose an approach to address the object recognition problem directly with raw temporal pulses utilizing the spiking neural network (SNN).
Being evaluated on various datasets, our proposed method has shown comparable performance as the state-of-the-art methods, while achieving remarkable time efficiency.
arXiv Detail & Related papers (2020-01-24T22:58:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.