Application of Neuroevolution in Autonomous Cars
- URL: http://arxiv.org/abs/2006.15175v1
- Date: Fri, 26 Jun 2020 19:06:32 GMT
- Title: Application of Neuroevolution in Autonomous Cars
- Authors: Sainath G, Vignesh S, Siddarth S, G Suganya
- Abstract summary: We propose a system that requires no data for its training: an evolutionary model can optimize itself toward a fitness function.
We implement neuroevolution, a form of genetic algorithm, to train/evolve self-driving cars in a simulated virtual environment built with Unreal Engine 4.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With electric vehicles becoming increasingly popular, autonomous cars
are the future of the travel and driving experience. The barrier to reaching
level-5 autonomy is the difficulty of collecting data that incorporates good
driving habits, and the lack thereof. The problem with current implementations
of self-driving cars is the need for massively large datasets and the need to
evaluate the driving contained in those datasets. We propose a system that
requires no data for its training: an evolutionary model can optimize itself
toward a fitness function. We implemented neuroevolution, a form of genetic
algorithm, to train/evolve self-driving cars in a simulated virtual environment
built with Unreal Engine 4, which uses Nvidia's PhysX physics engine to portray
real-world vehicle dynamics accurately. We observed the serendipitous nature of
evolution and exploited it to reach our optimal solution. We also demonstrate
the ease of generalizing attributes brought about by genetic algorithms, and how
they may serve as a boilerplate upon which other machine learning techniques can
be applied to improve the overall driving experience.
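The abstract's core idea (evolving controllers against a fitness function with no training data) can be sketched as a minimal neuroevolution loop. This is a hypothetical stand-in, not the authors' implementation: the genome, population sizes, and the fitness function (which here rewards proximity to a hidden optimum, mimicking "distance driven before crashing" in the simulator) are all illustrative assumptions.

```python
import random

random.seed(0)

N_WEIGHTS = 8       # weights of a toy steering controller (assumption)
POP_SIZE = 30
GENERATIONS = 40
MUTATION_STD = 0.3

def fitness(genome):
    # Stand-in objective: negative squared distance to a hidden optimum,
    # playing the role of "how far the car drove" in the real setup.
    target = [0.5] * N_WEIGHTS
    return -sum((w - t) ** 2 for w, t in zip(genome, target))

def mutate(genome):
    # Gaussian perturbation of every weight.
    return [w + random.gauss(0, MUTATION_STD) for w in genome]

def evolve():
    population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        elite = population[: POP_SIZE // 5]   # keep the best 20% unchanged
        # Refill the population with mutated copies of the elite.
        population = elite + [mutate(random.choice(elite))
                              for _ in range(POP_SIZE - len(elite))]
    return max(population, key=fitness)

best = evolve()
print(round(fitness(best), 3))
```

Because the elite survive unmutated each generation, the best fitness never decreases; in the paper's setting the same loop would evaluate each genome by letting its network drive the simulated car.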
Related papers
- Are you a robot? Detecting Autonomous Vehicles from Behavior Analysis [6.422370188350147]
We present a framework that monitors active vehicles using camera images and state information in order to determine whether vehicles are autonomous.
Essentially, it builds on cooperation among vehicles, which share data acquired on the road to feed a machine learning model that identifies autonomous cars.
Experiments show it is possible to discriminate the two behaviors by analyzing video clips with an accuracy of 80%, which improves up to 93% when the target state information is available.
arXiv Detail & Related papers (2024-03-14T17:00:29Z)
- Scaling Is All You Need: Autonomous Driving with JAX-Accelerated Reinforcement Learning [9.25541290397848]
Reinforcement learning has been demonstrated to outperform even the best humans in complex domains like video games.
We conduct large-scale reinforcement learning experiments for autonomous driving.
Our best performing policy reduces the failure rate by 64% while improving the rate of driving progress by 25% compared to the policies produced by state-of-the-art machine learning for autonomous driving.
arXiv Detail & Related papers (2023-12-23T00:07:06Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In vehicular mixed reality (MR) Metaverse, distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Policy Pre-training for End-to-end Autonomous Driving via Self-supervised Geometric Modeling [96.31941517446859]
We propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive, fully self-supervised framework designed for policy pretraining in visuomotor driving.
We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos.
In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input.
In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only.
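The summary above does not spell out the photometric error used in the second stage; a standard formulation from the self-supervised depth and ego-motion literature (an assumption, not necessarily PPGeo's exact loss) combines an SSIM term with an L1 term:

```latex
\mathcal{L}_{\mathrm{photo}} =
  \frac{\alpha}{2}\,\bigl(1 - \mathrm{SSIM}(I_t, \hat{I}_t)\bigr)
  + (1 - \alpha)\,\lVert I_t - \hat{I}_t \rVert_1
```

Here $\hat{I}_t$ is frame $I_t$ reconstructed by warping the adjacent frame with the predicted depth and ego-motion, and $\alpha \approx 0.85$ is a common weighting; minimizing this loss requires no labels, only consecutive video frames.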
arXiv Detail & Related papers (2023-01-03T08:52:49Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Data generation using simulation technology to improve perception mechanism of autonomous vehicles [0.0]
We will demonstrate the effectiveness of combining data gathered from the real world with data generated in the simulated world to train perception systems.
We will also propose a multi-level deep learning perception framework that aims to emulate a human learning experience.
arXiv Detail & Related papers (2022-07-01T03:42:33Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- An Adaptive Human Driver Model for Realistic Race Car Simulations [25.67586167621258]
We provide a better understanding of race driver behavior and introduce an adaptive human race driver model based on imitation learning.
We show that our framework can create realistic driving line distributions on unseen race tracks with almost human-like performance.
arXiv Detail & Related papers (2022-03-03T18:39:50Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
Deep imitative reinforcement learning approach (DIRL) achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and in a more human-like way.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.