Cloud2Edge Elastic AI Framework for Prototyping and Deployment of AI
Inference Engines in Autonomous Vehicles
- URL: http://arxiv.org/abs/2009.11722v1
- Date: Wed, 23 Sep 2020 09:23:29 GMT
- Title: Cloud2Edge Elastic AI Framework for Prototyping and Deployment of AI
Inference Engines in Autonomous Vehicles
- Authors: Sorin Grigorescu, Tiberiu Cocias, Bogdan Trasnea, Andrea Margheri,
Federico Lombardi, Leonardo Aniello
- Abstract summary: This paper proposes a novel framework for developing AI Inference Engines for autonomous driving applications based on deep learning modules.
We introduce a simple yet elegant solution for the AI components development cycle, where prototyping takes place in the cloud according to the Software-in-the-Loop (SiL) paradigm.
The effectiveness of the proposed framework is demonstrated using two real-world use-cases of AI inference engines for autonomous vehicles.
- Score: 1.688204090869186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-driving cars and autonomous vehicles are revolutionizing the automotive
sector, shaping the future of mobility altogether. Although the integration of
novel technologies such as Artificial Intelligence (AI) and Cloud/Edge
computing provides golden opportunities to improve autonomous driving
applications, the whole prototyping and deployment cycle of AI components
needs to be modernized accordingly. This paper proposes a novel framework
for developing so-called AI Inference Engines for autonomous driving
applications based on deep learning modules, where training tasks are deployed
elastically over both Cloud and Edge resources, with the purpose of reducing
the required network bandwidth, as well as mitigating privacy issues. Based on
our proposed data-driven V-Model, we introduce a simple yet elegant solution
for the AI components development cycle, where prototyping takes place in the
cloud according to the Software-in-the-Loop (SiL) paradigm, while deployment
and evaluation on the target ECUs (Electronic Control Units) are performed as
Hardware-in-the-Loop (HiL) testing. The effectiveness of the proposed framework
is demonstrated using two real-world use-cases of AI inference engines for
autonomous vehicles, namely environment perception and most probable path
prediction.
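To make the elastic Cloud/Edge split concrete, the sketch below shows one way a scheduler could decide where a training task runs: privacy-sensitive data stays on the edge ECU, and large datasets are kept local whenever uploading them would exceed a bandwidth budget. This is a minimal illustration under assumed names and thresholds (`TrainingTask`, `place_task`, the two-hour upload budget), not the scheduler actually used in the paper's framework.

```python
# Minimal sketch (not the authors' implementation): a placement policy that
# assigns each training task to Cloud or Edge resources. Tasks whose data is
# privacy-sensitive, or whose raw data would be too expensive to upload, are
# kept on the edge ECU; the rest are trained in the cloud, where SiL
# prototyping takes place. All names and thresholds are illustrative.
from dataclasses import dataclass
from enum import Enum


class Target(Enum):
    CLOUD = "cloud"
    EDGE = "edge"


@dataclass
class TrainingTask:
    name: str
    dataset_size_gb: float      # size of the raw data the task consumes
    privacy_sensitive: bool     # e.g. camera frames with identifiable people
    uplink_mbps: float          # currently available uplink bandwidth


def place_task(task: TrainingTask, max_upload_hours: float = 2.0) -> Target:
    """Keep a task on the edge if uploading its data is too slow or not allowed."""
    if task.privacy_sensitive:
        return Target.EDGE
    # Estimated upload time in hours: GB -> megabits, divided by Mbps * seconds/hour.
    upload_hours = (task.dataset_size_gb * 8_000) / (task.uplink_mbps * 3_600)
    return Target.EDGE if upload_hours > max_upload_hours else Target.CLOUD


if __name__ == "__main__":
    perception = TrainingTask("environment_perception", 120.0, True, 50.0)
    path_pred = TrainingTask("most_probable_path", 4.0, False, 50.0)
    for t in (perception, path_pred):
        print(f"{t.name}: train on {place_task(t).value}")
```

Under this hypothetical policy, the privacy-sensitive, bandwidth-heavy perception task would train at the edge, while the lightweight path-prediction task would be shipped to the cloud, mirroring the bandwidth and privacy goals stated in the abstract.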
Related papers
- Self-Driving Car Racing: Application of Deep Reinforcement Learning [0.0]
The project aims to develop an AI agent that efficiently drives a simulated car in the OpenAI Gymnasium CarRacing environment.
We investigate various RL algorithms, including Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and novel adaptations that incorporate transfer learning and recurrent neural networks (RNNs) for enhanced performance.
arXiv Detail & Related papers (2024-10-30T07:32:25Z)
- Generative Diffusion-based Contract Design for Efficient AI Twins Migration in Vehicular Embodied AI Networks [55.15079732226397]
Embodied AI is a rapidly advancing field that bridges the gap between cyberspace and physical space.
In VEANET, embodied AI twins act as in-vehicle AI assistants to perform diverse tasks supporting autonomous driving.
arXiv Detail & Related papers (2024-10-02T02:20:42Z)
- The AI-Native Software Development Lifecycle: A Theoretical and Practical New Methodology [0.0]
This white paper proposes the emergence of a fully AI-native SDLC.
We introduce the V-Bounce model, an adaptation of the traditional V-model that incorporates AI from end to end.
This model redefines the role of humans from primary implementers to primarily validators and verifiers, with AI acting as an implementation engine.
arXiv Detail & Related papers (2024-08-06T19:30:49Z)
- Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms [0.0]
The study presents statistical insights into the usage and types of AI/learning algorithms over the years.
The paper highlights the pivotal role of parameters in refining algorithms for both trucks and cars.
It concludes by outlining different levels of autonomy, elucidating the nuanced usage of AI and learning algorithms.
arXiv Detail & Related papers (2024-02-27T17:07:18Z)
- Forging Vision Foundation Models for Autonomous Driving: Challenges, Methodologies, and Opportunities [59.02391344178202]
Vision foundation models (VFMs) serve as potent building blocks for a wide range of AI applications.
The scarcity of comprehensive training data, the need for multi-sensor integration, and the diverse task-specific architectures pose significant obstacles to the development of VFMs.
This paper delves into the critical challenge of forging VFMs tailored specifically for autonomous driving, while also outlining future directions.
arXiv Detail & Related papers (2024-01-16T01:57:24Z)
- LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review a research line on Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- CARNet: A Dynamic Autoencoder for Learning Latent Dynamics in Autonomous Driving Tasks [11.489187712465325]
An autonomous driving system should effectively use the information collected from the various sensors in order to form an abstract description of the world.
Deep learning models, such as autoencoders, can be used for that purpose, as they can learn compact latent representations from a stream of incoming data.
This work proposes CARNet, a Combined dynAmic autoencodeR NETwork architecture that utilizes an autoencoder combined with a recurrent neural network to learn the current latent representation.
arXiv Detail & Related papers (2022-05-18T04:15:42Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)