Neural Network Based Model Predictive Control for an Autonomous Vehicle
- URL: http://arxiv.org/abs/2107.14573v1
- Date: Fri, 30 Jul 2021 12:11:31 GMT
- Title: Neural Network Based Model Predictive Control for an Autonomous Vehicle
- Authors: Maria Luiza Costa Vianna, Eric Goubault, Sylvie Putot
- Abstract summary: We study learning-based controllers as a replacement for model predictive controllers (MPC) for the control of autonomous vehicles.
We compare training by supervised learning and by reinforcement learning.
This work aims at producing controllers that can be both embedded on real-time platforms and amenable to verification by formal-methods techniques.
- Score: 3.222802562733787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study learning-based controllers as a replacement for model predictive
controllers (MPC) for the control of autonomous vehicles. For the experiments,
we concentrate on the simple yet representative bicycle model. We compare
training by supervised learning and by reinforcement learning. We also discuss
the neural net architectures, so as to obtain small nets with the best
performance. This work aims at producing controllers that can be both embedded
on real-time platforms and amenable to verification by formal-methods
techniques.
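The bicycle model mentioned in the abstract is a standard simplification of car dynamics. A minimal sketch of its kinematic form, with illustrative parameter values (wheelbase, time step) not taken from the paper, might look like:

```python
import math

def bicycle_step(x, y, theta, v, a, delta, L=2.5, dt=0.05):
    """One Euler step of the kinematic bicycle model.

    x, y  : position [m]
    theta : heading [rad]
    v     : longitudinal speed [m/s]
    a     : acceleration command [m/s^2]
    delta : steering angle command [rad]
    L     : wheelbase [m] (illustrative value)
    dt    : integration step [s]
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(delta) * dt
    v += a * dt
    return x, y, theta, v
```

In an MPC or learned-controller setting, such a step function is rolled out over a short horizon to predict the trajectory that a candidate control sequence would produce.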
Related papers
- Learning a Stable, Safe, Distributed Feedback Controller for a Heterogeneous Platoon of Autonomous Vehicles [5.289123253466164]
We introduce an algorithm for learning a stable, safe, distributed controller for a heterogeneous platoon.
We train a controller for autonomous platooning in simulation and evaluate its performance on hardware with a platoon of four F1Tenth vehicles.
arXiv Detail & Related papers (2024-04-18T19:11:34Z) - Modelling, Positioning, and Deep Reinforcement Learning Path Tracking
Control of Scaled Robotic Vehicles: Design and Experimental Validation [3.807917169053206]
Scaled robotic cars are commonly equipped with a hierarchical control architecture that includes tasks dedicated to vehicle state estimation and control.
This paper covers both aspects by proposing (i) a federated extended Kalman filter (FEKF) and (ii) a novel deep reinforcement learning (DRL) path tracking controller trained via an expert demonstrator.
The experimentally validated model is used for (i) supporting the design of the FEKF and (ii) serving as a digital twin for training the proposed DRL-based path tracking algorithm.
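Any extended Kalman filter, federated or not, specializes the standard predict/update cycle. A minimal scalar sketch of that cycle (not the paper's FEKF; matrix shapes and noise values are illustrative) might look like:

```python
def kf_predict(x, P, u, A=1.0, B=1.0, Q=0.01):
    """Predict step: propagate the state estimate x and its variance P
    through the (linearized) dynamics x' = A*x + B*u."""
    x = A * x + B * u
    P = A * P * A + Q  # process noise Q inflates the uncertainty
    return x, P

def kf_update(x, P, z, H=1.0, R=0.1):
    """Update step: fuse a measurement z (with noise variance R)
    into the estimate, shrinking the variance."""
    K = P * H / (H * P * H + R)   # Kalman gain
    x = x + K * (z - H * x)       # correct toward the measurement
    P = (1.0 - K * H) * P
    return x, P
```

A federated variant runs several such filters on different sensor groups and fuses their estimates, which is what the FEKF above contributes for scaled vehicles.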
arXiv Detail & Related papers (2024-01-10T14:40:53Z) - Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for
Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insight is to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - Online Dynamics Learning for Predictive Control with an Application to
Aerial Robots [3.673994921516517]
Even though prediction models can be learned and applied to model-based controllers, these models are often learned offline.
In this offline setting, training data is first collected and a prediction model is learned through an elaborated training procedure.
We propose an online dynamics learning framework that continually improves the accuracy of the dynamic model during deployment.
arXiv Detail & Related papers (2022-07-19T15:51:25Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner: a neural network trained to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Reinforcement Learning with Action-Free Pre-Training from Videos [95.25074614579646]
We introduce a framework that learns representations useful for understanding the dynamics via generative pre-training on videos.
Our framework significantly improves both the final performance and the sample efficiency of vision-based reinforcement learning.
arXiv Detail & Related papers (2022-03-25T19:44:09Z) - Real-time Neural-MPC: Deep Learning Model Predictive Control for
Quadrotors and Agile Robotic Platforms [59.03426963238452]
We present Real-time Neural MPC, a framework to efficiently integrate large, complex neural network architectures as dynamics models within a model-predictive control pipeline.
We show the feasibility of our framework on real-world problems by reducing the positional tracking error by up to 82% when compared to state-of-the-art MPC approaches without neural network dynamics.
arXiv Detail & Related papers (2022-03-15T09:38:15Z) - Reinforcement Learning for Robust Parameterized Locomotion Control of
Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
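Domain randomization, as used in the entry above, simply resamples the simulator's physical parameters each training episode so the learned policy cannot overfit one dynamics model. A minimal sketch (parameter names and ranges are illustrative, not from the paper):

```python
import random

def randomized_dynamics_params(base_mass=10.0, base_friction=0.8):
    """Sample physical parameters for one training episode.

    Ranges are illustrative; real setups randomize many more
    quantities (motor gains, latencies, terrain, sensor noise, ...).
    """
    return {
        "mass": base_mass * random.uniform(0.8, 1.2),
        "friction": base_friction * random.uniform(0.5, 1.5),
    }
```

Each episode then instantiates the simulated robot with a fresh draw, so a single trained policy sees a whole family of dynamics during training.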
arXiv Detail & Related papers (2021-03-26T07:14:01Z) - Learning-based vs Model-free Adaptive Control of a MAV under Wind Gust [0.2770822269241973]
Navigation problems under unknown varying conditions are among the most important and well-studied problems in the control field.
Recent model-free adaptive control methods aim at removing the dependency on an accurate plant model by learning the physical characteristics of the plant directly from sensor feedback.
We propose a conceptually simple learning-based approach composed of a full state feedback controller, tuned robustly by a deep reinforcement learning framework.
arXiv Detail & Related papers (2021-01-29T10:13:56Z) - Learning a Contact-Adaptive Controller for Robust, Efficient Legged
Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment, while a low-level controller utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z) - Structured Mechanical Models for Robot Learning and Control [38.52004843488286]
Black-box neural networks suffer from data inefficiency and the difficulty of incorporating prior knowledge.
We introduce Structured Mechanical Models that are data-efficient, easily amenable to prior knowledge, and easily usable with model-based control techniques.
We demonstrate that they generalize better from limited data and yield more reliable model-based controllers on a variety of simulated robotic domains.
arXiv Detail & Related papers (2020-04-21T21:12:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.