TS-MPC for Autonomous Vehicle using a Learning Approach
- URL: http://arxiv.org/abs/2004.14362v1
- Date: Wed, 29 Apr 2020 17:42:33 GMT
- Title: TS-MPC for Autonomous Vehicle using a Learning Approach
- Authors: Eugenio Alcalá, Olivier Sename, Vicenç Puig, and Joseba Quevedo
- Abstract summary: We use a data-driven approach to learn a Takagi-Sugeno (TS) representation of the vehicle dynamics.
To address the TS modeling, we use the Adaptive Neuro-Fuzzy Inference System (ANFIS) approach.
The proposed controller is fed racing-oriented references from an external planner and state estimates from the MHE.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, Model Predictive Control (MPC) and Moving Horizon
Estimator (MHE) strategies are proposed to solve autonomous driving control
problems in real time, using a data-driven approach to learn a Takagi-Sugeno
(TS) representation of the vehicle dynamics. For the TS modeling, we use the
Adaptive Neuro-Fuzzy Inference System (ANFIS) approach to obtain a set of
polytopic linear representations together with a set of membership functions
that blend the linear subsystems in a non-linear way. The proposed controller
is fed racing-oriented references from an external planner and state estimates
from the MHE, delivering high driving performance in racing mode. The
control-estimation scheme is tested in a simulated racing environment to
demonstrate the potential of the presented approaches.
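For intuition, here is a minimal sketch of the TS formalism the paper builds on: the next state is a convex blend of local linear subsystems, weighted by membership functions of a scheduling variable. The matrices, Gaussian memberships, and scheduling-by-speed choice below are illustrative assumptions, not the authors' identified model.

```python
import numpy as np

# Minimal Takagi-Sugeno (TS) sketch: the next state is a convex blend of
# local linear subsystems x+ = A_i x + B_i u, weighted by membership
# functions mu_i(z) of a scheduling variable z (here: speed). All
# numbers below are illustrative, not the identified vehicle model.

A = [np.array([[1.0, 0.10], [0.0, 0.90]]),   # local model 1 (low speed)
     np.array([[1.0, 0.10], [0.0, 0.70]])]   # local model 2 (high speed)
B = [np.array([[0.0], [0.10]]),
     np.array([[0.0], [0.20]])]

def memberships(z, centers=(5.0, 15.0), sigma=5.0):
    """Gaussian memberships of z, normalized to sum to one."""
    w = np.exp(-0.5 * ((z - np.array(centers)) / sigma) ** 2)
    return w / w.sum()

def ts_step(x, u, z):
    """One TS prediction: convex combination of the local predictions."""
    return sum(m * (Ai @ x + Bi @ u)
               for m, Ai, Bi in zip(memberships(z), A, B))

x = np.array([0.0, 1.0])      # e.g., lateral error and yaw-rate error
u = np.array([0.05])          # e.g., steering command
print(ts_step(x, u, z=8.0))   # blended next-state prediction
```

In the paper's pipeline, ANFIS would learn both the local matrices and the membership functions from driving data; here they are hand-picked for brevity.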
Related papers
- A Tricycle Model to Accurately Control an Autonomous Racecar with Locked Differential [71.53284767149685]
We present a novel formulation to model the effects of a locked differential on the lateral dynamics of an autonomous open-wheel racecar.
We include a micro-step discretization approach to accurately linearize the dynamics and produce a prediction suitable for real-time implementation.
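As a rough illustration of the micro-step idea (our reading of the summary, not the paper's implementation), one can split a sampling interval into sub-steps, linearize at each sub-step along the nominal trajectory, and compose the sub-step matrices; `f`, `A_jac`, and `B_jac` are hypothetical placeholders for the racecar dynamics and its Jacobians.

```python
import numpy as np

# Hedged sketch of micro-step linearization: split one sampling period
# dt into n sub-steps, Euler-discretize the Jacobians at each sub-step
# along the nominal trajectory, and compose them. The returned Ad, Bd
# propagate deviations from the nominal trajectory over the full dt.

def micro_step_linearize(A_jac, B_jac, f, x, u, dt, n=10):
    h = dt / n
    Ad = np.eye(x.size)
    Bd = np.zeros((x.size, u.size))
    for _ in range(n):
        Ak = np.eye(x.size) + h * A_jac(x, u)   # sub-step state matrix
        Bk = h * B_jac(x, u)                    # sub-step input matrix
        Ad, Bd = Ak @ Ad, Ak @ Bd + Bk          # compose with earlier sub-steps
        x = x + h * f(x, u)                     # roll the nominal state forward
    return Ad, Bd

# Hypothetical toy dynamics (a pendulum-like system, purely for the demo):
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
A_jac = lambda x, u: np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])
B_jac = lambda x, u: np.array([[0.0], [1.0]])

Ad, Bd = micro_step_linearize(A_jac, B_jac, f, np.array([0.1, 0.0]),
                              np.array([0.0]), dt=0.1)
print(Ad, Bd, sep="\n")
```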
arXiv Detail & Related papers (2023-12-22T16:29:55Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers for behavioral planning, augmented with a safety-verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Reinforcement Learning with Model Predictive Control for Highway Ramp Metering [14.389086937116582]
This work explores the synergy between model-based and learning-based strategies to enhance traffic flow management.
The control problem is formulated as an RL task by crafting a suitable stage cost function.
An MPC-based RL approach, which uses the MPC optimization problem as a function approximator for the RL algorithm, is proposed to learn to control an on-ramp efficiently.
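A hedged sketch of how this family of methods typically works (a generic MPC-as-function-approximator construction, not this paper's code): the Q-function is the optimal cost of a short-horizon MPC problem whose stage cost carries a learnable parameter, updated with a TD-style rule. The scalar dynamics and brute-force "solver" below are illustrative stand-ins.

```python
import numpy as np
from itertools import product

def dynamics(x, u):              # toy scalar system standing in for ramp traffic
    return 0.9 * x + 0.5 * u

def stage_cost(x, u, theta):     # stage cost with a learnable weight theta
    return theta * x ** 2 + 0.1 * u ** 2

GRID = np.linspace(-1.0, 1.0, 5)  # crude discretized input set

def mpc_q(x, u0, theta, horizon=3):
    """Q(x, u0): cost of applying u0 now, then the cheapest remaining input
    sequence (brute-force enumeration stands in for a real MPC solver)."""
    def rollout(x, inputs):
        cost = 0.0
        for u in inputs:
            cost, x = cost + stage_cost(x, u, theta), dynamics(x, u)
        return cost
    return min(rollout(x, (u0, *tail)) for tail in product(GRID, repeat=horizon - 1))

# One semi-gradient Q-learning step on theta (finite-difference gradient):
theta, lr, x, u = 1.0, 1e-3, 2.0, -0.5
x_next = dynamics(x, u)
target = stage_cost(x, u, theta) + min(mpc_q(x_next, v, theta) for v in GRID)
eps = 1e-4
grad = (mpc_q(x, u, theta + eps) - mpc_q(x, u, theta - eps)) / (2 * eps)
theta -= lr * (mpc_q(x, u, theta) - target) * grad
print(theta)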
arXiv Detail & Related papers (2023-11-15T09:50:54Z)
- Model Predictive Control with Gaussian-Process-Supported Dynamical Constraints for Autonomous Vehicles [82.65261980827594]
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
A multi-mode predictive control approach considers the possible intentions of the human drivers.
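As a generic illustration of GP-supported prediction (not the paper's implementation), a Gaussian process fitted to logged human-driving data can supply both a mean prediction and an uncertainty estimate that the MPC can turn into a safety margin; the synthetic data and 2-sigma margin below are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Generic sketch: fit a GP to logged (state -> next lateral position)
# pairs from a human-driven car, then query mean and standard deviation
# so the MPC can widen its constraints by an uncertainty margin.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))                    # [lateral pos, speed]
y = 0.8 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * rng.standard_normal(50)

gp = GaussianProcessRegressor(RBF(length_scale=1.0) + WhiteKernel(1e-2))
gp.fit(X, y)

mean, std = gp.predict(np.array([[0.2, 0.5]]), return_std=True)
safety_margin = 2.0 * std[0]      # e.g., tighten a collision constraint
print(mean[0], safety_margin)
```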
arXiv Detail & Related papers (2023-03-08T17:14:57Z)
- Incorporating Recurrent Reinforcement Learning into Model Predictive Control for Adaptive Control in Autonomous Driving [11.67417895998434]
Model Predictive Control (MPC) is attracting tremendous attention in autonomous driving as a powerful control technique.
In this paper, we reformulate the problem as a Partially Observable Markov Decision Process (POMDP).
We then learn a recurrent policy continually adapting the parameters of the dynamics model via Recurrent Reinforcement Learning (RRL) for optimal and adaptive control.
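A minimal sketch of the recurrent-adaptation idea (our generic reading, not the paper's architecture): a GRU digests the recent observation history and emits updated dynamics-model parameters for the MPC to consume. The dimensions and the friction/mass interpretation are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Generic sketch of recurrent adaptation: a GRU summarizes the partial
# observation history and outputs dynamics-model parameters (read here
# as [friction, mass], purely as an illustrative assumption) that the
# MPC's prediction model would consume at the next control step.

class ParamAdapter(nn.Module):
    def __init__(self, obs_dim=6, hidden=32, n_params=2):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_params)

    def forward(self, obs_history):
        _, h = self.gru(obs_history)   # final hidden state summarizes history
        return self.head(h[-1])        # parameter estimates for the MPC model

adapter = ParamAdapter()
history = torch.randn(1, 20, 6)        # one 20-step observation history
print(adapter(history))                # e.g., tensor([[friction, mass]])
```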
arXiv Detail & Related papers (2023-01-30T22:11:07Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a compact module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Neural Lyapunov Differentiable Predictive Control [2.042924346801313]
We present a learning-based predictive control methodology using the differentiable programming framework with probabilistic Lyapunov-based stability guarantees.
Our approach jointly learns a Lyapunov function that certifies the regions of the state space with stable dynamics.
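A hedged sketch of how a Lyapunov certificate is commonly learned jointly (a generic construction, not this paper's exact probabilistic formulation): parameterize a positive-definite V and penalize states where it fails to decrease along the closed-loop dynamics.

```python
import numpy as np

# Generic sketch of a learned Lyapunov certificate: V(x) = x^T P x with
# P = L L^T + eps*I positive definite by construction, and a hinge loss
# that penalizes states where V fails to decrease along the closed-loop
# dynamics f. The dynamics and decrease rate 0.99 are illustrative.

def V(x, L, eps=1e-3):
    P = L @ L.T + eps * np.eye(L.shape[0])
    return x @ P @ x

def f(x):   # toy stable closed-loop dynamics (stand-in for the learned policy)
    return np.array([[0.9, 0.1], [0.0, 0.8]]) @ x

def lyapunov_loss(L, samples):
    """Sum of hinge penalties on the decrease condition V(f(x)) < 0.99 V(x)."""
    return sum(max(0.0, V(f(x), L) - 0.99 * V(x, L)) for x in samples)

rng = np.random.default_rng(1)
L0 = rng.standard_normal((2, 2))
samples = [rng.uniform(-1, 1, 2) for _ in range(100)]
print(lyapunov_loss(L0, samples))   # a gradient method would drive this to zero
```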
arXiv Detail & Related papers (2022-05-22T03:52:27Z)
- UMBRELLA: Uncertainty-Aware Model-Based Offline Reinforcement Learning Leveraging Planning [1.1339580074756188]
Offline reinforcement learning (RL) provides a framework for learning decision-making from offline data.
Self-driving vehicles (SDVs) can learn a policy that potentially even outperforms the behavior in the sub-optimal data set.
This motivates the use of model-based offline RL approaches, which leverage planning.
arXiv Detail & Related papers (2021-11-22T10:37:52Z)
- Evaluating model-based planning and planner amortization for continuous control [79.49319308600228]
We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning.
We find that well-tuned model-free agents are strong baselines even for high DoF control problems.
We show that it is possible to distil a model-based planner into a policy that amortizes the planning without any loss of performance.
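A minimal sketch of planner amortization via distillation (a generic recipe under our assumptions, not the paper's setup): log (state, planner-action) pairs from the MPC and fit a fast policy network to imitate them; the linear "planner" below is a hypothetical stand-in.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Generic distillation recipe: collect (state, planner-action) pairs
# from a slow model-based planner, then fit a fast policy network to
# imitate them. The linear "planner" output is a hypothetical stand-in
# for logged MPC actions; the data here is synthetic.

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(2000, 4))
planner_actions = -0.5 * states[:, 0] + 0.1 * states[:, 1]  # stand-in for MPC output

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
policy.fit(states, planner_actions)     # supervised distillation

# At run time the policy replaces the planner at a fraction of the cost:
print(policy.predict(states[:1]))
```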
arXiv Detail & Related papers (2021-10-07T12:00:40Z)
- Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles [1.7033108359337459]
The proposed control algorithm combines a conventional control method with reinforcement learning to enhance control accuracy and intelligence.
Thanks to reinforcement learning, the overall tracking controller is capable of compensating for model uncertainties and achieving collision avoidance.
arXiv Detail & Related papers (2020-08-17T12:15:15Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of Control as Hybrid Inference (CHI) which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.