Self-adaptive Torque Vectoring Controller Using Reinforcement Learning
- URL: http://arxiv.org/abs/2103.14892v1
- Date: Sat, 27 Mar 2021 12:39:56 GMT
- Title: Self-adaptive Torque Vectoring Controller Using Reinforcement Learning
- Authors: Shayan Taherian, Sampo Kuutti, Marco Visca and Saber Fallah
- Abstract summary: Continuous direct yaw moment control systems such as torque-vectoring controllers are an essential part of vehicle stabilization.
Careful tuning of the parameters of a torque-vectoring controller can significantly enhance the vehicle's performance and stability.
The utility of Reinforcement Learning (RL) based on the Deep Deterministic Policy Gradient (DDPG) as a parameter-tuning algorithm for a torque-vectoring controller is presented.
- Score: 6.8390297905731625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous direct yaw moment control systems such as the torque-vectoring controller are an essential part of vehicle stabilization. This controller has been extensively researched with the central objective of maintaining vehicle stability by providing a consistent, stable cornering response. Careful tuning of the parameters of a torque-vectoring controller can significantly enhance the vehicle's performance and stability. However, without re-tuning of these parameters, the vehicle fails to maintain stability in extreme driving conditions, e.g., on a low-friction surface or at high velocity. In this paper, the utility of Reinforcement Learning (RL) based on the Deep Deterministic Policy Gradient (DDPG) as a parameter-tuning algorithm for a torque-vectoring controller is presented. It is shown that a torque-vectoring controller with parameter tuning via reinforcement learning performs well across a range of driving environments, e.g., a wide range of friction conditions and different velocities, which highlights the advantages of reinforcement learning as an adaptive algorithm for parameter tuning. Moreover, the robustness of the DDPG algorithm is validated under scenarios beyond the training environment of the reinforcement learning algorithm. The simulation has been carried out using a four-wheel vehicle model with nonlinear tire characteristics. We compare our DDPG-based parameter tuning against a genetic algorithm and conventional trial-and-error tuning of the torque-vectoring controller, and the results demonstrate that the reinforcement-learning-based parameter tuning significantly improves the stability of the vehicle.
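To make the approach concrete, below is a minimal, self-contained sketch of the idea, not the authors' implementation: a DDPG agent whose action is the (normalized) gain of a proportional yaw-moment law, trained on a deliberately crude one-state plant in which road friction limits the achievable corrective moment. The plant, reward, gain range, and all hyperparameters are illustrative assumptions; the paper itself uses a four-wheel vehicle model with nonlinear tire characteristics.

```python
# Hedged sketch of DDPG-based gain tuning for a torque-vectoring loop.
# Everything here (plant, reward, ranges) is illustrative, not from the paper.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

def mlp(sizes, out_act=nn.Identity):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]),
                   nn.ReLU() if i < len(sizes) - 2 else out_act()]
    return nn.Sequential(*layers)

class TorqueVectoringEnv:
    """Toy 1-state plant: yaw-rate tracking error under a P-type yaw-moment
    law; road friction mu caps the achievable corrective moment."""
    def __init__(self, mu=0.9):
        self.mu, self.e = mu, 0.0

    def reset(self):
        self.e = float(np.random.uniform(-1.0, 1.0))
        return np.array([self.e], dtype=np.float32)

    def step(self, gain):
        Mz = float(np.clip(gain * self.e, -5.0 * self.mu, 5.0 * self.mu))
        self.e += 0.01 * (0.3 * self.e - Mz)   # mildly unstable open loop
        return np.array([self.e], dtype=np.float32), -self.e ** 2

actor = mlp([1, 64, 64, 1], out_act=nn.Sigmoid)   # state -> gain in (0, 1)
critic = mlp([2, 64, 64, 1])                      # (state, action) -> Q
actor_t = mlp([1, 64, 64, 1], out_act=nn.Sigmoid)
critic_t = mlp([2, 64, 64, 1])
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buf, gamma, tau, env = deque(maxlen=100_000), 0.99, 0.005, TorqueVectoringEnv()
s = env.reset()

for step in range(20_000):
    if step % 200 == 0:                        # new episode, new friction
        env.mu = float(np.random.uniform(0.3, 1.0))
        s = env.reset()
    with torch.no_grad():
        a = actor(torch.as_tensor(s)).numpy()
    a = np.clip(a + 0.1 * np.random.randn(1), 0.0, 1.0).astype(np.float32)
    s2, r = env.step(10.0 * float(a[0]))       # map (0, 1) to a physical gain
    buf.append((s, a, np.float32(r), s2))
    s = s2
    if len(buf) >= 256:                        # standard DDPG updates
        S, A, R, S2 = (torch.as_tensor(np.array(x))
                       for x in zip(*random.sample(buf, 256)))
        with torch.no_grad():
            y = R.unsqueeze(1) + gamma * critic_t(torch.cat([S2, actor_t(S2)], 1))
        q_loss = ((critic(torch.cat([S, A], 1)) - y) ** 2).mean()
        opt_c.zero_grad(); q_loss.backward(); opt_c.step()
        a_loss = -critic(torch.cat([S, actor(S)], 1)).mean()
        opt_a.zero_grad(); a_loss.backward(); opt_a.step()
        for net, tgt in ((actor, actor_t), (critic, critic_t)):  # soft update
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.data.mul_(1.0 - tau).add_(tau * p.data)
```

The essential design choice mirrored from the abstract is that the action space is the controller parameter itself, so the learned policy acts as an adaptive gain schedule over operating conditions rather than as a direct torque command.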
Related papers
- PID Tuning using Cross-Entropy Deep Learning: a Lyapunov Stability Analysis [1.2499537119440245]
This work proposes experiments and metrics to empirically study the stability of such a controller.
We perform this stability analysis on a learning-based (LB) adaptive control system whose adaptive parameters are determined using a Cross-Entropy Deep Learning method.
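As a rough illustration of the underlying search, the sketch below runs a plain cross-entropy method over PID gains on a made-up first-order plant; it deliberately omits the paper's deep-learning component and Lyapunov analysis, and every number in it is an assumption.

```python
# Schematic cross-entropy search over PID gains (illustrative only).
import numpy as np

def closed_loop_cost(gains, setpoint=1.0, dt=0.01, steps=500):
    """Integrated squared tracking error of a toy plant y' = -y + u."""
    kp, ki, kd = gains
    y, integ, prev_e, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u)
        cost += e ** 2 * dt
    return cost

mu, sigma = np.array([1.0, 0.5, 0.1]), np.ones(3)   # initial gain distribution
for _ in range(30):
    samples = np.abs(mu + sigma * np.random.randn(64, 3))  # keep gains >= 0
    costs = np.array([closed_loop_cost(g) for g in samples])
    elite = samples[np.argsort(costs)[:8]]              # refit to best 8 of 64
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
print("tuned PID gains (kp, ki, kd):", mu)
```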
arXiv Detail & Related papers (2024-04-18T09:22:08Z)
- A Tricycle Model to Accurately Control an Autonomous Racecar with Locked Differential [71.53284767149685]
We present a novel formulation to model the effects of a locked differential on the lateral dynamics of an autonomous open-wheel racecar.
We include a micro-steps discretization approach to accurately linearize the dynamics and produce a prediction suitable for real-time implementation.
arXiv Detail & Related papers (2023-12-22T16:29:55Z)
- RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [51.244807332133696]
This paper introduces RACER, a cutting-edge deep learning car-following model to predict Adaptive Cruise Control (ACC) driving behavior.
Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving.
RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
arXiv Detail & Related papers (2023-12-12T06:21:30Z)
- Partial End-to-end Reinforcement Learning for Robustness Against Modelling Error in Autonomous Racing [0.0]
This paper addresses the problem of improving the performance of reinforcement learning (RL) solutions for autonomous racing cars.
We propose a partial end-to-end algorithm that decouples the planning and control tasks.
By leveraging the robustness of a classical controller, our partial end-to-end driving algorithm exhibits better robustness towards model mismatches than standard end-to-end algorithms.
arXiv Detail & Related papers (2023-12-11T14:27:10Z)
- Real-Time Model-Free Deep Reinforcement Learning for Force Control of a Series Elastic Actuator [56.11574814802912]
State-of-the-art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation.
Model-free PID control methods are more prone to instability due to nonlinearities in the SEA.
Deep reinforcement learning has proved to be an effective model-free method for continuous control tasks.
arXiv Detail & Related papers (2023-04-11T00:51:47Z)
- Designing a Robust Low-Level Agnostic Controller for a Quadrotor with Actor-Critic Reinforcement Learning [0.38073142980732994]
We introduce domain randomization during the training phase of a low-level waypoint guidance controller based on Soft Actor-Critic.
We show that, by introducing a certain degree of uncertainty in the quadrotor dynamics during training, we can obtain a controller capable of performing the proposed task under a larger variation of quadrotor parameters.
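The randomization step itself is simple to express; here is a minimal sketch with hypothetical parameter names and ranges (the paper's actual parameters and bounds may differ):

```python
# Domain randomization sketch: each episode trains against a quadrotor
# model with perturbed physical parameters. Names/ranges are assumptions.
import numpy as np

NOMINAL = {"mass": 1.2, "arm_length": 0.2, "thrust_coeff": 8e-6}

def randomized_params(spread=0.2, rng=np.random):
    """Scale each nominal parameter by a factor in [1-spread, 1+spread]."""
    return {k: v * rng.uniform(1.0 - spread, 1.0 + spread)
            for k, v in NOMINAL.items()}

for episode in range(3):
    params = randomized_params()
    # env = QuadrotorEnv(**params)  # hypothetical environment constructor
    # ... run one Soft Actor-Critic training episode against this model ...
    print(episode, params)
```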
arXiv Detail & Related papers (2022-10-06T14:58:19Z)
- Performance-Driven Controller Tuning via Derivative-Free Reinforcement Learning [6.5158195776494]
We tackle the controller tuning problem using a novel derivative-free reinforcement learning framework.
We conduct numerical experiments on two concrete examples from autonomous driving, namely, adaptive cruise control with PID controller and trajectory tracking with MPC controller.
Experimental results show that the proposed method outperforms popular baselines and highlight its strong potential for controller tuning.
arXiv Detail & Related papers (2022-09-11T13:01:14Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles [1.7033108359337459]
The proposed control algorithm combines a conventional control method with reinforcement learning to enhance control accuracy and intelligence.
Thanks to reinforcement learning, the overall tracking controller is capable of compensating for model uncertainties and achieving collision avoidance.
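The composite structure this summary describes can be sketched as a baseline tracking law plus a learned residual correction; the controller below is illustrative, not taken from the paper.

```python
# Sketch: conventional baseline command plus an RL-learned correction.
import numpy as np

def baseline_controller(state, reference, kp=2.0, kd=0.5):
    """PD tracking law on position error and its rate (gains assumed)."""
    e, e_dot = reference - state[0], -state[1]
    return kp * e + kd * e_dot

def total_command(state, reference, rl_policy):
    u_nominal = baseline_controller(state, reference)
    u_learned = rl_policy(state)   # trained to offset model uncertainty
    return u_nominal + u_learned

# Usage with an untrained placeholder policy:
print(total_command(np.array([0.0, 0.0]), 1.0, rl_policy=lambda s: 0.0))
```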
arXiv Detail & Related papers (2020-08-17T12:15:15Z)
- Tracking Performance of Online Stochastic Learners [57.14673504239551]
Online algorithms are popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
When a constant step-size is used, these algorithms also have the ability to adapt to drifts in problem parameters, such as data or model properties, and track the optimal solution with reasonable accuracy.
We establish a link between steady-state performance derived under stationarity assumptions and the tracking performance of online learners under random walk models.
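A toy version of this tracking regime, assuming an LMS-style learner with a constant step size chasing a random-walk optimum (all numbers illustrative):

```python
# Constant step-size learner tracking a drifting optimum w*.
import numpy as np

rng = np.random.default_rng(0)
w_star, w = np.zeros(5), np.zeros(5)
mu_step, drift = 0.05, 0.005
errors = []
for t in range(5000):
    w_star += drift * rng.standard_normal(5)   # random-walk drift
    x = rng.standard_normal(5)                 # streaming regressor
    y = x @ w_star + 0.01 * rng.standard_normal()
    w += mu_step * (y - x @ w) * x             # LMS update, constant step
    errors.append(np.sum((w - w_star) ** 2))
print("mean squared deviation, last 1000 steps:", np.mean(errors[-1000:]))
```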
arXiv Detail & Related papers (2020-04-04T14:16:27Z)