A Learning-Based Tune-Free Control Framework for Large Scale Autonomous
Driving System Deployment
- URL: http://arxiv.org/abs/2011.04250v1
- Date: Mon, 9 Nov 2020 08:54:36 GMT
- Title: A Learning-Based Tune-Free Control Framework for Large Scale Autonomous
Driving System Deployment
- Authors: Yu Wang, Shu Jiang, Weiman Lin, Yu Cao, Longtao Lin, Jiangtao Hu,
Jinghao Miao and Qi Luo
- Abstract summary: The framework consists of three machine-learning-based procedures, which jointly automate the control parameter tuning for autonomous driving.
The paper shows an improvement in control performance with a significant increase in parameter tuning efficiency, in both simulation and road tests.
- Score: 5.296964852594282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the design of a tune-free (human-out-of-the-loop
parameter tuning) control framework, aiming to accelerate the deployment of
large-scale autonomous driving systems across various vehicles and driving
environments. The framework consists of three machine-learning-based
procedures, which jointly automate the control parameter tuning for autonomous
driving, including: a learning-based dynamic modeling procedure, to enable the
control-in-the-loop simulation with highly accurate vehicle dynamics for
parameter tuning; a learning-based open-loop mapping procedure, to solve the
feedforward control parameters tuning; and more significantly, a
Bayesian-optimization-based closed-loop parameter tuning procedure, to
automatically tune feedback control (PID, LQR, MRAC, MPC, etc.) parameters in a
simulation environment. The paper shows an improvement in control performance,
together with a significant increase in parameter-tuning efficiency, in both
simulation and road tests. The framework has been validated on different
vehicles in the US and China.
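The closed-loop tuning procedure can be illustrated with a minimal sketch. The paper uses Bayesian optimization over feedback-control parameters evaluated in a control-in-the-loop simulator; the stand-in below substitutes plain random search and a toy first-order plant under PID control, so the function names (`simulate_step_response`, `tune`), the plant model, and the gain ranges are all illustrative assumptions, not the authors' implementation.

```python
import random

def simulate_step_response(kp, ki, kd, steps=200, dt=0.05):
    """Closed-loop cost of a PID controller tracking a unit step on a
    toy first-order plant v' = (u - v) / tau (illustrative model only)."""
    tau = 0.5
    v, integ, prev_err = 0.0, 0.0, 1.0
    cost = 0.0
    for _ in range(steps):
        err = 1.0 - v
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # PID control law
        prev_err = err
        v += dt * (u - v) / tau                  # Euler step of the plant
        cost += err * err * dt                   # integrated squared tracking error
    return cost

def tune(n_trials=300, seed=0):
    """Random-search stand-in for the Bayesian-optimization tuning loop:
    sample gains, score them in the closed-loop simulation, keep the best."""
    rng = random.Random(seed)
    best = (float("inf"), None)
    for _ in range(n_trials):
        gains = (rng.uniform(0.1, 5.0),   # kp
                 rng.uniform(0.0, 2.0),   # ki
                 rng.uniform(0.0, 0.5))   # kd
        cost = simulate_step_response(*gains)
        if cost < best[0]:
            best = (cost, gains)
    return best
```

A Bayesian optimizer would replace the random sampler with a surrogate model that proposes the next candidate gains, but the evaluate-in-simulation loop has the same shape.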
Related papers
- Autotuning Bipedal Locomotion MPC with GRFM-Net for Efficient Sim-to-Real Transfer [10.52309107195141]
We address the challenges of parameter selection in bipedal locomotion control using DiffTune.
A major difficulty lies in balancing model fidelity with differentiability.
We validate the parameters learned by DiffTune with GRFM-Net in hardware experiments.
arXiv Detail & Related papers (2024-09-24T03:58:18Z)
- A Tricycle Model to Accurately Control an Autonomous Racecar with Locked Differential [71.53284767149685]
We present a novel formulation to model the effects of a locked differential on the lateral dynamics of an autonomous open-wheel racecar.
We include a micro-steps discretization approach to accurately linearize the dynamics and produce a prediction suitable for real-time implementation.
arXiv Detail & Related papers (2023-12-22T16:29:55Z)
- Tuning Legged Locomotion Controllers via Safe Bayesian Optimization [47.87675010450171]
This paper presents a data-driven strategy to streamline the deployment of model-based controllers in legged robotic hardware platforms.
We leverage a model-free safe learning algorithm to automate the tuning of control gains, addressing the mismatch between the simplified model used in the control formulation and the real system.
arXiv Detail & Related papers (2023-06-12T13:10:14Z)
- AutoRL Hyperparameter Landscapes [69.15927869840918]
Reinforcement Learning (RL) has been shown to be capable of producing impressive results, but its use is limited by the impact of its hyperparameters on performance.
We propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training.
This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for more insights on AutoRL problems that can be gained through landscape analyses.
arXiv Detail & Related papers (2023-04-05T12:14:41Z)
- Performance-Driven Controller Tuning via Derivative-Free Reinforcement Learning [6.5158195776494]
We tackle the controller tuning problem using a novel derivative-free reinforcement learning framework.
We conduct numerical experiments on two concrete examples from autonomous driving, namely, adaptive cruise control with PID controller and trajectory tracking with MPC controller.
Experimental results show that the proposed method outperforms popular baselines and highlight its strong potential for controller tuning.
arXiv Detail & Related papers (2022-09-11T13:01:14Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Bayesian Optimization Meets Hybrid Zero Dynamics: Safe Parameter Learning for Bipedal Locomotion Control [17.37169551675587]
We propose a multi-domain control parameter learning framework for locomotion control of bipedal robots.
We leverage BO to learn the control parameters used in the HZD-based controller.
Next, the learning process is applied on the physical robot to learn corrections to the control parameters learned in simulation.
arXiv Detail & Related papers (2022-03-04T20:48:17Z)
- Policy Search for Model Predictive Control with Application to Agile Drone Flight [56.24908013905407]
We propose a policy-search framework for model predictive control (MPC).
Specifically, we formulate the MPC as a parameterized controller, where the hard-to-optimize decision variables are represented as high-level policies.
Experiments show that our controller achieves robust and real-time control performance in both simulation and the real world.
arXiv Detail & Related papers (2021-12-07T17:39:24Z)
- Automated Controller Calibration by Kalman Filtering [2.2237337682863125]
The proposed method can be applied to a wide range of controllers.
The method tunes the parameters online and robustly, is computationally efficient, has low data storage requirements, and is easy to implement.
A simulation study with the high-fidelity vehicle simulator CarSim shows that the method can calibrate controllers of a complex dynamical system online.
arXiv Detail & Related papers (2021-11-21T14:57:11Z)
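The Kalman-filtering calibration idea in the last entry can be sketched in a few lines: treat an unknown plant or controller parameter as a random-walk state and update its estimate online with a scalar Kalman filter driven by closed-loop measurements. The model below, a single unknown gain theta in y = theta * u + noise, and all names and noise values are simplified assumptions for illustration, not the paper's actual formulation.

```python
import random

def kalman_calibrate(true_gain=2.0, n=500, q=1e-5, r=0.09, seed=1):
    """Estimate an unknown gain theta in y = theta*u + noise online.
    theta is modeled as a random-walk state (process noise q) observed
    through H = u with measurement noise variance r."""
    rng = random.Random(seed)
    theta, p = 0.0, 1.0                        # estimate and its variance
    for _ in range(n):
        u = rng.uniform(-1.0, 1.0)             # excitation input
        y = true_gain * u + rng.gauss(0.0, 0.3)  # noisy measurement
        p += q                                 # predict: random-walk drift
        k = p * u / (u * u * p + r)            # Kalman gain for H = u
        theta += k * (y - theta * u)           # measurement update
        p *= (1.0 - k * u)                     # variance update
    return theta
```

Because the parameter enters the measurement model linearly here, the filter is exact; the paper's setting, calibrating parameters of a full controller, would require the same recursion on a vector state with a linearized measurement model.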
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.