AutoTune: Controller Tuning for High-Speed Flight
- URL: http://arxiv.org/abs/2103.10698v1
- Date: Fri, 19 Mar 2021 09:12:51 GMT
- Title: AutoTune: Controller Tuning for High-Speed Flight
- Authors: Antonio Loquercio, Alessandro Saviolo, Davide Scaramuzza
- Abstract summary: How sensitive are controllers to tuning when tracking high-speed maneuvers?
What algorithms can we use to automatically tune them?
We propose AutoTune, a sampling-based tuning algorithm specifically tailored to high-speed flight.
- Score: 117.69289575486246
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to noisy actuation and external disturbances, tuning controllers for
high-speed flight is very challenging. In this paper, we ask the following
questions: How sensitive are controllers to tuning when tracking high-speed
maneuvers? What algorithms can we use to automatically tune them? To answer the
first question, we study the relationship between parameters and performance
and find that the faster the maneuver, the more sensitive a controller
becomes to its parameters. To answer the second question, we review existing
methods for controller tuning and discover that prior works often perform
poorly on the task of high-speed flight. Therefore, we propose AutoTune, a
sampling-based tuning algorithm specifically tailored to high-speed flight. In
contrast to previous work, our algorithm does not assume any prior knowledge of
the drone or its optimization function and can deal with the multi-modal
characteristics of the parameters' optimization space. We thoroughly evaluate
AutoTune both in simulation and in the physical world. In our experiments, we
outperform existing tuning algorithms by up to 90% in trajectory completion.
The resulting controllers are tested in the AirSim Game of Drones competition,
where we outperform the winner by up to 25% in lap-time. Finally, we show that
AutoTune reduces tracking error when flying a physical platform relative
to parameters tuned by a human expert.
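The abstract describes AutoTune only as a sampling-based, gradient-free tuner that handles a multi-modal optimization landscape. A minimal sketch of that idea, assuming a Metropolis-Hastings-style random walk over controller parameters (the function names, step size, and temperature here are illustrative, not the paper's actual algorithm or values):

```python
import numpy as np

def tune(evaluate, theta0, iters=1000, step=0.3, temperature=0.01, seed=0):
    """Sampling-based controller tuning (random-walk accept/reject sketch).

    `evaluate(theta)` is a hypothetical black box that simulates or flies a
    maneuver with controller parameters `theta` and returns a scalar cost,
    e.g. accumulated tracking error. No model of the drone and no gradient
    of the cost is assumed; the probabilistic accept rule lets the search
    escape local minima in a multi-modal cost landscape.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    cost = evaluate(theta)
    best_theta, best_cost = theta.copy(), cost
    for _ in range(iters):
        proposal = theta + rng.normal(scale=step, size=theta.shape)
        prop_cost = evaluate(proposal)
        # Always accept improvements; accept worse samples with a
        # probability that decays with how much worse they are.
        if prop_cost < cost or rng.random() < np.exp((cost - prop_cost) / temperature):
            theta, cost = proposal, prop_cost
        if cost < best_cost:
            best_theta, best_cost = theta.copy(), cost
    return best_theta, best_cost
```

On a real platform `evaluate` would be one trajectory rollout per call, so the loop's sample budget, not its arithmetic, dominates the tuning cost.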
Related papers
- Autotuning Bipedal Locomotion MPC with GRFM-Net for Efficient Sim-to-Real Transfer [10.52309107195141]
We address the challenges of parameter selection in bipedal locomotion control using DiffTune.
A major difficulty lies in balancing model fidelity with differentiability.
We validate the parameters learned by DiffTune with GRFM-Net in hardware experiments.
arXiv Detail & Related papers (2024-09-24T03:58:18Z)
- Reaching the Limit in Autonomous Racing: Optimal Control versus Reinforcement Learning [66.10854214036605]
A central question in robotics is how to design a control system for an agile mobile robot.
We show that a neural network controller trained with reinforcement learning (RL) outperformed optimal control (OC) methods in this setting.
Our findings allowed us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 times the gravitational acceleration and a peak velocity of 108 kilometers per hour.
arXiv Detail & Related papers (2023-10-17T02:40:27Z)
- PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer [94.23904400441957]
We introduce perturbation-based regularizers, which can smooth the loss landscape, into prompt tuning.
We design two kinds of perturbation-based regularizers, including random-noise-based and adversarial-based.
Our new algorithms improve the state-of-the-art prompt tuning methods by 1.94% and 2.34% on SuperGLUE and FewGLUE benchmarks, respectively.
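The random-noise variant of the perturbation-based regularizer can be sketched as averaging the loss over Gaussian perturbations of the tuned parameters, which smooths sharp curvature in the loss landscape. This toy numpy version is an assumption about the general technique, not the paper's implementation:

```python
import numpy as np

def smoothed_loss(loss, theta, sigma=0.1, n_samples=16, seed=0):
    """Random-noise perturbation regularizer (toy sketch).

    Replaces L(theta) with a Monte-Carlo estimate of E[L(theta + delta)],
    delta ~ N(0, sigma^2). Optimizing the smoothed objective penalizes
    parameter regions where the loss changes sharply, stabilizing tuning.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples,) + np.shape(theta))
    return float(np.mean([loss(theta + d) for d in noise]))
```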
arXiv Detail & Related papers (2023-05-03T20:30:51Z)
- Hyper-Parameter Auto-Tuning for Sparse Bayesian Learning [72.83293818245978]
We design and learn a neural network (NN)-based auto-tuner for hyper-parameter tuning in sparse Bayesian learning.
We show that considerable improvement in convergence rate and recovery performance can be achieved.
arXiv Detail & Related papers (2022-11-09T12:34:59Z)
- Designing a Robust Low-Level Agnostic Controller for a Quadrotor with Actor-Critic Reinforcement Learning [0.38073142980732994]
We introduce domain randomization during the training phase of a low-level waypoint guidance controller based on Soft Actor-Critic.
We show that, by introducing a certain degree of uncertainty in quadrotor dynamics during training, we can obtain a controller that is capable of performing the proposed task under a larger variation of quadrotor parameters.
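Domain randomization as described above amounts to resampling the simulated quadrotor's physical parameters each training episode. A minimal sketch, with illustrative parameter names and ranges (not the paper's actual values):

```python
import random

# Nominal quadrotor parameters (illustrative values only).
NOMINAL = {"mass_kg": 1.0, "arm_length_m": 0.17, "motor_gain": 1.0}

def randomized_dynamics(spread=0.2, rng=random):
    """Return a perturbed copy of the nominal dynamics parameters.

    Each parameter is scaled by a factor drawn uniformly from
    [1 - spread, 1 + spread], so a policy trained across episodes must
    cope with the whole range rather than a single fixed model.
    """
    return {k: v * rng.uniform(1.0 - spread, 1.0 + spread)
            for k, v in NOMINAL.items()}
```

In a training loop, one would call `randomized_dynamics()` at every episode reset and build the simulator from the sampled values.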
arXiv Detail & Related papers (2022-10-06T14:58:19Z)
- Performance-Driven Controller Tuning via Derivative-Free Reinforcement Learning [6.5158195776494]
We tackle the controller tuning problem using a novel derivative-free reinforcement learning framework.
We conduct numerical experiments on two concrete examples from autonomous driving, namely, adaptive cruise control with PID controller and trajectory tracking with MPC controller.
Experimental results show that the proposed method outperforms popular baselines and highlight its strong potential for controller tuning.
arXiv Detail & Related papers (2022-09-11T13:01:14Z)
- On Controller Tuning with Time-Varying Bayesian Optimization [74.57758188038375]
We use time-varying Bayesian optimization (TVBO) to tune controllers online in changing environments, exploiting appropriate prior knowledge of the control objective and its changes.
We propose a novel TVBO strategy using Uncertainty-Injection (UI), which incorporates the assumption of incremental and lasting changes.
Our model outperforms the state-of-the-art method in TVBO, exhibiting reduced regret and fewer unstable parameter configurations.
arXiv Detail & Related papers (2022-07-22T14:54:13Z)
- An Adaptive PID Autotuner for Multicopters with Experimental Results [0.0]
The autotuner consists of adaptive digital control laws based on retrospective cost adaptive control implemented in the PX4 flight stack.
It is observed that the autotuned autopilot outperforms the default autopilot.
arXiv Detail & Related papers (2021-09-27T04:59:48Z)
- Amortized Auto-Tuning: Cost-Efficient Transfer Optimization for Hyperparameter Recommendation [83.85021205445662]
We propose an instantiation, amortized auto-tuning (AT2), to speed up tuning of machine learning models.
We conduct a thorough analysis of the multi-task multi-fidelity Bayesian optimization framework, which leads to the best instantiation, amortized auto-tuning (AT2).
arXiv Detail & Related papers (2021-06-17T00:01:18Z)
- Using hardware performance counters to speed up autotuning convergence on GPUs [0.0]
We introduce a novel method for searching tuning spaces.
The method takes advantage of collecting hardware performance counters during empirical tuning.
We experimentally demonstrate that our method can speed up autotuning when an application needs to be ported to different hardware or when it needs to process data with different characteristics.
arXiv Detail & Related papers (2021-02-10T07:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.