DATT: Deep Adaptive Trajectory Tracking for Quadrotor Control
- URL: http://arxiv.org/abs/2310.09053v3
- Date: Wed, 13 Dec 2023 09:46:25 GMT
- Title: DATT: Deep Adaptive Trajectory Tracking for Quadrotor Control
- Authors: Kevin Huang, Rwik Rana, Alexander Spitzer, Guanya Shi, Byron Boots
- Abstract summary: Deep Adaptive Trajectory Tracking (DATT) is a learning-based approach that can precisely track arbitrary, potentially infeasible trajectories in the presence of large disturbances in the real world.
DATT significantly outperforms competitive adaptive nonlinear and model predictive controllers for both feasible smooth and infeasible trajectories in unsteady wind fields.
It can efficiently run online with an inference time of less than 3.2 ms, less than 1/4 that of the adaptive nonlinear model predictive control baseline.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Precise arbitrary trajectory tracking for quadrotors is challenging due to
unknown nonlinear dynamics, trajectory infeasibility, and actuation limits. To
tackle these challenges, we present Deep Adaptive Trajectory Tracking (DATT), a
learning-based approach that can precisely track arbitrary, potentially
infeasible trajectories in the presence of large disturbances in the real
world. DATT builds on a novel feedforward-feedback-adaptive control structure
trained in simulation using reinforcement learning. When deployed on real
hardware, DATT is augmented with a disturbance estimator using L1 adaptive
control in closed-loop, without any fine-tuning. DATT significantly outperforms
competitive adaptive nonlinear and model predictive controllers for both
feasible smooth and infeasible trajectories in unsteady wind fields, including
challenging scenarios where baselines completely fail. Moreover, DATT can
efficiently run online with an inference time of less than 3.2 ms, less than 1/4
that of the adaptive nonlinear model predictive control baseline.
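The closed-loop disturbance estimation described above can be illustrated with a minimal 1-D sketch of an L1-style adaptive estimator; the scalar plant, gains, and simplified proportional adaptation law below are assumptions for illustration, not the paper's implementation:

```python
# Simplified L1-style disturbance estimator on a 1-D unit-mass plant.
# A state predictor runs the nominal model plus the current disturbance
# estimate; adaptation drives the prediction error to zero; a low-pass
# filter produces the estimate fed back to the controller.

def simulate_l1_estimator(d_true=0.5, mass=1.0, dt=1e-3, steps=5000,
                          a_s=-4.0, gamma=25.0, k_lp=10.0):
    v = v_hat = d_hat = d_lp = 0.0
    u = 0.0  # zero control input; we only estimate the disturbance here
    for _ in range(steps):
        err = v_hat - v                                  # prediction error
        v += dt * (u + d_true) / mass                    # true plant
        v_hat += dt * ((u + d_hat) / mass + a_s * err)   # state predictor
        d_hat += dt * (-gamma * err)                     # adaptation law
        d_lp += dt * k_lp * (d_hat - d_lp)               # low-pass filter
    return d_lp

# After a few seconds the filtered estimate converges near d_true.
```

The Hurwitz feedback gain `a_s` keeps the predictor stable, and the low-pass stage decouples fast adaptation from the bandwidth of the control channel, which is the usual motivation for the L1 architecture.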
Related papers
- Custom Non-Linear Model Predictive Control for Obstacle Avoidance in Indoor and Outdoor Environments [0.0]
This paper introduces a Non-linear Model Predictive Control (NMPC) framework for the DJI Matrice 100.
The framework supports various trajectory types and employs a penalty-based cost function for control accuracy in tight maneuvers.
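A penalty-based NMPC cost of the kind the entry describes can be sketched as quadratic tracking and effort terms plus a soft obstacle penalty; the weights, safety radius, obstacle model, and function names below are assumptions for illustration:

```python
# Hypothetical penalty-based NMPC stage cost: tracking error + input
# effort + a soft penalty that activates inside a safety radius.

def nmpc_stage_cost(p, p_ref, u, obstacles, w_track=1.0, w_u=0.1,
                    w_obs=100.0, r_safe=0.5):
    cost = w_track * sum((a - b) ** 2 for a, b in zip(p, p_ref))
    cost += w_u * sum(ui ** 2 for ui in u)
    for obs in obstacles:
        d = sum((a - b) ** 2 for a, b in zip(p, obs)) ** 0.5
        # Quadratic soft penalty once the position enters the safety radius.
        cost += w_obs * max(0.0, r_safe - d) ** 2
    return cost

def horizon_cost(traj, ref, inputs, obstacles):
    # Sum of stage costs over the prediction horizon.
    return sum(nmpc_stage_cost(p, pr, u, obstacles)
               for p, pr, u in zip(traj, ref, inputs))
```

Soft penalties keep the optimization problem smooth in tight maneuvers, at the price of allowing small constraint violations that hard constraints would forbid.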
arXiv Detail & Related papers (2024-10-03T17:50:19Z)
- A Tricycle Model to Accurately Control an Autonomous Racecar with Locked Differential [71.53284767149685]
We present a novel formulation to model the effects of a locked differential on the lateral dynamics of an autonomous open-wheel racecar.
We include a micro-steps discretization approach to accurately linearize the dynamics and produce a prediction suitable for real-time implementation.
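The micro-steps idea can be illustrated with a toy kinematic model: subdividing one control interval into many small Euler steps keeps the discretized prediction close to the continuous dynamics. The unicycle model below is an assumption for illustration, not the paper's racecar model:

```python
import math

# Integrate unicycle kinematics over one control interval dt, optionally
# subdividing it into micro-steps for a more accurate discretization.
def unicycle_step(state, v, omega, dt, micro_steps=1):
    x, y, th = state
    h = dt / micro_steps
    for _ in range(micro_steps):
        x += h * v * math.cos(th)
        y += h * v * math.sin(th)
        th += h * omega
    return x, y, th

# One coarse Euler step predicts no lateral motion at all, while the
# micro-stepped prediction approaches the exact arc of the turn.
coarse = unicycle_step((0.0, 0.0, 0.0), 5.0, 2.0, 0.1, micro_steps=1)
fine = unicycle_step((0.0, 0.0, 0.0), 5.0, 2.0, 0.1, micro_steps=100)
```

Because each micro-step is still linear in the state, the composition remains suitable for building the linearized prediction matrices a real-time MPC needs.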
arXiv Detail & Related papers (2023-12-22T16:29:55Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Real-Time Model-Free Deep Reinforcement Learning for Force Control of a Series Elastic Actuator [56.11574814802912]
State-of-the-art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation.
Model-free PID control methods are more prone to instability due to nonlinearities in the SEA.
Deep reinforcement learning has proved to be an effective model-free method for continuous control tasks.
arXiv Detail & Related papers (2023-04-11T00:51:47Z)
- Control-oriented meta-learning [25.316358215670274]
We use data-driven modeling with neural networks to learn, offline from past data, an adaptive controller with an internal parametric model of nonlinear features.
We meta-learn the adaptive controller with closed-loop tracking simulation as the base-learner and the average tracking error as the meta-objective.
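The setup of a closed-loop rollout as base-learner and average tracking error as meta-objective can be sketched on a toy scalar system; the plant, adaptation law, and finite-difference meta-update below are illustrative assumptions, not the paper's method:

```python
# Base-learner: a closed-loop tracking rollout in which the controller
# adapts an internal disturbance estimate theta online at rate gamma.
def rollout(gamma, d=1.0, k=4.0, dt=0.01, steps=500):
    x, theta, err_sum = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = -k * x - theta            # feedback + adaptive compensation
        x += dt * (u + d)             # plant with unknown disturbance d
        theta += dt * gamma * x       # online adaptation law
        err_sum += abs(x) * dt
    return err_sum / (steps * dt)     # meta-objective: avg tracking error

# Meta-learner: tune the adaptation rate by finite-difference descent
# on the average tracking error of full closed-loop rollouts.
def meta_learn(gamma=1.0, lr=5.0, iters=20, eps=0.1):
    for _ in range(iters):
        g = (rollout(gamma + eps) - rollout(gamma - eps)) / (2 * eps)
        gamma -= lr * g
    return gamma
```

The point of optimizing through the closed loop, rather than fitting the dynamics by regression, is that the meta-objective directly measures what the controller is for: tracking error.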
arXiv Detail & Related papers (2022-04-14T03:02:27Z)
- Data-Efficient Deep Reinforcement Learning for Attitude Control of Fixed-Wing UAVs: Field Experiments [0.37798600249187286]
We show that DRL can successfully learn to perform attitude control of a fixed-wing UAV operating directly on the original nonlinear dynamics.
We deploy the learned controller on the UAV in flight tests, demonstrating comparable performance to the state-of-the-art ArduPlane proportional-integral-derivative (PID) attitude controller.
arXiv Detail & Related papers (2021-11-07T19:07:46Z)
- Learning Adaptive Control for SE(3) Hamiltonian Dynamics [15.26733033527393]
This paper develops adaptive geometric control for rigid-body systems, such as ground, aerial, and underwater vehicles.
We learn a Hamiltonian model of the system dynamics using a neural ordinary differential equation network trained from state-control trajectory data.
In the second stage, we design a trajectory tracking controller with disturbance compensation from an energy-based perspective.
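Rolling out such a Hamiltonian model can be sketched in one dimension; a hand-written quadratic H stands in for the learned neural ODE, and the symplectic Euler integrator and finite-difference gradients are illustrative assumptions:

```python
# Stand-in for the learned Hamiltonian: kinetic + potential energy
# of a unit-mass, unit-stiffness system.
def hamiltonian(q, p, m=1.0, k=1.0):
    return p * p / (2.0 * m) + 0.5 * k * q * q

# Hamilton's equations with a control input u, integrated with
# symplectic Euler; gradients of H come from finite differences
# (a learned model would supply them by automatic differentiation).
def symplectic_step(q, p, u, dt, h=1e-6):
    dH_dq = (hamiltonian(q + h, p) - hamiltonian(q - h, p)) / (2 * h)
    p = p + dt * (-dH_dq + u)     # p' = -dH/dq + u
    dH_dp = (hamiltonian(q, p + h) - hamiltonian(q, p - h)) / (2 * h)
    q = q + dt * dH_dp            # q' =  dH/dp
    return q, p

# With u = 0 the structure-preserving integrator keeps the energy
# nearly constant over a long rollout.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = symplectic_step(q, p, 0.0, 0.01)
```

Keeping the Hamiltonian structure in the model is what makes the energy-based controller design in the second stage possible.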
arXiv Detail & Related papers (2021-09-21T05:54:28Z)
- Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems [29.579737941918022]
We learn, offline from past data, an adaptive controller with an internal parametric model of nonlinear features.
We meta-learn the adaptive controller with closed-loop tracking simulation as the base-learner and the average tracking error as the meta-objective.
With a nonlinear planar rotorcraft subject to wind, we demonstrate that our adaptive controller outperforms other controllers trained with regression-oriented meta-learning.
arXiv Detail & Related papers (2021-03-07T23:49:59Z)
- Logarithmic Regret Bound in Partially Observable Linear Dynamical Systems [91.43582419264763]
We study the problem of system identification and adaptive control in partially observable linear dynamical systems.
We present the first model estimation method with finite-time guarantees in both open and closed-loop system identification.
We show that AdaptOn is the first algorithm that achieves $\text{polylog}(T)$ regret in adaptive control of unknown partially observable linear dynamical systems.
arXiv Detail & Related papers (2020-03-25T06:00:33Z)
- Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum resulting in an optimal, yet physically feasible, robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.