Aerobatic maneuvers in insect-scale flapping-wing aerial robots via deep-learned robust tube model predictive control
- URL: http://arxiv.org/abs/2508.03043v1
- Date: Tue, 05 Aug 2025 03:40:11 GMT
- Title: Aerobatic maneuvers in insect-scale flapping-wing aerial robots via deep-learned robust tube model predictive control
- Authors: Yi-Hsuan Hsiao, Andrea Tagliabue, Owen Matteson, Suhan Kim, Tong Zhao, Jonathan P. How, YuFeng Chen
- Abstract summary: Aerial insects exhibit highly agile maneuvers such as sharp braking, saccades, and body flips under disturbance. We demonstrate insect-like saccade movements with lateral speed and acceleration of 197 centimeters per second and 11.7 meters per second squared. The robot can also perform saccade maneuvers under 160-centimeter-per-second wind disturbance and large command-to-force mapping errors.
- Score: 38.02123507620609
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Aerial insects exhibit highly agile maneuvers such as sharp braking, saccades, and body flips under disturbance. In contrast, insect-scale aerial robots are limited to tracking non-aggressive trajectories with small body acceleration. This performance gap arises from a combination of low robot inertia, fast dynamics, uncertainty in flapping-wing aerodynamics, and high susceptibility to environmental disturbance. Executing highly dynamic maneuvers requires generating aggressive flight trajectories that push against hardware limits and a high-rate feedback controller that accounts for model and environmental uncertainty. Here, by designing a deep-learned robust tube model predictive controller, we showcase insect-like flight agility and robustness in a 750-milligram flapping-wing robot. Our model predictive controller can track aggressive flight trajectories under disturbance. To achieve a high feedback rate in a compute-constrained real-time system, we design imitation learning methods to train a two-layer, fully connected neural network, which resembles the insect flight control architecture consisting of a central nervous system and motor neurons. Our robot demonstrates insect-like saccade movements with lateral speed and acceleration of 197 centimeters per second and 11.7 meters per second squared, representing 447% and 255% improvements over prior results. The robot can also perform saccade maneuvers under 160-centimeter-per-second wind disturbance and large command-to-force mapping errors. Furthermore, it performs 10 consecutive body flips in 11 seconds, the most challenging maneuver among sub-gram flyers. These results represent a milestone in achieving insect-scale flight agility and inspire future investigations into sensing and compute autonomy.
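The abstract's central implementation idea, a two-layer fully connected network trained by imitation to reproduce the commands of a robust tube MPC expert, can be sketched as below. This is a minimal illustration, not the authors' code; the state and command dimensions, hidden width, optimizer settings, and training batch are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): a two-layer fully connected
# policy distilled from an MPC expert via behavior cloning.
# All dimensions and data below are hypothetical placeholders.
import torch
import torch.nn as nn

STATE_DIM, CMD_DIM, HIDDEN = 12, 4, 64  # assumed sizes

policy = nn.Sequential(          # two-layer fully connected network
    nn.Linear(STATE_DIM, HIDDEN),
    nn.ReLU(),
    nn.Linear(HIDDEN, CMD_DIM),  # flapping-wing command outputs
)

def imitation_step(states, expert_cmds, opt):
    """One behavior-cloning step: regress expert MPC commands."""
    loss = nn.functional.mse_loss(policy(states), expert_cmds)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
# In practice, states and expert_cmds would come from rolling out the
# robust tube MPC; random tensors stand in here.
states = torch.randn(256, STATE_DIM)
expert_cmds = torch.randn(256, CMD_DIM)
print(imitation_step(states, expert_cmds, opt))
```

At deployment, only the small forward pass runs on the robot, which is what makes a high feedback rate feasible on a compute-constrained system.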
Related papers
- Learning Aerodynamics for the Control of Flying Humanoid Robots [11.791887356425491]
Flying humanoid robots face challenges in modeling and control, particularly with aerodynamic forces. The technological contribution includes the mechanical design of iRonCub-Mk1, a jet-powered humanoid robot. The scientific contribution offers a comprehensive approach to modeling and controlling aerodynamic forces using classical and learning techniques.
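One plausible way to combine classical and learned aerodynamic models, offered purely as an illustration rather than the paper's method, is to regress the residual between measured wrenches and a classical model's prediction; the network size and input layout below are assumptions.

```python
# Hedged sketch (not iRonCub's model): learn the residual aerodynamic
# wrench that a classical model misses. Inputs/targets are hypothetical.
import torch
import torch.nn as nn

aero_net = nn.Sequential(        # flight state -> 6D residual wrench
    nn.Linear(9, 64), nn.Tanh(), nn.Linear(64, 6)
)
opt = torch.optim.Adam(aero_net.parameters(), lr=1e-3)

def fit_step(state, wrench_measured, wrench_classical):
    """Regress the gap between measured and classical-model wrenches."""
    residual = wrench_measured - wrench_classical
    loss = nn.functional.mse_loss(aero_net(state), residual)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Placeholder batch: body velocity (3), attitude (3), throttle inputs (3).
print(fit_step(torch.randn(64, 9), torch.randn(64, 6), torch.randn(64, 6)))
```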
arXiv Detail & Related papers (2025-05-30T23:27:44Z)
- Humanoid Whole-Body Locomotion on Narrow Terrain via Dynamic Balance and Reinforcement Learning [54.26816599309778]
We propose a novel whole-body locomotion algorithm based on dynamic balance and Reinforcement Learning (RL). Specifically, we introduce a dynamic balance mechanism by leveraging an extended measure of Zero-Moment Point (ZMP)-driven rewards and task-driven rewards in a whole-body actor-critic framework. Experiments conducted on a full-sized Unitree H1-2 robot verify the ability of our method to maintain balance on extremely narrow terrains.
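A ZMP-driven balance term of the kind this summary describes might look like the following sketch, which rewards keeping the Zero-Moment Point close to the support-polygon center under the linear inverted pendulum approximation; the formula choice and constants are assumptions, not the paper's reward.

```python
# Illustrative sketch only: a ZMP-style balance reward term.
# The LIPM approximation and the width sigma are assumed here.
import numpy as np

G = 9.81  # gravity, m/s^2

def zmp_reward(com_pos, com_acc, support_center, sigma=0.05):
    """Reward peaks when the Zero-Moment Point stays near the
    support-polygon center (linear inverted pendulum approximation)."""
    z = com_pos[2]
    zmp_xy = com_pos[:2] - (z / G) * com_acc[:2]  # LIPM ZMP estimate
    err = np.linalg.norm(zmp_xy - support_center)
    return np.exp(-(err / sigma) ** 2)            # in [0, 1]

print(zmp_reward(np.array([0.0, 0.0, 0.9]),   # CoM position
                 np.array([0.3, 0.0, 0.0]),   # CoM acceleration
                 np.array([0.0, 0.0])))       # support-polygon center
```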
arXiv Detail & Related papers (2025-02-24T14:53:45Z)
- Hovering Flight of Soft-Actuated Insect-Scale Micro Aerial Vehicles using Deep Reinforcement Learning [25.353235604712562]
Soft-actuated insect-scale micro aerial vehicles (IMAVs) pose unique challenges for designing robust and computationally efficient controllers. Here, we design a deep reinforcement learning (RL) controller that addresses system delay and uncertainties. We deploy this controller on two different insect-scale aerial robots that weigh 720 mg and 850 mg, respectively.
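One common way to let an RL policy cope with actuation delay, possibly in the spirit of this work, is to append a short history of past actions to the observation so the policy can reason about commands still in flight; the buffer length and dimensions below are illustrative assumptions.

```python
# Hedged sketch: delay-aware observations via an action-history buffer.
# History length and dimensions are illustrative, not from the paper.
from collections import deque
import numpy as np

class DelayAwareObs:
    def __init__(self, act_dim, history=4):
        # Buffer of the most recent actions, initialized to zeros.
        self.buf = deque([np.zeros(act_dim)] * history, maxlen=history)

    def __call__(self, state, last_action):
        self.buf.append(np.asarray(last_action))
        return np.concatenate([state, *self.buf])

obs_fn = DelayAwareObs(act_dim=2)
print(obs_fn(np.zeros(6), np.array([0.1, -0.2])).shape)  # (6 + 4*2,)
```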
arXiv Detail & Related papers (2025-02-17T22:45:59Z)
- Impedance Matching: Enabling an RL-Based Running Jump in a Quadruped Robot [7.516046071926082]
We propose a new framework to mitigate the gap between simulated and real robots.
Our framework offers a structured guideline for parameter selection and the range for dynamics randomization in simulation.
The result is, to the best of our knowledge, one of the highest and longest running jumps demonstrated by an RL-based control policy on a real quadruped robot.
arXiv Detail & Related papers (2024-04-23T14:52:09Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
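A toy version of this pre-training recipe, a Transformer encoder over a sequence of sensorimotor tokens with a fraction of tokens masked and reconstructed, is sketched below; it is not the RPT implementation, and all sizes and the masking ratio are placeholders.

```python
# Rough sketch, not the RPT implementation: masked prediction over
# interleaved sensorimotor tokens. All sizes are placeholders.
import torch
import torch.nn as nn

D, T = 128, 16  # token width, sequence length (assumed)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(D, D)  # predict the original token at masked positions

tokens = torch.randn(8, T, D)        # stand-ins for camera/proprio/action tokens
mask = torch.rand(8, T) < 0.25       # randomly mask 25% of positions
inp = tokens.masked_fill(mask.unsqueeze(-1), 0.0)

pred = head(encoder(inp))
loss = nn.functional.mse_loss(pred[mask], tokens[mask])
print(loss.item())
```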
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Inverted Landing in a Small Aerial Robot via Deep Reinforcement Learning for Triggering and Control of Rotational Maneuvers [11.29285364660789]
Inverted landing in a rapid and robust manner is a challenging feat for aerial robots, especially while depending entirely on onboard sensing and computation.
Previous work has identified a direct causal connection between a series of onboard visual cues and kinematic actions that allow for reliable execution of this challenging aerobatic maneuver in small aerial robots.
In this work, we first utilized Deep Reinforcement Learning and a physics-based simulation to obtain a general, optimal control policy for robust inverted landing.
arXiv Detail & Related papers (2022-09-22T14:38:10Z)
- Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
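The adaptation idea the summary points to, a pretrained deep feature basis whose coefficients are adapted online to the current wind condition, can be illustrated with a recursive least-squares update on a single force axis; the feature map, forgetting factor, and update details here are assumptions for illustration, not Neural-Fly's published algorithm.

```python
# Conceptual sketch: residual force modeled as phi(x) @ a, with the
# coefficients a adapted online. Feature map and RLS details are assumed.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))   # stand-in for a pretrained feature map

def phi(x):
    """Deep-learned basis (placeholder: random tanh features)."""
    return np.tanh(W @ x)

a = np.zeros(8)                   # wind-condition-specific coefficients
P = np.eye(8) * 10.0              # RLS covariance

def adapt(x, f_residual, lam=0.99):
    """Recursive least-squares update of a from one measurement."""
    global a, P
    h = phi(x)
    k = P @ h / (lam + h @ P @ h)
    a = a + k * (f_residual - h @ a)
    P = (P - np.outer(k, h @ P)) / lam

adapt(rng.standard_normal(6), 0.3)  # one state, one measured residual force
print(a)
```

The appeal of this split is that the slow-to-train representation is learned offline once, while only the low-dimensional coefficients change in flight.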
arXiv Detail & Related papers (2022-05-13T21:55:28Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
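Domain randomization as described here amounts to resampling dynamics parameters for each training episode; the sketch below uses hypothetical parameter names and ranges, not the paper's actual configuration.

```python
# Illustrative sketch of episode-level domain randomization.
# Parameter names and ranges are assumptions, not the paper's values.
import random

RANGES = {
    "mass_scale":      (0.9, 1.1),
    "joint_friction":  (0.0, 0.3),
    "motor_delay_ms":  (0.0, 20.0),
    "ground_friction": (0.5, 1.2),
}

def sample_dynamics():
    """Draw one randomized dynamics configuration per training episode."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}

for _ in range(3):
    print(sample_dynamics())
```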
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
- Evolved Neuromorphic Control for High Speed Divergence-based Landings of MAVs [0.0]
We develop spiking neural networks for controlling landings of micro air vehicles.
We demonstrate that the resulting neuromorphic controllers transfer robustly from a simulation to the real world.
To the best of our knowledge, this work is the first to integrate spiking neural networks in the control loop of a real-world flying robot.
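The basic unit of such spiking controllers is the leaky integrate-and-fire neuron; a minimal simulation is given below, with illustrative constants rather than the paper's evolved parameters.

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the building block of
# spiking control networks. Constants are illustrative assumptions.
import numpy as np

def lif_run(inputs, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron over a train of input currents."""
    v, spikes = 0.0, []
    for i in inputs:
        v += dt / tau * (-v) + i   # membrane leak plus input current
        if v >= v_th:              # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

print(lif_run(np.full(50, 0.12)))  # constant drive -> regular spiking
```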
arXiv Detail & Related papers (2020-03-06T10:19:02Z)