From Flies to Robots: Inverted Landing in Small Quadcopters with Dynamic Perching
- URL: http://arxiv.org/abs/2403.00128v1
- Date: Thu, 29 Feb 2024 21:09:08 GMT
- Title: From Flies to Robots: Inverted Landing in Small Quadcopters with Dynamic Perching
- Authors: Bryan Habas, Bo Cheng
- Abstract summary: Inverted landing is a routine behavior among a number of animal fliers.
We develop a control policy general to arbitrary ceiling-approach conditions.
We successfully achieved a range of robust inverted-landing behaviors in small quadcopters.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverted landing is a routine behavior among a number of animal fliers.
However, mastering this feat poses a considerable challenge for robotic fliers,
especially to perform dynamic perching with rapid body rotations (or flips) and
landing against gravity. Studies of inverted landing in flies suggest that
optical-flow sensing is closely linked to the precise triggering and control of
the body flips that lead to a variety of successful landing behaviors. Building upon
this knowledge, we aimed to replicate the flies' landing behaviors in small
quadcopters by developing a control policy general to arbitrary
ceiling-approach conditions. First, we employed reinforcement learning in
simulation to optimize discrete sensory-motor pairs across a broad spectrum of
ceiling-approach velocities and directions. Next, we converted the
sensory-motor pairs to a two-stage control policy in a continuous
augmented-optical flow space. The control policy consists of a first-stage
Flip-Trigger Policy, which employs a one-class support vector machine, and a
second-stage Flip-Action Policy, implemented as a feed-forward neural network.
To transfer the inverted-landing policy to physical systems, we utilized domain
randomization and system identification techniques for a zero-shot sim-to-real
transfer. As a result, we successfully achieved a range of robust
inverted-landing behaviors in small quadcopters, emulating those observed in
flies.
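The two-stage policy described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the optical-flow feature names, training data, and action encoding are all hypothetical, with scikit-learn's OneClassSVM standing in for the Flip-Trigger Policy and a small MLPRegressor for the Flip-Action Policy.

```python
# Sketch of a two-stage inverted-landing policy (hypothetical data).
# Stage 1: a one-class SVM decides WHEN to trigger the flip from
# augmented optical-flow features; Stage 2: a feed-forward network
# maps the same features to a flip action (e.g., a body-moment command).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical augmented optical-flow states recorded at successful
# flip triggers: columns = [expansion rate, transverse flow, 1/distance].
trigger_states = rng.normal(loc=[2.0, 0.5, 0.3], scale=0.1, size=(200, 3))

# Stage 1: learn the region of flow space where flips were triggered.
flip_trigger = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
flip_trigger.fit(trigger_states)

# Hypothetical flip actions (normalized pitch-moment commands).
flip_actions = 0.8 * trigger_states[:, 0] + rng.normal(0, 0.01, 200)

# Stage 2: regress actions from the same flow features.
flip_action = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=0)
flip_action.fit(trigger_states, flip_actions)

def policy(state):
    """Return a flip action if the trigger fires, else None (keep approaching)."""
    state = np.asarray(state, dtype=float).reshape(1, -1)
    if flip_trigger.predict(state)[0] == 1:  # inside the learned trigger set
        return float(flip_action.predict(state)[0])
    return None

print(policy([2.0, 0.5, 0.3]))  # near the training data: should trigger
print(policy([0.0, 0.0, 5.0]))  # far from it: no trigger
```

The split mirrors the paper's structure: the one-class model only needs examples of *successful* trigger states (no negative labels), while the regression network handles the continuous action mapping once a trigger fires.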
Related papers
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z) - Inverted Landing in a Small Aerial Robot via Deep Reinforcement Learning for Triggering and Control of Rotational Maneuvers [11.29285364660789]
Inverted landing in a rapid and robust manner is a challenging feat for aerial robots, especially while depending entirely on onboard sensing and computation.
Previous work has identified a direct causal connection between a series of onboard visual cues and kinematic actions that allow for reliable execution of this challenging aerobatic maneuver in small aerial robots.
In this work, we first utilized Deep Reinforcement Learning and a physics-based simulation to obtain a general, optimal control policy for robust inverted landing.
arXiv Detail & Related papers (2022-09-22T14:38:10Z) - Learning a Single Near-hover Position Controller for Vastly Different Quadcopters [56.37274861303324]
This paper proposes an adaptive near-hover position controller for quadcopters.
It can be deployed to quadcopters of very different mass, size and motor constants.
It also shows rapid adaptation to unknown disturbances during runtime.
arXiv Detail & Related papers (2022-09-19T17:55:05Z) - Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z) - VAE-Loco: Versatile Quadruped Locomotion by Learning a Disentangled Gait Representation [78.92147339883137]
We show that it is pivotal in increasing controller robustness by learning a latent space capturing the key stance phases constituting a particular gait.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, footstep height and full stance duration.
The use of a generative model facilitates the detection and mitigation of disturbances to provide a versatile and robust planning framework.
arXiv Detail & Related papers (2022-05-02T19:49:53Z) - Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
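Domain randomization, used both in that work and for the sim-to-real transfer in the main paper above, amounts to resampling the simulator's physical parameters at the start of each training episode. A minimal sketch, with parameter names and ranges that are illustrative rather than taken from either paper:

```python
# Illustrative domain randomization: resample simulator physics each
# episode so the learned policy cannot overfit one set of dynamics.
# The parameters and the +/-20% spread are hypothetical.
import random

NOMINAL = {"mass_kg": 0.035, "motor_gain": 1.0, "latency_s": 0.002}

def randomize(nominal, spread=0.2, rng=random):
    """Return a copy of the physics parameters, each scaled by up to +/-spread."""
    return {k: v * rng.uniform(1.0 - spread, 1.0 + spread)
            for k, v in nominal.items()}

for episode in range(3):
    params = randomize(NOMINAL)
    print(episode, params)  # run one training episode under these dynamics
```

A policy trained across many such perturbed simulators tends to transfer to hardware without fine-tuning, which is what "zero-shot sim-to-real" refers to in the main abstract.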
arXiv Detail & Related papers (2021-03-26T07:14:01Z) - RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control [6.669503016190925]
We present a unified model-based and data-driven approach for quadrupedal planning and control.
We map sensory information and desired base velocity commands into footstep plans using a reinforcement learning policy.
We train and evaluate our framework on a complex quadrupedal system, ANYmal B, and demonstrate transferability to a larger and heavier robot, ANYmal C, without requiring retraining.
arXiv Detail & Related papers (2020-12-05T18:30:23Z) - Developmental Reinforcement Learning of Control Policy of a Quadcopter UAV with Thrust Vectoring Rotors [1.0057838324294686]
We present a novel developmental reinforcement learning-based controller for a quadcopter with thrust vectoring capabilities.
The control policy of this robot is learned by transferring the policy from a previously learned controller for a conventional quadcopter.
The performance of the learned policy is evaluated by physics-based simulations for the tasks of hovering and way-point navigation.
arXiv Detail & Related papers (2020-07-03T07:04:18Z) - First Steps: Latent-Space Control with Semantic Constraints for Quadruped Locomotion [73.37945453998134]
Traditional approaches to quadruped control employ simplified, hand-derived models.
This significantly reduces the capability of the robot since its effective kinematic range is curtailed.
In this work, these challenges are addressed by framing quadruped control as optimisation in a structured latent space.
A deep generative model captures a statistical representation of feasible joint configurations, whilst complex dynamic and terminal constraints are expressed via high-level, semantic indicators.
We validate the feasibility of locomotion trajectories optimised using our approach both in simulation and on a real-world quadruped.
arXiv Detail & Related papers (2020-07-03T07:04:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.