FalconWing: An Open-Source Platform for Ultra-Light Fixed-Wing Aircraft Research
- URL: http://arxiv.org/abs/2505.01383v1
- Date: Fri, 02 May 2025 16:47:05 GMT
- Title: FalconWing: An Open-Source Platform for Ultra-Light Fixed-Wing Aircraft Research
- Authors: Yan Miao, Will Shen, Hang Cui, Sayan Mitra
- Abstract summary: FalconWing is an open-source, ultra-lightweight (150 g) fixed-wing platform for autonomy research. We develop and deploy a vision-based control policy for autonomous landing using a novel real-to-sim-to-real learning approach. When deployed zero-shot on the hardware platform, this policy achieves an 80% success rate in vision-based autonomous landings.
- Score: 2.823704956886882
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present FalconWing -- an open-source, ultra-lightweight (150 g) fixed-wing platform for autonomy research. The hardware platform integrates a small camera, a standard airframe, offboard computation, and radio communication for manual overrides. We demonstrate FalconWing's capabilities by developing and deploying a purely vision-based control policy for autonomous landing (without IMU or motion capture) using a novel real-to-sim-to-real learning approach. Our learning approach: (1) constructs a photorealistic simulation environment via 3D Gaussian splatting trained on real-world images; (2) identifies nonlinear dynamics from vision-estimated real-flight data; and (3) trains a multi-modal Vision Transformer (ViT) policy through simulation-only imitation learning. The ViT architecture fuses a single RGB image with the history of control actions via self-attention, preserving temporal context while maintaining real-time 20 Hz inference. When deployed zero-shot on the hardware platform, this policy achieves an 80% success rate in vision-based autonomous landings. Together with the hardware specifications, we also open-source the system dynamics, the software for the photorealistic simulator, and the learning approach.
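The fusion the abstract describes -- patch tokens from a single RGB frame attended jointly with embedded past control actions -- can be sketched in a minimal, illustrative form. All dimensions, weight initializations, and the 4-channel output below are assumptions for illustration, not the paper's published architecture:

```python
import numpy as np

# Hypothetical dimensions; the paper does not publish these exact values.
D = 32            # token embedding size
N_PATCHES = 16    # image patches (e.g. a 4x4 grid over the RGB frame)
H_ACTIONS = 8     # length of the control-action history

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention over the fused token sequence."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return attn @ v

# Toy random weights standing in for a trained ViT policy.
Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))
W_out = rng.normal(size=(D, 4)) * 0.1    # 4 control channels (illustrative)

img_tokens = rng.normal(size=(N_PATCHES, D))   # embedded RGB patches
act_tokens = rng.normal(size=(H_ACTIONS, D))   # embedded past actions

# Concatenating both modalities lets attention mix visual and temporal
# context in one pass, as the abstract describes.
fused = np.concatenate([img_tokens, act_tokens], axis=0)
features = self_attention(fused, Wq, Wk, Wv)
action = features.mean(axis=0) @ W_out          # pooled -> control command
print(action.shape)   # (4,)
```

At 20 Hz inference, a step like this must complete in under 50 ms, which is why the history enters as a few tokens rather than as a recurrent state.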
Related papers
- AirSim360: A Panoramic Simulation Platform within Drone View [63.238263531772446]
AirSim360 is a simulation platform for omnidirectional data from aerial viewpoints. AirSim360 focuses on three key aspects, including a render-aligned data and labeling paradigm for pixel-level geometric, semantic, and entity-level understanding. Unlike existing simulators, this work is the first to systematically model the 4D real world under an omnidirectional setting.
arXiv Detail & Related papers (2025-12-01T18:59:30Z) - cVLA: Towards Efficient Camera-Space VLAs [26.781510474119845]
Vision-Language-Action (VLA) models offer a compelling framework for tackling complex robotic manipulation tasks. We propose a novel VLA approach that leverages the competitive performance of Vision Language Models on 2D images. Our model predicts trajectory waypoints, making it both more efficient to train and agnostic to robot embodiment.
arXiv Detail & Related papers (2025-07-02T22:56:41Z) - VizFlyt: Perception-centric Pedagogical Framework For Autonomous Aerial Robots [5.669075778114126]
We present VizFlyt, an open-source, perception-centric Hardware-In-The-Loop (HITL) photorealistic testing framework for aerial robotics courses. We use pose from an external localization system to hallucinate real-time, photorealistic visual sensors using 3D Gaussian Splatting. This enables stress-free testing of autonomy algorithms on aerial robots without the risk of crashing into obstacles.
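The HITL pattern described here -- real pose in, rendered frame out -- reduces to a short loop. The sketch below is an illustrative stand-in: `render_gaussians` is a stub for an actual 3D Gaussian Splatting rasterizer, and all names and sizes are assumptions, not VizFlyt's API:

```python
import numpy as np

def render_gaussians(pose, width=160, height=120):
    """Stub renderer: returns a synthetic RGB frame for the given pose.
    A real system would rasterize a trained Gaussian-splat scene here."""
    rng = np.random.default_rng(int(pose[:3].sum() * 1000) % 2**32)
    return rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)

def hitl_step(pose, policy):
    """One hardware-in-the-loop tick: the autonomy stack never sees the
    real camera, only the frame hallucinated from the measured pose."""
    frame = render_gaussians(pose)
    return policy(frame)

# Example: a trivial policy that commands hover regardless of the frame.
pose = np.array([1.0, 2.0, 0.5, 0.0, 0.0, 0.0])  # x, y, z, roll, pitch, yaw
cmd = hitl_step(pose, policy=lambda frame: np.zeros(4))
print(cmd)  # [0. 0. 0. 0.]
```

Because the vehicle flies in empty space while the renderer supplies the obstacles, a perception failure costs a rendered crash rather than a real one.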
arXiv Detail & Related papers (2025-03-28T21:03:30Z) - OpenFly: A Comprehensive Platform for Aerial Vision-Language Navigation [49.697035403548966]
Vision-Language Navigation (VLN) aims to guide agents by leveraging language instructions and visual cues, playing a pivotal role in embodied AI. We propose OpenFly, a platform comprising various rendering engines, a versatile toolchain, and a large-scale benchmark for aerial VLN. We construct a large-scale aerial VLN dataset with 100k trajectories, covering diverse heights and lengths across 18 scenes.
arXiv Detail & Related papers (2025-02-25T09:57:18Z) - VR-Robo: A Real-to-Sim-to-Real Framework for Visual Robot Navigation and Locomotion [25.440573256776133]
This paper presents a Real-to-Sim-to-Real framework that generates photorealistic and physically interactive "digital twin" simulation environments for visual navigation and locomotion learning.
arXiv Detail & Related papers (2025-02-03T17:15:05Z) - SOUS VIDE: Cooking Visual Drone Navigation Policies in a Gaussian Splatting Vacuum [8.410894757762346]
SOUS VIDE is a simulator, training approach, and policy architecture for end-to-end visual drone navigation. Our policies exhibit zero-shot sim-to-real transfer with robust real-world performance using only onboard perception and computation.
arXiv Detail & Related papers (2024-12-20T21:13:11Z) - Open-World Drone Active Tracking with Goal-Centered Rewards [62.21394499788672]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations. We propose DAT, the first open-world drone active air-to-ground tracking benchmark. We also propose GC-VAT, which aims to improve drone target-tracking performance in complex scenarios.
arXiv Detail & Related papers (2024-12-01T09:37:46Z) - Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators allow the synthesis of novel views through the transformation of pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z) - Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then, train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
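The "Liquid networks" named above are built from liquid time-constant (LTC) cells, whose effective time constant varies with the input. A minimal, illustrative sketch of one Euler-integrated LTC step follows; the sizes, random weights, and step count are assumptions for illustration, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_H = 8, 16           # input features, hidden units (illustrative)
Wx = rng.normal(size=(N_IN, N_H)) * 0.1
Wh = rng.normal(size=(N_H, N_H)) * 0.1
b = np.zeros(N_H)
tau = np.ones(N_H)          # per-neuron base time constants
A = np.ones(N_H)            # state the dynamics relax toward when gated

def ltc_step(h, x, dt=0.05):
    """One explicit-Euler step of  dh/dt = -h/tau + f(x, h) * (A - h).
    The f(...) term makes the effective time constant input-dependent,
    which is the defining property of an LTC cell."""
    f = np.tanh(x @ Wx + h @ Wh + b)
    dh = -h / tau + f * (A - h)
    return h + dt * dh

h = np.zeros(N_H)
for _ in range(20):                     # roll the cell over a short window
    h = ltc_step(h, rng.normal(size=N_IN))
print(h.shape)  # (16,)
```

The input-dependent dynamics are what the paper credits for robustness to visual distribution shift: the same weights adapt their response speed to the incoming observation stream.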
arXiv Detail & Related papers (2024-06-21T13:48:37Z) - DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning [61.10299147201369]
This paper introduces a novel autonomous RL approach, called DigiRL, for training in-the-wild device control agents.
We build a scalable and parallelizable Android learning environment equipped with a VLM-based evaluator.
We demonstrate the effectiveness of DigiRL using the Android-in-the-Wild dataset, where our 1.3B VLM trained with RL achieves a 49.5% absolute improvement.
arXiv Detail & Related papers (2024-06-14T17:49:55Z) - Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models [114.69732301904419]
We present an approach for end-to-end, open-set (any environment/scene) autonomous driving that is capable of providing driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z) - Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z) - Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation, and directly deploy in the real-world.
arXiv Detail & Related papers (2022-10-25T17:50:36Z) - Harfang3D Dog-Fight Sandbox: A Reinforcement Learning Research Platform for the Customized Control Tasks of Fighter Aircrafts [0.0]
We present Harfang3D Dog-Fight Sandbox, a semi-realistic flight simulation environment for fighter aircraft.
It aims to be a flexible toolbox for investigating the main challenges in aviation studies using Reinforcement Learning.
The software also allows deployment of bot aircraft and development of multi-agent tasks.
arXiv Detail & Related papers (2022-10-13T18:18:09Z) - Towards a Fully Autonomous UAV Controller for Moving Platform Detection and Landing [2.7909470193274593]
We present an autonomous UAV system for landing on a moving platform.
The proposed system relies only on the camera sensor and has been designed to be as lightweight as possible.
Over 40 landing attempts, the system achieved an average deviation of 15 cm from the center of the target.
arXiv Detail & Related papers (2022-09-30T09:16:04Z) - VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z) - AirSim Drone Racing Lab [56.68291351736057]
AirSim Drone Racing Lab is a simulation framework for enabling machine learning research in drone racing.
Our framework enables generation of racing tracks in multiple photo-realistic environments.
We used our framework to host a simulation-based drone racing competition at NeurIPS 2019.
arXiv Detail & Related papers (2020-03-12T08:06:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.