Deep Ensemble for Rotorcraft Attitude Prediction
- URL: http://arxiv.org/abs/2306.17104v1
- Date: Thu, 29 Jun 2023 17:06:42 GMT
- Title: Deep Ensemble for Rotorcraft Attitude Prediction
- Authors: Hikmat Khan, Nidhal Carla Bouaynaya, Ghulam Rasool, Tyler Travis,
Lacey Thompson, Charles C. Johnson
- Abstract summary: The rotorcraft community has experienced a higher fatal accident rate than other aviation segments.
Recent advancements in artificial intelligence (AI) provide an opportunity to help design systems that can address rotorcraft safety challenges.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Historically, the rotorcraft community has experienced a higher fatal
accident rate than other aviation segments, including commercial and general
aviation. Recent advancements in artificial intelligence (AI) and the
application of these technologies in different areas of our lives are both
intriguing and encouraging. When developed appropriately for the aviation
domain, AI techniques provide an opportunity to help design systems that can
address rotorcraft safety challenges. Our recent work demonstrated that AI
algorithms could use video data from onboard cameras and correctly identify
different flight parameters from cockpit gauges, e.g., indicated airspeed.
These AI-based techniques provide a potentially cost-effective solution,
especially for small helicopter operators, to record the flight state
information and perform post-flight analyses. We also showed that carefully
designed and trained AI systems could accurately predict rotorcraft attitude
(i.e., pitch and yaw) from outside scenes (images or video data). Ordinary
off-the-shelf video cameras were installed inside the rotorcraft cockpit to
record the outside scene, including the horizon. The AI algorithm could
correctly identify rotorcraft attitude with an accuracy of approximately 80%. In
this work, we combined five different onboard camera viewpoints to improve
attitude prediction accuracy to 94%. The five onboard camera views were the
pilot windshield, co-pilot windshield, pilot Electronic Flight Instrument
System (EFIS) display, co-pilot EFIS display, and the attitude indicator
gauge. Using video data from each camera view, we trained various
convolutional neural networks (CNNs), which achieved prediction accuracies in
the range of 79% to 90%. We then ensembled the learned knowledge from all
CNNs and achieved an ensemble accuracy of 93.3%.
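
To make the fusion step concrete, here is a minimal sketch of one common
deep-ensemble scheme: unweighted averaging of per-view softmax probabilities.
The view names, the `ensemble_predict` helper, and the choice of unweighted
averaging are illustrative assumptions; the paper's exact fusion method is not
specified in this summary.

```python
import torch
import torch.nn.functional as F

# Camera views described in the abstract (names are illustrative).
VIEWS = ["pilot_windshield", "copilot_windshield",
         "pilot_efis", "copilot_efis", "attitude_indicator"]

@torch.no_grad()
def ensemble_predict(models, frames):
    """Average class probabilities from one trained CNN per camera view.

    models: dict mapping view name -> trained CNN (torch.nn.Module)
    frames: dict mapping view name -> preprocessed batch of images,
            shape (batch, channels, height, width)
    Returns the predicted attitude class for each batch element.
    """
    # Softmax each view's logits, then average across views.
    probs = [F.softmax(models[v](frames[v]), dim=1) for v in VIEWS]
    mean_probs = torch.stack(probs).mean(dim=0)  # unweighted average
    return mean_probs.argmax(dim=1)
```

Averaging probabilities rather than hard votes lets confident views outweigh
uncertain ones, which is one plausible reason an ensemble can exceed the best
single-view accuracy.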
Related papers
- Commissioning An All-Sky Infrared Camera Array for Detection Of Airborne Objects [0.11703603440337004]
The Galileo Project is designing, building, and commissioning a multi-modal ground-based observatory to continuously monitor the sky.
One of the key instruments is an all-sky infrared camera array using eight uncooled long-wave infrared FLIR Boson 640 cameras.
We report acceptance rates (e.g. viewable airplanes that are recorded) and detection efficiencies (e.g. recorded airplanes which are successfully detected) for a variety of weather conditions.
A toy outlier search focused on large sinuosity of the 2-D reconstructed trajectories flags about 16% of trajectories as outliers.
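
As a worked illustration of that metric, sinuosity is commonly defined as arc
length divided by the straight-line (chord) distance between a trajectory's
endpoints, so a perfectly straight track scores 1.0. The function names and
the outlier threshold below are assumptions, not the Galileo Project's
published code.

```python
import numpy as np

def sinuosity(points: np.ndarray) -> float:
    """Sinuosity of one trajectory: arc length / endpoint chord length.

    points: (N, 2) array of 2-D positions along the trajectory.
    """
    steps = np.diff(points, axis=0)            # per-segment displacements
    arc_length = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return arc_length / max(chord, 1e-9)       # guard against zero displacement

def flag_outliers(trajectories, threshold=2.0):
    """Flag trajectories whose sinuosity exceeds a chosen threshold."""
    return [sinuosity(t) > threshold for t in trajectories]
```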
arXiv Detail & Related papers (2024-11-12T17:31:51Z) - Automated extraction of 4D aircraft trajectories from video recordings [0.0]
The Bureau d'Enquêtes et d'Analyses pour la Sécurité de l'Aviation Civile (BEA) has to analyze accident videos from on-board or ground cameras involving all types of aircraft.
This study aims to identify applications of photogrammetry and to automate the extraction of 4D trajectories from these videos.
arXiv Detail & Related papers (2024-10-14T08:06:41Z) - Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve
Aerial Visual Perception? [57.77643186237265]
We present Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record synchronized scenes from different perspectives.
MAVREC consists of around 2.5 hours of industry-standard 2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes.
This makes MAVREC the largest ground and aerial-view dataset, and the fourth largest among all drone-based datasets.
arXiv Detail & Related papers (2023-12-07T18:59:14Z) - 3D Data Augmentation for Driving Scenes on Camera [50.41413053812315]
We propose a 3D data augmentation approach termed Drive-3DAug, aiming at augmenting the driving scenes on camera in the 3D space.
We first utilize Neural Radiance Field (NeRF) to reconstruct the 3D models of background and foreground objects.
Then, augmented driving scenes can be obtained by placing the 3D objects with adapted location and orientation at the pre-defined valid region of backgrounds.
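
For intuition, the sketch below shows only the final image-space compositing
step, assuming the foreground object has already been rendered at a sampled
pose and a valid placement region has been chosen; the function and its
arguments are illustrative, not Drive-3DAug's actual API.

```python
import numpy as np

def composite(background: np.ndarray, obj_rgb: np.ndarray,
              obj_mask: np.ndarray, top_left: tuple[int, int]) -> np.ndarray:
    """Alpha-blend a rendered foreground object onto a background frame.

    background: (H, W, 3) image; obj_rgb: (h, w, 3) rendered object;
    obj_mask: (h, w) alpha values in [0, 1]; top_left: (row, col) placement,
    assumed to keep the object fully inside the background.
    """
    out = background.astype(np.float32).copy()
    r, c = top_left
    h, w = obj_mask.shape
    alpha = obj_mask[..., None]  # broadcast over the color channels
    out[r:r+h, c:c+w] = alpha * obj_rgb + (1.0 - alpha) * out[r:r+h, c:c+w]
    return out.astype(background.dtype)
```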
arXiv Detail & Related papers (2023-03-18T05:51:05Z) - AZTR: Aerial Video Action Recognition with Auto Zoom and Temporal
Reasoning [63.628195002143734]
We propose a novel approach for aerial video action recognition.
Our method is designed for videos captured using UAVs and can run on edge or mobile devices.
We present a learning-based approach that uses customized auto zoom to automatically identify the human target and scale it appropriately.
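
As a rough illustration of the zoom-and-scale idea (not AZTR's learned zoom
policy), the helper below crops a margin around a detected person box and
rescales it to a fixed input size; the detector producing `bbox` and all
parameter choices are assumptions.

```python
import numpy as np
import cv2  # OpenCV

def auto_zoom(frame: np.ndarray, bbox, out_size=(224, 224), margin=0.2):
    """Crop around a detected target with a relative margin, then rescale.

    frame: (H, W, 3) image; bbox: (x, y, w, h) from any person detector.
    """
    x, y, w, h = bbox
    mx, my = int(w * margin), int(h * margin)
    x0, y0 = max(x - mx, 0), max(y - my, 0)
    x1 = min(x + w + mx, frame.shape[1])
    y1 = min(y + h + my, frame.shape[0])
    return cv2.resize(frame[y0:y1, x0:x1], out_size)
```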
arXiv Detail & Related papers (2023-03-02T21:24:19Z) - The eyes and hearts of UAV pilots: observations of physiological
responses in real-life scenarios [64.0476282000118]
In civil and military aviation, pilots can train themselves on realistic simulators to tune their reactions and reflexes.
This work aims to provide a solution for gathering pilots' behavior out in the field and helping them improve their performance.
arXiv Detail & Related papers (2022-10-26T14:16:56Z) - TransVisDrone: Spatio-Temporal Transformer for Vision-based
Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z) - AirTrack: Onboard Deep Learning Framework for Long-Range Aircraft
Detection and Tracking [3.3773749296727535]
AirTrack is a real-time vision-only detect and tracking framework that respects the size, weight, and power constraints of sUAS systems.
We show that AirTrack outperforms state-of-the-art baselines on the Amazon Airborne Object Tracking (AOT) dataset.
Empirical evaluations show that our system has a probability of track of more than 95% up to a range of 700m.
arXiv Detail & Related papers (2022-09-26T16:58:00Z) - Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z) - EVPropNet: Detecting Drones By Finding Propellers For Mid-Air Landing
And Following [11.79762223888294]
Drone propellers are the fastest-moving parts in an image and cannot be directly "seen" by a classical camera without severe motion blur.
We train a deep neural network called EVPropNet to detect propellers from the data of an event camera.
We present two applications of our network: (a) tracking and following an unmarked drone and (b) landing on a near-hover drone.
arXiv Detail & Related papers (2021-06-29T01:16:01Z) - Learn by Observation: Imitation Learning for Drone Patrolling from
Videos of A Human Navigator [22.06785798356346]
We propose to let the drone learn patrolling in the air by observing and imitating how a human navigator does it on the ground.
The observation process enables the automatic collection and annotation of data using inter-frame geometric consistency.
A newly designed neural network is trained based on the annotated data to predict appropriate directions and translations.
arXiv Detail & Related papers (2020-08-30T15:20:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.