The eyes and hearts of UAV pilots: observations of physiological
responses in real-life scenarios
- URL: http://arxiv.org/abs/2210.14910v1
- Date: Wed, 26 Oct 2022 14:16:56 GMT
- Title: The eyes and hearts of UAV pilots: observations of physiological
responses in real-life scenarios
- Authors: Alexandre Duval, Anita Paas, Abdalwhab Abdalwhab and David St-Onge
- Abstract summary: In civil and military aviation, pilots can train on realistic simulators to tune their reactions and reflexes.
This work aims to provide a solution for gathering pilots' behavior out in the field and helping them increase their performance.
- Score: 64.0476282000118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The drone industry is diversifying and the number of pilots is
increasing rapidly. In this context, flight schools need adapted tools to train
pilots, most importantly with regard to their awareness of their own
physiological and cognitive limits. In civil and military aviation, pilots can
train on realistic simulators to tune their reactions and reflexes, but also to
gather data on their piloting behavior and physiological states, which helps
them improve their performance. In contrast to cockpit scenarios, drone
teleoperation is conducted outdoors in the field, so desktop simulation
training offers only limited value. This work aims to provide a solution for
gathering pilots' behavior out in the field and helping them increase their
performance. We combined advanced object detection from a frontal camera with
gaze and heart-rate variability measurements. We observed pilots and analyzed
their behavior over three flight challenges. We believe this tool can support
pilots both in their training and in their regular flight tasks. A
demonstration video is available at https://www.youtube.com/watch?v=eePhjd2qNiI
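As a concrete illustration of the measurement fusion the abstract describes, here is a minimal Python sketch of two of its ingredients: a standard time-domain HRV index (RMSSD) computed from beat-to-beat intervals, and a lookup that maps a gaze point onto object-detector output. The function names, box format, and units are illustrative assumptions, not the paper's actual pipeline.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences: a standard
    time-domain heart-rate-variability index over consecutive
    beat-to-beat (RR) intervals, in milliseconds."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def gazed_object(gaze_xy, detections):
    """Return the label of the detected box containing the gaze point,
    or None. `detections` holds (label, x1, y1, x2, y2) boxes in the
    same image coordinates as the gaze point."""
    gx, gy = gaze_xy
    for label, x1, y1, x2, y2 in detections:
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            return label
    return None

# Toy frame: the pilot fixates on the drone while RR intervals shorten.
boxes = [("drone", 300, 120, 360, 170), ("gate", 500, 200, 640, 420)]
print(gazed_object((330, 140), boxes))             # -> drone
print(round(rmssd([820, 810, 790, 800, 760]), 1))  # -> 23.5 (ms)
```

In the paper's setting, the detections would come from the frontal camera and the gaze point from the eye tracker; the sketch only shows how the two streams could be joined per frame.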
Related papers
- UAVs and Birds: Enhancing Short-Range Navigation through Budgerigar Flight Studies [2.3884184860468136]
This study delves into the flight behaviors of Budgerigars (Melopsittacus undulatus) to gain insights into their flight trajectories and movements.
Using 3D reconstruction from stereo video camera recordings, we closely examine the velocity and acceleration patterns during three flight motions: takeoff, flying, and landing.
The research aims to bridge the gap between biological principles observed in birds and the application of these insights in developing more efficient and autonomous UAVs.
arXiv Detail & Related papers (2023-12-01T14:02:16Z)
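For the trajectory-analysis step mentioned above: once stereo reconstruction has produced a 3D position track, velocity and acceleration profiles follow from finite differences. A minimal sketch under assumed inputs (an (N, 3) position array and a known frame rate); the stereo triangulation itself is not shown.

```python
import numpy as np

def kinematics(positions, fps):
    """Per-frame speed and acceleration magnitude from an (N, 3)
    array of 3D positions sampled at `fps` frames per second,
    using central finite differences."""
    dt = 1.0 / fps
    vel = np.gradient(positions, dt, axis=0)  # m/s, per axis
    acc = np.gradient(vel, dt, axis=0)        # m/s^2, per axis
    return np.linalg.norm(vel, axis=1), np.linalg.norm(acc, axis=1)

# Toy track: constant 2 m/s^2 acceleration along x, filmed at 120 fps.
t = np.arange(0, 1, 1 / 120)
track = np.stack([t**2, np.zeros_like(t), np.ones_like(t)], axis=1)
speed, acc = kinematics(track, fps=120)
print(speed[60], acc[60])  # ~1.0 m/s and ~2.0 m/s^2 at t = 0.5 s
```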
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
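TRAVL's planning with multistep look-ahead is not detailed in the summary above; the sketch below only illustrates the generic pattern such planners follow: roll candidate action sequences through a dynamics model, accumulate discounted rewards, and bootstrap with a learned value at the horizon. The dynamics, reward, and value functions here are placeholders, not the paper's components.

```python
def plan_with_lookahead(state, candidates, step, reward, value, gamma=0.99):
    """Score each candidate action sequence by rolling it through a
    dynamics model, summing discounted rewards, and bootstrapping with
    a learned value estimate at the horizon; return the first action
    of the best sequence (receding-horizon control)."""
    best_score, best_action = float("-inf"), None
    for actions in candidates:
        s, score, discount = state, 0.0, 1.0
        for a in actions:
            s = step(s, a)                 # learned or simulated dynamics
            score += discount * reward(s)  # reward along the rollout
            discount *= gamma
        score += discount * value(s)       # learned value closes the horizon
        if score > best_score:
            best_score, best_action = score, actions[0]
    return best_action

# Toy usage: 1-D state, drive toward the origin.
act = plan_with_lookahead(
    state=5.0,
    candidates=[[-1, -1, -1], [1, 1, 1], [0, 0, 0]],
    step=lambda s, a: s + a,
    reward=lambda s: -abs(s),
    value=lambda s: -abs(s),
)
print(act)  # -> -1
```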
- Autonomous Agent for Beyond Visual Range Air Combat: A Deep Reinforcement Learning Approach [0.2578242050187029]
This work contributes to developing an agent based on deep reinforcement learning capable of acting in a beyond visual range (BVR) air combat simulation environment.
The paper presents an overview of building an agent representing a high-performance fighter aircraft that can learn and improve its role in BVR combat over time.
It also aims to examine, using virtual simulation, a real pilot's ability to interact in the same environment as the trained agent, and to compare their performance.
arXiv Detail & Related papers (2023-04-19T13:54:37Z)
- Towards Cooperative Flight Control Using Visual-Attention [61.99121057062421]
We propose a vision-based air-guardian system to enable parallel autonomy between a pilot and a control system.
Our attention-based air-guardian system can balance the trade-off between its level of involvement in the flight and the pilot's expertise and attention.
arXiv Detail & Related papers (2022-12-21T15:31:47Z)
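One simple way to picture the involvement trade-off described above is a convex blend of pilot and guardian commands, weighted by an estimate of pilot attention. This is a hypothetical sketch, not the paper's attention-based architecture; the command format and attention score are assumptions.

```python
def blend_commands(pilot_cmd, guardian_cmd, pilot_attention):
    """Convex combination of pilot and guardian stick commands.
    `pilot_attention` in [0, 1] estimates whether the pilot is
    attending to the safety-relevant region; low attention shifts
    control authority toward the guardian."""
    w = max(0.0, min(1.0, pilot_attention))
    return [w * p + (1.0 - w) * g for p, g in zip(pilot_cmd, guardian_cmd)]

# A distracted pilot (attention 0.2): the guardian's correction dominates.
print(blend_commands([0.8, 0.0], [0.1, -0.3], pilot_attention=0.2))
# -> roughly [0.24, -0.24]
```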
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- Augmenting Flight Training with AI to Efficiently Train Pilots [0.0]
We propose an AI-based pilot trainer to help students learn how to fly aircraft.
An AI agent uses behavioral cloning to learn flying maneuvers from qualified flight instructors.
The system uses the agent's decisions to detect errors made by students and provides feedback to help them correct those errors.
arXiv Detail & Related papers (2022-10-13T02:35:24Z)
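The trainer above pairs behavioral cloning with deviation-based error detection. A minimal sketch of that pattern: fit a policy to instructor demonstrations (here a linear least-squares stand-in for the paper's agent), then flag timesteps where a student's inputs stray from what the cloned policy would do. The feature layout, model class, and threshold are illustrative assumptions.

```python
import numpy as np

def fit_policy(states, expert_actions):
    """Behavioral cloning reduced to least squares: fit a linear map A
    minimizing ||states @ A - expert_actions||^2."""
    A, *_ = np.linalg.lstsq(states, expert_actions, rcond=None)
    return A

def flag_errors(policy, states, student_actions, threshold=0.15):
    """Flag timesteps where the student's control input deviates from
    the cloned instructor policy by more than `threshold` (arbitrary
    units; tuning it trades false alarms against missed errors)."""
    deviation = np.linalg.norm(student_actions - states @ policy, axis=1)
    return np.where(deviation > threshold)[0]

rng = np.random.default_rng(0)
S = rng.normal(size=(200, 3))              # logged state features
A_true = np.array([[0.5], [-0.2], [0.1]])  # "instructor" behavior
expert = S @ A_true + 0.01 * rng.normal(size=(200, 1))
policy = fit_policy(S, expert)

student = S @ A_true
student[42] += 1.0                         # inject one gross student error
print(flag_errors(policy, S, student))     # -> [42]
```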
- How might Driver Licensing and Vehicle Registration evolve if we adopt Autonomous Cars and Digital Identification? [0.18275108630751835]
We contend that a similar future may exist for driver training and licensure.
A pilot's license still attests to the holder's ability to assume full control and complete the flight when necessary.
arXiv Detail & Related papers (2022-02-20T17:11:32Z)
- Visual Attention Prediction Improves Performance of Autonomous Drone Racing Agents [45.36060508554703]
Humans race drones faster than neural networks trained for end-to-end autonomous flight.
This work investigates whether neural networks that imitate human eye-gaze behavior and attention can close that performance gap.
arXiv Detail & Related papers (2022-01-07T18:07:51Z)
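A common way to exploit imitated gaze, in the spirit of the work above, is to render the (predicted or measured) gaze point as a heatmap and stack it as an extra input channel for the flight network. The sketch below shows only that channel construction; resolution, sigma, and the stacking convention are assumptions.

```python
import numpy as np

def gaze_heatmap(gaze_xy, shape, sigma=8.0):
    """Render a normalized Gaussian centered on the (predicted or
    measured) gaze point; the result is stacked with the RGB frame
    as an extra network input channel."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    hm = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma**2))
    return hm / hm.max()

frame = np.zeros((120, 160, 3))            # stand-in camera frame
hm = gaze_heatmap((100, 40), frame.shape[:2])
net_input = np.dstack([frame, hm])         # H x W x 4 network input
print(net_input.shape)                     # -> (120, 160, 4)
```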
- Self-Supervised Disentangled Representation Learning for Third-Person Imitation Learning [45.62939275764248]
Third-person imitation learning (TPIL) is the concept of learning action policies by observing other agents in a third-person view.
In this paper, we present a TPIL approach for robot tasks with egomotion.
We propose our disentangled representation learning method to enable better state learning for TPIL.
arXiv Detail & Related papers (2021-08-02T17:55:03Z)
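The paper's disentangled representation method is not specified in the summary above; as a generic illustration of the idea, the PyTorch toy below splits an observation embedding into a viewpoint-invariant state code and a view-specific code, with a cross-view agreement loss as one possible self-supervised signal. Architecture, dimensions, and loss are all assumptions.

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Toy encoder splitting an observation embedding into a `state`
    code (task-relevant, shared across viewpoints) and a `view` code
    (observer-specific nuisance factors)."""
    def __init__(self, obs_dim=64, state_dim=8, view_dim=4):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU())
        self.state_head = nn.Linear(32, state_dim)
        self.view_head = nn.Linear(32, view_dim)

    def forward(self, obs):
        h = self.trunk(obs)
        return self.state_head(h), self.view_head(h)

def cross_view_loss(enc, obs_a, obs_b):
    """Encourage the state code to agree across two simultaneous
    viewpoints of the same scene, leaving the view code free: one
    possible self-supervised signal in third-person settings."""
    s_a, _ = enc(obs_a)
    s_b, _ = enc(obs_b)
    return ((s_a - s_b) ** 2).mean()

enc = DisentangledEncoder()
obs_a, obs_b = torch.randn(16, 64), torch.randn(16, 64)
loss = cross_view_loss(enc, obs_a, obs_b)
loss.backward()  # gradients flow; embed in a real training loop
print(float(loss))
```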