Vision-Based Autonomous Drone Control using Supervised Learning in
Simulation
- URL: http://arxiv.org/abs/2009.04298v1
- Date: Wed, 9 Sep 2020 13:45:41 GMT
- Title: Vision-Based Autonomous Drone Control using Supervised Learning in
Simulation
- Authors: Max Christl
- Abstract summary: We propose a vision-based control approach using Supervised Learning for autonomous navigation and landing of MAVs in indoor environments.
We trained a Convolutional Neural Network (CNN) that maps low resolution image and sensor input to high-level control commands.
Our approach requires shorter training times than similar Reinforcement Learning approaches and can potentially overcome the limitations of manual data collection faced by comparable Supervised Learning approaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Limited power and computational resources, absence of high-end sensor
equipment and GPS-denied environments are challenges faced by autonomous micro
aerial vehicles (MAVs). We address these challenges in the context of autonomous
navigation and landing of MAVs in indoor environments and propose a
vision-based control approach using Supervised Learning. To achieve this, we
collected data samples in a simulation environment which were labelled
according to the optimal control command determined by a path planning
algorithm. Based on these data samples, we trained a Convolutional Neural
Network (CNN) that maps low resolution image and sensor input to high-level
control commands. We have observed promising results in both obstructed and
non-obstructed simulation environments, showing that our model is capable of
successfully navigating a MAV towards a landing platform. Our approach requires
shorter training times than similar Reinforcement Learning approaches and can
potentially overcome the limitations of manual data collection faced by
comparable Supervised Learning approaches.
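The abstract describes labelling collected samples with the optimal control command produced by a path planning algorithm. A minimal sketch of such a labelling step is shown below; the discrete command set, the axis convention, and the landing threshold are illustrative assumptions, not the paper's actual action space.

```python
import numpy as np

# Hypothetical discrete high-level command set (assumption for illustration).
COMMANDS = ["forward", "backward", "left", "right", "up", "down", "land"]

def label_sample(position, waypoint, land_radius=0.2):
    """Label a data sample with the high-level command that moves the MAV
    from its current position toward the path planner's next waypoint."""
    delta = np.asarray(waypoint, dtype=float) - np.asarray(position, dtype=float)
    if np.linalg.norm(delta) < land_radius:
        return "land"  # close enough to the target: issue the landing command
    axis = int(np.argmax(np.abs(delta)))        # dominant axis: 0=x, 1=y, 2=z
    positive = delta[axis] > 0
    return [["backward", "forward"],
            ["left", "right"],
            ["down", "up"]][axis][int(positive)]
```

Each (image, sensor reading) pair collected in simulation would then be stored together with the command returned here, giving the CNN a supervised target without any manual annotation.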
Related papers
- Neural-based Control for CubeSat Docking Maneuvers [0.0]
This paper presents an innovative approach employing Artificial Neural Networks (ANN) trained through Reinforcement Learning (RL).
The proposed strategy is easily implementable onboard and offers fast adaptability and robustness to disturbances by learning control policies from experience.
Our findings highlight the efficacy of RL in assuring the adaptability and efficiency of spacecraft RVD, offering insights into future mission expectations.
arXiv Detail & Related papers (2024-10-16T16:05:46Z) - Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z) - Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then, train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z) - Automatic UAV-based Airport Pavement Inspection Using Mixed Real and
Virtual Scenarios [3.0874677990361246]
We propose a vision-based approach to automatically identify pavement distress using images captured by UAVs.
The proposed method is based on Deep Learning (DL) to segment defects in the image.
We demonstrate that the use of a mixed dataset composed of synthetic and real training images yields better results when testing the training models in real application scenarios.
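The blurb above reports that mixing synthetic and real training images improves results on real scenarios. A simple way to build such a mixed set is sketched below; the mixing fraction and seed are illustrative assumptions, not values from the paper.

```python
import random

def build_mixed_dataset(synthetic, real, synthetic_fraction=0.5, seed=0):
    """Subsample synthetic images so they make up roughly `synthetic_fraction`
    of the final training set, then shuffle them together with the real ones."""
    rng = random.Random(seed)
    n_real = len(real)
    # number of synthetic samples needed to hit the requested fraction
    n_synth = min(len(synthetic),
                  int(n_real * synthetic_fraction / (1.0 - synthetic_fraction)))
    mixed = list(real) + rng.sample(list(synthetic), n_synth)
    rng.shuffle(mixed)
    return mixed
```

Sweeping `synthetic_fraction` against validation performance on real images is one way to find a useful balance between cheap synthetic data and scarce real data.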
arXiv Detail & Related papers (2024-01-11T16:30:07Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
A federated learning-empowered connected autonomous vehicle (FLCAV) framework has been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - Multitask Adaptation by Retrospective Exploration with Learned World
Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the expected agent's performance by selecting promising trajectories solving prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z) - Multi-UAV Path Planning for Wireless Data Harvesting with Deep
Reinforcement Learning [18.266087952180733]
We propose a multi-agent reinforcement learning (MARL) approach that can adapt to profound changes in the scenario parameters defining the data harvesting mission.
We show that our proposed network architecture enables the agents to cooperate effectively by carefully dividing the data collection task among themselves.
arXiv Detail & Related papers (2020-10-23T14:59:30Z) - Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for
Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
We validate the robustness of our method on complex quadruped robot dynamics; the approach can be applied generally to most robotic platforms.
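One common way to use an inferred covariance in collision avoidance, in the spirit of the MPC formulation above, is to inflate each obstacle by a k-sigma bound on the position uncertainty. The sketch below is an illustrative simplification, not the paper's actual constraint formulation.

```python
import numpy as np

def is_safe(robot_pos, obstacle_pos, obstacle_radius, position_cov, k_sigma=2.0):
    """Uncertainty-aware collision check: inflate the obstacle radius by a
    k-sigma bound derived from the predicted position covariance (e.g. the
    covariance a recurrent network infers for a future obstacle location)."""
    # largest standard deviation of the uncertain position estimate
    sigma_max = float(np.sqrt(np.max(np.linalg.eigvalsh(position_cov))))
    inflated_radius = obstacle_radius + k_sigma * sigma_max
    distance = float(np.linalg.norm(np.asarray(robot_pos, dtype=float)
                                    - np.asarray(obstacle_pos, dtype=float)))
    return distance > inflated_radius
```

An MPC planner would evaluate such a constraint at every predicted future state, so that higher uncertainty automatically produces more conservative paths.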
arXiv Detail & Related papers (2020-07-28T07:34:30Z) - Control Design of Autonomous Drone Using Deep Learning Based Image
Understanding Techniques [1.0953917735844645]
This paper presents a new framework to use images as the inputs for the controller to have autonomous flight, considering the noisy indoor environment and uncertainties.
A new Proportional-Integral-Derivative-Accelerated (PIDA) control with a derivative filter is proposed to improve drone/quadcopter flight stability within a noisy environment.
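A discrete-time sketch of a PIDA controller with a filtered derivative, as described above, might look like the following. The gains, time step, and first-order filter coefficient are illustrative assumptions; the paper's tuned values and exact filter design are not reproduced here.

```python
class PIDAController:
    """Discrete-time Proportional-Integral-Derivative-Accelerated controller.
    A first-order low-pass filter smooths the derivative before it is
    differentiated again for the acceleration term, which suppresses
    sensor noise amplification."""

    def __init__(self, kp, ki, kd, ka, dt, alpha=0.5):
        self.kp, self.ki, self.kd, self.ka = kp, ki, kd, ka
        self.dt = dt
        self.alpha = alpha           # filter smoothing factor, 0 < alpha <= 1
        self.integral = 0.0
        self.prev_error = 0.0
        self.filtered_deriv = 0.0    # low-pass-filtered error derivative

    def update(self, error):
        self.integral += error * self.dt
        raw_deriv = (error - self.prev_error) / self.dt
        new_filtered = (self.alpha * raw_deriv
                        + (1.0 - self.alpha) * self.filtered_deriv)
        # acceleration term: discrete derivative of the filtered derivative
        accel = (new_filtered - self.filtered_deriv) / self.dt
        self.prev_error = error
        self.filtered_deriv = new_filtered
        return (self.kp * error + self.ki * self.integral
                + self.kd * new_filtered + self.ka * accel)
```

Setting `ka = 0` recovers an ordinary filtered-derivative PID, which makes the extra damping contributed by the acceleration term easy to isolate during tuning.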
arXiv Detail & Related papers (2020-04-27T15:50:04Z) - Approximate Inverse Reinforcement Learning from Vision-based Imitation
Learning [34.5366377122507]
We present a method for obtaining an implicit objective function for vision-based navigation.
The proposed methodology relies on Imitation Learning, Model Predictive Control (MPC), and an interpretation technique used in Deep Neural Networks.
arXiv Detail & Related papers (2020-04-17T03:36:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.