Interpretable UAV Collision Avoidance using Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2105.12254v1
- Date: Tue, 25 May 2021 23:21:54 GMT
- Title: Interpretable UAV Collision Avoidance using Deep Reinforcement Learning
- Authors: Deepak-George Thomas, Daniil Olshanskyi, Karter Krueger, Ali Jannesari
- Abstract summary: We present autonomous UAV flight using Deep Reinforcement Learning augmented with Self-Attention Models.
We have tested our algorithm under different weather conditions and environments and found it to be more robust than conventional Deep Reinforcement Learning algorithms.
- Score: 1.2693545159861856
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The major components of any successful autonomous flight system are task
completion and collision avoidance. Most deep learning algorithms are
successful while executing these aspects under the environment and conditions
in which they have been trained. However, they fail when subjected to novel
environments. In this paper we present autonomous UAV flight using Deep
Reinforcement Learning augmented with Self-Attention Models that can
effectively reason when subjected to varying inputs. In addition to their
reasoning ability, they are also interpretable, which enables them to be used
under real-world conditions. We have tested our algorithm under different
weather conditions and environments and found it to be more robust than
conventional Deep Reinforcement Learning algorithms.
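The abstract does not give architectural details, but the core mechanism behind the interpretability claim, a self-attention layer whose weights reveal which input regions the policy attends to, can be illustrated with a minimal NumPy sketch. This is a generic scaled dot-product attention over hypothetical image-patch features, not the authors' network; the learned query/key/value projections are omitted for brevity.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    x: (n, d) array of n region features (e.g. patches from the UAV's
    camera image). Returns the attended features and the (n, n) attention
    weights; inspecting the weights shows which regions the model focused
    on, which is the source of the interpretability claim.
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                  # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x, weights

# Toy example: 4 patch features of dimension 8.
rng = np.random.default_rng(0)
patches = rng.normal(size=(4, 8))
attended, attn = self_attention(patches)
```

Each row of `attn` sums to 1, so it can be read directly as a distribution over input regions and overlaid on the camera frame as a saliency map.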
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
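TRAVL's multistep look-ahead is described only at a high level above. As a generic illustration of the idea (not the paper's algorithm), a planner with access to an exact transition model can score every short action sequence and commit to the first action of the best one:

```python
from itertools import product

def lookahead_plan(state, actions, step, reward, depth):
    """Exhaustive multistep look-ahead: simulate every action sequence of
    length `depth` under a known model and return the first action of the
    highest-return sequence."""
    best_ret, best_first = float("-inf"), None
    for seq in product(actions, repeat=depth):
        s, ret = state, 0.0
        for a in seq:
            s = step(s, a)
            ret += reward(s)
        if ret > best_ret:
            best_ret, best_first = ret, seq[0]
    return best_first

# Hypothetical 1-D world: walk toward position 3; reward is -|dist to goal|.
step = lambda s, a: s + a
reward = lambda s: -abs(s - 3)
first_action = lookahead_plan(0, (-1, +1), step, reward, depth=3)  # -> +1
```

Real systems replace the exhaustive enumeration with sampled or learned rollouts, since the search grows exponentially in `depth`.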
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- Using Collision Momentum in Deep Reinforcement Learning Based Adversarial Pedestrian Modeling [0.0]
We propose a reinforcement learning algorithm that specifically targets collisions and better uncovers unique failure modes of automated vehicle controllers.
Our algorithm is efficient and generates more severe collisions, allowing for the identification and correction of weaknesses in autonomous driving algorithms in complex and varied scenarios.
arXiv Detail & Related papers (2023-06-13T03:38:05Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- Active Perception Applied To Unmanned Aerial Vehicles Through Deep Reinforcement Learning [0.5161531917413708]
This work aims to contribute to the active perception of UAVs by tackling the problem of tracking and recognizing water surface structures.
We show that our system with classical image processing techniques and a simple Deep Reinforcement Learning (Deep-RL) agent is capable of perceiving the environment and dealing with uncertainties.
arXiv Detail & Related papers (2022-09-13T22:51:34Z)
- A Deep Reinforcement Learning Strategy for UAV Autonomous Landing on a Platform [0.0]
We propose a reinforcement learning framework (ROS-RL) based on Gazebo, a physical simulation platform.
We use three continuous-action-space reinforcement learning algorithms within the framework to deal with the problem of autonomous landing of drones.
arXiv Detail & Related papers (2022-09-07T06:33:57Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Solving reward-collecting problems with UAVs: a comparison of online optimization and Q-learning [2.4251007104039006]
We study the problem of identifying a short path from a designated start to a goal, while collecting all rewards and avoiding adversaries that move randomly on the grid.
We present a comparison of three methods to solve this problem: namely we implement a Deep Q-Learning model, an $\varepsilon$-greedy tabular Q-Learning model, and an online optimization framework.
Our experiments, designed using simple grid-world environments with random adversaries, showcase how these approaches work and compare them in terms of performance, accuracy, and computational time.
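The $\varepsilon$-greedy tabular Q-learning baseline named above is a standard algorithm; a minimal sketch on a toy chain world (a hypothetical example, not the paper's grid or adversary setup) looks like this:

```python
import random

def q_learning(n_states, actions, step, episodes=2000,
               alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular epsilon-greedy Q-learning.

    `step(s, a)` returns (next_state, reward, done). With probability
    `eps` a random action is explored; otherwise a greedy action is
    taken (ties broken at random). Applies the one-step update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    """
    rng = random.Random(seed)
    Q = [[0.0] * len(actions) for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(100):                       # per-episode step cap
            if rng.random() < eps:
                a = rng.randrange(len(actions))
            else:                                  # greedy, random tie-break
                best = max(Q[s])
                a = rng.choice([i for i, q in enumerate(Q[s]) if q == best])
            s2, r, done = step(s, actions[a])
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
            if done:
                break
    return Q

# Toy 5-state chain: actions move left/right, +1 reward at state 4.
def chain_step(s, a):
    s2 = min(max(s + a, 0), 4)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = q_learning(5, (-1, +1), chain_step)
```

After training, the greedy policy at every non-terminal state moves right toward the rewarding state, and `Q[3][1]` approaches the true value of 1.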
arXiv Detail & Related papers (2021-11-30T22:27:24Z)
- XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision Trees [55.9643422180256]
We present a novel sensor-based learning navigation algorithm to compute a collision-free trajectory for a robot in dense and dynamic environments.
Our approach uses deep reinforcement learning-based expert policy that is trained using a sim2real paradigm.
We highlight the benefits of our algorithm in simulated environments and navigating a Clearpath Jackal robot among moving pedestrians.
arXiv Detail & Related papers (2021-04-22T01:33:10Z)
- Using Deep Reinforcement Learning Methods for Autonomous Vessels in 2D Environments [11.657524999491029]
In this work, we used deep reinforcement learning, combining Q-learning with a neural representation to avoid instability.
Our methodology uses deep Q-learning and combines it with a rolling-wave planning approach from agile methodology.
Experimental results show that the proposed method enhanced the performance of VVN by 55.31 on average for long-distance missions.
arXiv Detail & Related papers (2020-03-23T12:58:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.