Learning-'N-Flying: A Learning-based, Decentralized Mission Aware UAS
Collision Avoidance Scheme
- URL: http://arxiv.org/abs/2101.10404v1
- Date: Mon, 25 Jan 2021 20:38:17 GMT
- Title: Learning-'N-Flying: A Learning-based, Decentralized Mission Aware UAS
Collision Avoidance Scheme
- Authors: Alëna Rodionova (1), Yash Vardhan Pant (2), Connor Kurtz (3), Kuk
Jang (1), Houssam Abbas (3), Rahul Mangharam (1) ((1) University of
Pennsylvania, (2) University of California Berkeley, (3) Oregon State
University)
- Abstract summary: Learning-'N-Flying (LNF) is a multi-UAS Collision Avoidance (CA) framework.
It is decentralized, works on-the-fly and allows autonomous UAS managed by different operators to safely carry out complex missions.
We show that our method can run online (computation time on the order of milliseconds), and under certain assumptions has failure rates of less than 1% in the worst case.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Urban Air Mobility, the scenario where hundreds of manned and Unmanned
Aircraft System (UAS) carry out a wide variety of missions (e.g. moving humans
and goods within the city), is gaining acceptance as a transportation solution
of the future. One of the key requirements for this to happen is safely
managing the air traffic in these urban airspaces. Due to the expected density
of the airspace, this requires fast autonomous solutions that can be deployed
online. We propose Learning-'N-Flying (LNF), a multi-UAS Collision Avoidance
(CA) framework. It is decentralized, works on-the-fly and allows autonomous UAS
managed by different operators to safely carry out complex missions,
represented using Signal Temporal Logic, in a shared airspace. We initially
formulate the problem of predictive collision avoidance for two UAS as a
mixed-integer linear program, and show that it is intractable to solve online.
Instead, we first develop Learning-to-Fly (L2F) by combining: a) learning-based
decision-making, and b) decentralized convex optimization-based control. LNF
extends L2F to cases where there are more than two UAS on a collision path.
Through extensive simulations, we show that our method can run online
(computation time on the order of milliseconds), and under certain assumptions
has failure rates of less than 1% in the worst case, improving to near 0% in
more relaxed operations. We show the applicability of our scheme to a wide
variety of settings through multiple case studies.
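The abstract describes a two-stage pipeline whose first step is predicting whether two UAS trajectories will come into conflict. As a minimal illustrative sketch (not the paper's implementation, and with the function name, horizon, and separation threshold chosen here for illustration), pairwise predictive conflict detection over a shared planning horizon can look like this:

```python
import numpy as np

# Illustrative sketch: each UAS shares its planned waypoints over a short
# horizon; a conflict exists if the predicted separation drops below a
# minimum distance d_min at any future step. The actual LNF decision-making
# and convex-optimization control layers are described in the paper.

def predicted_conflict(traj_a: np.ndarray, traj_b: np.ndarray,
                       d_min: float) -> bool:
    """traj_a, traj_b: (T, 3) arrays of planned positions over horizon T."""
    sep = np.linalg.norm(traj_a - traj_b, axis=1)  # separation at each step
    return bool(np.any(sep < d_min))

# Two straight-line trajectories at the same altitude that cross mid-horizon.
t = np.linspace(0.0, 1.0, 11)[:, None]
uas_a = np.hstack([t * 10.0, np.zeros_like(t), np.full_like(t, 50.0)])
uas_b = np.hstack([10.0 - t * 10.0, np.zeros_like(t), np.full_like(t, 50.0)])

print(predicted_conflict(uas_a, uas_b, d_min=2.0))  # crossing paths -> True
```

In the decentralized setting the paper describes, such a check would run on-board each UAS against the broadcast plans of its neighbors, triggering the learning-based decision step only for pairs in predicted conflict.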
Related papers
- Multi-UAV Pursuit-Evasion with Online Planning in Unknown Environments by Deep Reinforcement Learning [16.761470423715338]
Multi-UAV pursuit-evasion poses a key challenge for UAV swarm intelligence.
We introduce an evader prediction-enhanced network to tackle partial observability in cooperative strategy learning.
We derive a feasible policy via a two-stage reward refinement and deploy the policy on real quadrotors in a zero-shot manner.
arXiv Detail & Related papers (2024-09-24T08:40:04Z) - One-Shot Safety Alignment for Large Language Models via Optimal Dualization [64.52223677468861]
This paper presents a dualization perspective that reduces constrained alignment to an equivalent unconstrained alignment problem.
We do so by pre-optimizing a smooth and convex dual function that has a closed form.
Our strategy leads to two practical algorithms in model-based and preference-based scenarios.
arXiv Detail & Related papers (2024-05-29T22:12:52Z) - SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z) - Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z) - Reinforcement Learning-Based Air Traffic Deconfliction [7.782300855058585]
This work focuses on automating the horizontal separation of two aircraft and presents the obstacle avoidance problem as a 2D surrogate optimization task.
Using Reinforcement Learning (RL), we optimize the avoidance policy and model the dynamics, interactions, and decision-making.
The proposed system generates a quick and achievable avoidance trajectory that satisfies the safety requirements.
arXiv Detail & Related papers (2023-01-05T00:37:20Z) - NeurIPS 2022 Competition: Driving SMARTS [60.948652154552136]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z) - Obstacle Avoidance for UAS in Continuous Action Space Using Deep
Reinforcement Learning [9.891207216312937]
Obstacle avoidance for small unmanned aircraft is vital for the safety of future urban air mobility.
We propose a deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) to guide autonomous UAS to their destinations.
Results show that the proposed model can provide accurate and robust guidance and resolve conflict with a success rate of over 99%.
arXiv Detail & Related papers (2021-11-13T04:44:53Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - Learning-to-Fly: Learning-based Collision Avoidance for Scalable Urban
Air Mobility [2.117421588033177]
We present Learning-to-Fly (L2F), a decentralized on-demand airborne collision avoidance framework for multiple UAS.
L2F is a two-stage collision avoidance method that consists of: 1) a learning-based decision-making scheme and 2) a distributed, linear programming-based UAS control algorithm.
We show the real-time applicability of our method, which is $\approx\!6000\times$ faster than the MILP approach and can resolve $100\%$ of collisions when there is ample room to maneuver.
arXiv Detail & Related papers (2020-06-23T18:46:31Z) - Congestion-aware Evacuation Routing using Augmented Reality Devices [96.68280427555808]
We present a congestion-aware routing solution for indoor evacuation, which produces real-time individual-customized evacuation routes among multiple destinations.
A population density map, obtained on-the-fly by aggregating locations of evacuees from user-end Augmented Reality (AR) devices, is used to model the congestion distribution inside a building.
arXiv Detail & Related papers (2020-04-25T22:54:35Z) - Using Deep Reinforcement Learning Methods for Autonomous Vessels in 2D
Environments [11.657524999491029]
In this work, we used deep reinforcement learning, combining Q-learning with a neural representation to avoid instability.
Our methodology uses deep Q-learning combined with a rolling-wave planning approach from agile methodology.
Experimental results show that the proposed method enhanced the performance of VVN by 55.31 on average for long-distance missions.
arXiv Detail & Related papers (2020-03-23T12:58:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.