Learning-Initialized Trajectory Planning in Unknown Environments
- URL: http://arxiv.org/abs/2309.10683v1
- Date: Tue, 19 Sep 2023 15:07:26 GMT
- Title: Learning-Initialized Trajectory Planning in Unknown Environments
- Authors: Yicheng Chen, Jinjie Li, Wenyuan Qin, Yongzhao Hua, Xiwang Dong,
Qingdong Li
- Abstract summary: Autonomous flight in unknown environments requires precise planning of both the spatial and temporal profiles of trajectories.
We introduce the Learning-Initialized Trajectory Planner (LIT-Planner), a novel approach that guides optimization using a Neural Network (NN) Planner to provide initial values.
We propose a framework that supports robust online replanning with tolerance to planning latency.
- Score: 4.2960463890487555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous flight in unknown environments requires precise planning for both
the spatial and temporal profiles of trajectories, which generally involves
nonconvex optimization, leading to high time costs and susceptibility to local
optima. To address these limitations, we introduce the Learning-Initialized
Trajectory Planner (LIT-Planner), a novel approach that guides optimization
using a Neural Network (NN) Planner to provide initial values. We first
leverage the spatial-temporal optimization with batch sampling to generate
training cases, aiming to capture multimodality in trajectories. Based on these
data, the NN-Planner maps visual and inertial observations to trajectory
parameters for handling unknown environments. The network outputs are then
optimized to enhance both reliability and explainability, ensuring robust
performance. Furthermore, we propose a framework that supports robust online
replanning with tolerance to planning latency. Comprehensive simulations
validate the LIT-Planner's time efficiency without compromising trajectory
quality compared to optimization-based methods. Real-world experiments further
demonstrate its practical suitability for autonomous drone navigation.
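The abstract describes a warm-start pattern: a learned model proposes initial trajectory parameters, and a conventional spatial-temporal optimizer refines them into the final trajectory. The sketch below illustrates only that pattern; the network layout, observation shapes, cost function, and optimizer interface are assumptions for illustration, not the LIT-Planner's actual implementation.

```python
# Hedged sketch of the learning-initialized planning pattern described above.
# The network layout, observation shapes, and the cost function are assumptions
# for illustration only; they are not taken from the LIT-Planner paper.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import minimize

class NNPlannerSketch(nn.Module):
    """Maps a depth image and an inertial state to flat trajectory parameters."""
    def __init__(self, num_waypoints: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(              # depth-image encoder
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(                 # fuse with inertial state (pos, vel, acc)
            nn.Linear(32 + 9, 128), nn.ReLU(),
            nn.Linear(128, 3 * num_waypoints + num_waypoints),  # waypoints + segment times
        )

    def forward(self, depth, inertial):
        return self.head(torch.cat([self.encoder(depth), inertial], dim=-1))

def smoothness_cost(params: np.ndarray) -> float:
    """Placeholder nonconvex cost; a real planner would add collision and dynamic terms."""
    return float(np.sum(np.diff(params.reshape(-1)) ** 2))

# Warm-started refinement: the NN output is only an initial guess, and a
# conventional optimizer produces the final, explainable trajectory.
net = NNPlannerSketch()
depth = torch.zeros(1, 1, 64, 64)                  # dummy depth image
inertial = torch.zeros(1, 9)                       # dummy position/velocity/acceleration
x0 = net(depth, inertial).detach().numpy().ravel()
result = minimize(smoothness_cost, x0, method="L-BFGS-B")
print("refined trajectory parameters:", result.x[:6])
```

The refinement step reflects the point made in the abstract: the raw network output is treated purely as an initial guess, and the subsequent optimization is what provides reliability and explainability.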
Related papers
- Towards Robust Spacecraft Trajectory Optimization via Transformers [17.073280827888226]
Future multi-spacecraft missions require robust autonomous optimization capabilities to ensure safe and efficient rendezvous operations.
To mitigate this burden, a generative Transformer model is introduced to provide robust optimal initial guesses.
This work extends the capabilities of ART to address robust, constrained optimal control problems.
arXiv Detail & Related papers (2024-10-08T00:58:42Z) - Energy-Efficient Federated Edge Learning with Streaming Data: A Lyapunov Optimization Approach [34.00679567444125]
We develop a dynamic scheduling and resource allocation algorithm to address the inherent randomness in data arrivals and resource availability under long-term energy constraints.
Our proposed algorithm makes adaptive decisions on device scheduling, computational capacity adjustment, and allocation of bandwidth and transmit power in every round.
The effectiveness of our scheme is verified through simulation results, demonstrating improved learning performance and energy efficiency as compared to baseline schemes.
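The entry above describes per-round scheduling decisions under long-term energy constraints, which is the classic setting for the Lyapunov drift-plus-penalty technique named in the paper's title. The sketch below shows that generic pattern with invented utility and energy models; it is not the paper's actual algorithm.

```python
# Generic Lyapunov drift-plus-penalty scheduling sketch (illustrative only;
# the device utility and energy models here are invented, not from the paper).
import random

NUM_DEVICES = 10
ENERGY_BUDGET = 1.0      # assumed average per-round energy allowed per device
V = 5.0                  # assumed drift-plus-penalty trade-off weight
queues = [0.0] * NUM_DEVICES          # virtual energy-deficit queues

def learning_utility(device: int) -> float:
    """Stand-in for the per-device contribution to learning performance."""
    return random.uniform(0.5, 1.5)

def energy_cost(device: int) -> float:
    """Stand-in for the energy a device would spend if scheduled this round."""
    return random.uniform(0.5, 2.0)

for round_idx in range(100):
    # Schedule a device only if its utility outweighs the queue-weighted energy cost.
    scheduled = []
    for d in range(NUM_DEVICES):
        e, u = energy_cost(d), learning_utility(d)
        if V * u - queues[d] * e > 0:
            scheduled.append((d, e))
    # Update virtual queues so the long-term average energy stays near the budget.
    spent = dict(scheduled)
    for d in range(NUM_DEVICES):
        queues[d] = max(queues[d] + spent.get(d, 0.0) - ENERGY_BUDGET, 0.0)
```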
arXiv Detail & Related papers (2024-05-20T14:13:22Z) - Continual Model-based Reinforcement Learning for Data Efficient Wireless Network Optimisation [73.04087903322237]
We formulate throughput optimisation as Continual Reinforcement Learning of control policies.
Simulation results suggest that the proposed system can shorten the end-to-end deployment lead time by a factor of two.
arXiv Detail & Related papers (2024-04-30T11:23:31Z) - Dynamic Scheduling for Federated Edge Learning with Streaming Data [56.91063444859008]
We consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints.
Due to limited communication resources and latency requirements, only a subset of devices is scheduled for participating in the local training process in every iteration.
arXiv Detail & Related papers (2023-05-02T07:41:16Z) - DDPEN: Trajectory Optimisation With Sub Goal Generation Model [70.36888514074022]
In this paper, we propose a novel Differential Dynamic Programming with Escape Network (DDPEN).
We propose to utilize a deep model that takes as input a map of the environment in the form of a costmap, together with the desired position.
The model produces possible future directions that lead to the goal while avoiding local minima, and it can run in real-time conditions.
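The summary above (a costmap and a desired position in, candidate escape directions out) maps naturally onto a small convolutional network scoring discretized headings. The sketch below assumes that formulation; the actual DDPEN architecture and output parameterization may differ.

```python
# Illustrative sub-goal direction model in the spirit of the DDPEN summary above.
# The costmap size, number of heading bins, and network layout are assumptions.
import torch
import torch.nn as nn

class SubGoalDirectionNet(nn.Module):
    """Scores discretized headings given a local costmap and the goal offset."""
    def __init__(self, num_headings: int = 8):
        super().__init__()
        self.costmap_encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(16 + 2, 64), nn.ReLU(),      # 2 = (dx, dy) to the desired position
            nn.Linear(64, num_headings),           # one logit per candidate direction
        )

    def forward(self, costmap, goal_offset):
        return self.head(torch.cat([self.costmap_encoder(costmap), goal_offset], dim=-1))

# The highest-scoring heading can be handed to a local trajectory optimizer
# as an escape direction when it would otherwise stall in a local minimum.
net = SubGoalDirectionNet()
logits = net(torch.zeros(1, 1, 64, 64), torch.tensor([[3.0, -1.5]]))
best_heading = int(logits.argmax(dim=-1))
```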
arXiv Detail & Related papers (2023-01-18T11:02:06Z) - On the Effective Usage of Priors in RSS-based Localization [56.68864078417909]
We propose a Received Signal Strength (RSS) fingerprint and convolutional neural network-based algorithm, LocUNet.
In this paper, we study the localization problem in dense urban settings.
We first observe that LocUNet can learn, from the training data, the underlying prior distribution of the receiver (Rx) position or of the Rx-transmitter (Tx) association preferences, and we attribute its high performance to this ability.
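As a rough illustration of the fingerprint-plus-CNN idea summarized above, the sketch below maps per-transmitter RSS fingerprint channels to a spatial likelihood map over receiver positions. The input/output conventions are assumptions, not LocUNet's actual design.

```python
# Rough sketch of an RSS-fingerprint CNN localizer; shapes and layout are assumed,
# not taken from the LocUNet paper.
import torch
import torch.nn as nn

class RssLocalizerSketch(nn.Module):
    def __init__(self, num_tx: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_tx, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),               # per-pixel logit for "Rx is here"
        )

    def forward(self, fingerprints):
        logits = self.net(fingerprints)
        b, _, h, w = logits.shape
        return logits.view(b, h * w).softmax(dim=-1).view(b, h, w)  # position likelihood map

# The argmax of the likelihood map gives the estimated Rx cell; learning this
# map from data is what lets such a model absorb a prior over Rx positions.
probs = RssLocalizerSketch()(torch.zeros(1, 5, 64, 64))
```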
arXiv Detail & Related papers (2022-11-28T00:31:02Z) - Deep Learning Aided Packet Routing in Aeronautical Ad-Hoc Networks
Relying on Real Flight Data: From Single-Objective to Near-Pareto
Multi-Objective Optimization [79.96177511319713]
We invoke deep learning (DL) to assist routing in aeronautical ad-hoc networks (AANETs).
A deep neural network (DNN) is conceived for mapping the local geographic information observed by the forwarding node into the information required for determining the optimal next hop.
We extend the DL-aided routing algorithm to a multi-objective scenario, where we aim for simultaneously minimizing the delay, maximizing the path capacity, and maximizing the path lifetime.
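A hedged sketch of the DL-aided next-hop selection described above: a small network predicts per-candidate delay, path capacity, and path lifetime from local geographic features, and a scalarization picks the next hop. The feature set and the objective weights are illustrative assumptions.

```python
# Illustrative next-hop scorer; the features and scalarization weights are assumed.
import torch
import torch.nn as nn

class NextHopScorer(nn.Module):
    """Predicts (delay, capacity, lifetime) estimates for one candidate next hop."""
    def __init__(self, feature_dim: int = 12):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(),
            nn.Linear(64, 3),          # [predicted delay, path capacity, path lifetime]
        )

    def forward(self, features):
        return self.mlp(features)

def choose_next_hop(scorer, candidate_features, weights=(-1.0, 0.5, 0.5)):
    """Scalarize the three objectives (negative weight on delay) and pick the best hop."""
    preds = scorer(candidate_features)                     # (num_candidates, 3)
    scores = preds @ torch.tensor(weights)
    return int(scores.argmax())

scorer = NextHopScorer()
best = choose_next_hop(scorer, torch.randn(4, 12))         # 4 hypothetical neighbors
```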
arXiv Detail & Related papers (2021-10-28T14:18:22Z) - Adaptive Selection of Informative Path Planning Strategies via
Reinforcement Learning [6.015556590955814]
"Local planning" approaches adopt various spatial ranges within which next sampling locations are prioritized to investigate their effects on the prediction performance as well as incurred travel distance.
Experiments on use cases of temperature monitoring robots demonstrate that the dynamic mixtures of planners can not only generate sophisticated, informative plans but also ensure significantly reduced distances at no cost of prediction reliability.
arXiv Detail & Related papers (2021-08-14T21:32:33Z) - Autonomous Drone Racing with Deep Reinforcement Learning [39.757652701917166]
In many robotic tasks, such as drone racing, the goal is to travel through a set of waypoints as fast as possible.
A key challenge is planning the minimum-time trajectory, which is typically solved by assuming perfect knowledge of the waypoints to pass in advance.
In this work, a new approach to minimum-time trajectory generation for quadrotors is presented.
arXiv Detail & Related papers (2021-03-15T18:05:49Z) - Trajectory Planning for Autonomous Vehicles Using Hierarchical
Reinforcement Learning [21.500697097095408]
Planning safe trajectories under uncertain and dynamic conditions makes the autonomous driving problem significantly complex.
Current sampling-based methods such as Rapidly Exploring Random Trees (RRTs) are not ideal for this problem because of the high computational cost.
We propose a Hierarchical Reinforcement Learning structure combined with a Proportional-Integral-Derivative (PID) controller for trajectory planning.
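A minimal sketch of the hierarchy described above: a high-level policy proposes a sub-goal and a low-level PID controller tracks it. The policy here is a random stub and the one-dimensional plant is invented; only the structure mirrors the summary.

```python
# Structural sketch: learned high-level sub-goal selection over a PID tracker.
# The policy stub and the 1-D plant are invented for illustration.
import random

class PID:
    def __init__(self, kp, ki, kd, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def control(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def high_level_policy(state):
    """Stand-in for the learned policy: proposes the next lateral sub-goal."""
    return state + random.uniform(-1.0, 1.0)

position, pid = 0.0, PID(kp=1.2, ki=0.1, kd=0.05)
for step in range(200):
    if step % 20 == 0:                      # high level re-plans every 20 low-level steps
        sub_goal = high_level_policy(position)
    position += 0.05 * pid.control(sub_goal - position)   # crude single-integrator plant
```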
arXiv Detail & Related papers (2020-11-09T20:49:54Z) - Chance-Constrained Trajectory Optimization for Safe Exploration and
Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
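One common way to realize a chance constraint of the kind named above is to tighten a deterministic constraint by a quantile of the predicted-state uncertainty. The sketch below shows that conversion under a Gaussian assumption on the learned-dynamics error; the numbers and the Gaussian model are assumptions, not the paper's method.

```python
# Illustrative chance-constraint tightening: a probabilistic clearance requirement
# is converted into a deterministic margin under an assumed Gaussian error model.
from statistics import NormalDist

def tightened_clearance(d_min: float, sigma: float, risk: float = 0.05) -> float:
    """Deterministic clearance that enforces P(distance >= d_min) >= 1 - risk."""
    z = NormalDist().inv_cdf(1.0 - risk)   # one-sided Gaussian quantile
    return d_min + z * sigma

# Example: with 0.2 m of predicted-state uncertainty and a 5% risk budget,
# the planner must keep roughly an extra 0.33 m of margin from obstacles.
print(tightened_clearance(d_min=0.5, sigma=0.2))
```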
arXiv Detail & Related papers (2020-05-09T05:57:43Z)