Optimal Solving of Constrained Path-Planning Problems with Graph
Convolutional Networks and Optimized Tree Search
- URL: http://arxiv.org/abs/2108.01036v1
- Date: Mon, 2 Aug 2021 16:53:21 GMT
- Title: Optimal Solving of Constrained Path-Planning Problems with Graph
Convolutional Networks and Optimized Tree Search
- Authors: Kevin Osanlou, Andrei Bursuc, Christophe Guettier, Tristan Cazenave
and Eric Jacopin
- Abstract summary: We propose a hybrid solving planner that combines machine learning models and an optimal solver.
We conduct experiments on realistic scenarios and show that GCN support enables substantial speedup and smoother scaling to harder problems.
- Score: 12.457788665461312
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learning-based methods are gaining prominence for planning purposes. However,
there are very few approaches for learning-assisted constrained path-planning
on graphs, while there are multiple downstream practical applications. This is
the case for constrained path-planning for Autonomous Unmanned Ground Vehicles
(AUGV), typically deployed in disaster relief or search and rescue
applications. In off-road environments, the AUGV must dynamically optimize a
source-destination path under various operational constraints, out of which
several are difficult to predict in advance and need to be addressed on-line.
We propose a hybrid solving planner that combines machine learning models and
an optimal solver. More specifically, a graph convolutional network (GCN) is
used to assist a branch and bound (B&B) algorithm in handling the constraints.
We conduct experiments on realistic scenarios and show that GCN support enables
substantial speedup and smoother scaling to harder problems.
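Below is a minimal, hypothetical sketch of the kind of hybrid search the abstract describes: a branch-and-bound path search over a graph with a mandatory-node constraint, in which a stand-in scoring function (`node_priority`) plays the role the paper assigns to the trained GCN. The example graph, the constraint, and the heuristic are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a GCN-assisted branch-and-bound path planner.
# The paper trains a graph convolutional network to guide the solver;
# here `node_priority` is a stand-in heuristic for that learned guidance.
import heapq

def shortest_path_lower_bound(graph, src, dst):
    """Dijkstra lower bound on the cost of completing a path (ignores constraints)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def node_priority(node, mandatory_left):
    """Placeholder for the GCN: prefer expanding toward unvisited mandatory nodes."""
    return 0 if node in mandatory_left else 1

def branch_and_bound(graph, src, dst, mandatory, best=float("inf"), path=None, cost=0.0):
    """Find a cheapest src->dst path that visits all `mandatory` nodes."""
    path = path or [src]
    node = path[-1]
    left = mandatory - set(path)
    if node == dst and not left:
        return cost, list(path)
    # Prune: even an unconstrained completion cannot beat the incumbent.
    if cost + shortest_path_lower_bound(graph, node, dst) >= best:
        return best, None
    best_path = None
    # Branch on successors, ordered by the (GCN-like) priority score.
    for v, w in sorted(graph[node], key=lambda e: node_priority(e[0], left)):
        if v in path and v != dst:
            continue  # restrict to simple paths in this sketch
        c, p = branch_and_bound(graph, src, dst, mandatory, best, path + [v], cost + w)
        if p is not None and c < best:
            best, best_path = c, p
    return best, best_path

if __name__ == "__main__":
    # Tiny example graph: adjacency list {node: [(neighbor, cost), ...]}
    g = {
        "A": [("B", 1.0), ("C", 4.0)],
        "B": [("C", 1.0), ("D", 5.0)],
        "C": [("D", 1.0)],
        "D": [],
    }
    print(branch_and_bound(g, "A", "D", mandatory={"C"}))
```

Because the bound and the exhaustive branching are preserved, the learned score only changes the order in which branches are explored, so the search still returns an optimal feasible path; a stronger learned priority mainly helps the solver find good incumbents early and prune more aggressively.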
Related papers
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- Cooperative Behavioral Planning for Automated Driving using Graph Neural Networks [0.5801044612920815]
This work proposes to leverage machine learning algorithms to optimize traffic flow at urban intersections by jointly planning for multiple vehicles.
Learning-based behavior planning poses several challenges, demanding a suitable input and output representation as well as large amounts of ground-truth data.
We train and evaluate the proposed method in an open-source simulation environment for decision making in automated driving.
arXiv Detail & Related papers (2022-02-23T09:36:15Z)
- Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior [135.78858513845233]
STRIVE is a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, like collisions.
To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE.
A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful to improve the given planner.
arXiv Detail & Related papers (2021-12-09T18:03:27Z)
- Neural Motion Planning for Autonomous Parking [6.1805402105389895]
This paper presents a hybrid motion planning strategy that combines a deep generative network with a conventional motion planning method.
The proposed method effectively learns the representations of a given state and improves algorithm performance.
arXiv Detail & Related papers (2021-11-12T14:29:38Z)
- Learning-based Preference Prediction for Constrained Multi-Criteria Path-Planning [12.457788665461312]
Constrained path-planning for Autonomous Ground Vehicles (AGV) is one such application.
We leverage knowledge acquired through offline simulations by training a neural network model to predict the uncertain criterion.
We integrate this model inside a path-planner which can solve problems online.
arXiv Detail & Related papers (2021-08-02T17:13:45Z)
- Integrated Decision and Control: Towards Interpretable and Efficient Driving Intelligence [13.589285628074542]
We present an interpretable and efficient decision and control framework for automated vehicles.
It decomposes the driving task into multi-path planning and optimal tracking that are structured hierarchically.
Results show that our method has better online computing efficiency and driving performance including traffic efficiency and safety.
arXiv Detail & Related papers (2021-03-18T14:43:31Z)
- Deep Policy Dynamic Programming for Vehicle Routing Problems [89.96386273895985]
We propose Deep Policy Dynamic Programming (DPDP) to combine the strengths of learned neural heuristics with those of dynamic programming algorithms.
DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions (a toy sketch of this idea appears after the list below).
We evaluate our framework on the travelling salesman problem (TSP) and the vehicle routing problem (VRP) and show that the neural policy improves the performance of (restricted) DP algorithms.
arXiv Detail & Related papers (2021-02-23T15:33:57Z)
- Distributed Multi-agent Meta Learning for Trajectory Design in Wireless Drone Networks [151.27147513363502]
This paper studies the problem of trajectory design for a group of energy-constrained drones operating in dynamic wireless network environments.
A value decomposition-based reinforcement learning (VDRL) solution and a meta-training mechanism are proposed.
arXiv Detail & Related papers (2020-12-06T01:30:12Z)
- Trajectory Planning for Autonomous Vehicles Using Hierarchical Reinforcement Learning [21.500697097095408]
Planning safe trajectories under uncertain and dynamic conditions makes the autonomous driving problem significantly complex.
Current sampling-based methods such as Rapidly Exploring Random Trees (RRTs) are not ideal for this problem because of the high computational cost.
We propose a Hierarchical Reinforcement Learning structure combined with a Proportional-Integral-Derivative (PID) controller for trajectory planning.
arXiv Detail & Related papers (2020-11-09T20:49:54Z)
- Meta-Reinforcement Learning for Trajectory Design in Wireless UAV Networks [151.65541208130995]
A drone base station (DBS) is dispatched to provide uplink connectivity to ground users whose demand is dynamic and unpredictable.
In this case, the DBS's trajectory must be adaptively adjusted to satisfy the dynamic user access requests.
A meta-learning algorithm is proposed in order to adapt the DBS's trajectory when it encounters novel environments.
arXiv Detail & Related papers (2020-05-25T20:43:59Z)
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
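As referenced in the Deep Policy Dynamic Programming entry above, the core idea there is to let a learned policy restrict the state space of an exact dynamic program. The toy sketch below illustrates that kind of restriction on a tiny TSP solved with a Held-Karp DP; `edge_policy_score` is a hypothetical placeholder for the trained network described in that paper, and none of this code comes from the authors.

```python
# Toy sketch of policy-restricted dynamic programming: a (placeholder) policy
# scores edges, and the exact DP only considers the best-scoring edges per node.
import math
import random

def edge_policy_score(i, j, dist):
    """Placeholder policy: the paper uses a deep net trained on example
    solutions; here we simply favour short edges with a little noise."""
    return -dist[i][j] + random.gauss(0.0, 0.1)

def restricted_edges(dist, keep_per_node=3):
    """Keep only the `keep_per_node` best-scoring outgoing edges of each node."""
    n = len(dist)
    allowed = set()
    for i in range(n):
        ranked = sorted((j for j in range(n) if j != i),
                        key=lambda j: edge_policy_score(i, j, dist), reverse=True)
        for j in ranked[:keep_per_node]:
            allowed.add((i, j))
    return allowed

def held_karp_restricted(dist, allowed):
    """Exact TSP DP over visited-node subsets, with transitions limited to `allowed` edges."""
    n = len(dist)
    best = {(1 << 0, 0): 0.0}  # (visited_mask, last_node) -> cost, starting at node 0
    for mask in range(1 << n):
        for last in range(n):
            cost = best.get((mask, last))
            if cost is None:
                continue
            for nxt in range(n):
                if mask & (1 << nxt) or (last, nxt) not in allowed:
                    continue
                key = (mask | (1 << nxt), nxt)
                cand = cost + dist[last][nxt]
                if cand < best.get(key, math.inf):
                    best[key] = cand
    full = (1 << n) - 1
    tours = [best[(full, last)] + dist[last][0]
             for last in range(n) if (full, last) in best and (last, 0) in allowed]
    return min(tours) if tours else math.inf

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(8)]
    dist = [[math.dist(a, b) for b in pts] for a in pts]
    print("restricted-DP tour length:", held_karp_restricted(dist, restricted_edges(dist)))
```

With `keep_per_node` small, the DP explores far fewer transitions than the unrestricted version; if the policy happens to discard edges that the optimal tour needs, the restricted DP returns a weaker (possibly infinite) value instead, which is the usual trade-off of policy-restricted exact methods.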