Revisiting Bounded-Suboptimal Safe Interval Path Planning
- URL: http://arxiv.org/abs/2006.01195v1
- Date: Mon, 1 Jun 2020 18:42:52 GMT
- Title: Revisiting Bounded-Suboptimal Safe Interval Path Planning
- Authors: Konstantin Yakovlev, Anton Andreychuk, Roni Stern
- Abstract summary: Safe-interval path planning (SIPP) is a powerful algorithm for finding a path in the presence of dynamic obstacles.
In many practical applications of SIPP, such as path planning for robots, one would like to trade off optimality for shorter planning time.
- Score: 16.24691505268453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Safe-interval path planning (SIPP) is a powerful algorithm for finding a path
in the presence of dynamic obstacles. SIPP returns provably optimal solutions.
However, in many practical applications of SIPP, such as path planning for
robots, one would like to trade off optimality for shorter planning time. In
this paper we explore different ways to build a bounded-suboptimal SIPP and
discuss their pros and cons. We compare the different bounded-suboptimal
versions of SIPP experimentally. While there is no universal winner, the
results provide insights into when each method should be used.
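For background, the most direct route to a bounded-suboptimal SIPP is to inflate the heuristic as in Weighted A*: searching with f = g + w·h for w ≥ 1 guarantees a solution within a factor w of optimal. The sketch below is a minimal illustration under assumed interfaces, not the paper's implementation; the SIPP state abstraction (a configuration paired with a safe-interval index) and the `successors`/`heuristic` callbacks are placeholders.
```python
import heapq
from itertools import count

def bounded_suboptimal_sipp(start, is_goal, successors, heuristic, w=1.5):
    """Weighted-A*-style search over SIPP states, i.e. (configuration,
    safe-interval) pairs. `successors(state)` is assumed to yield
    (next_state, earliest_arrival_time) pairs that already respect the
    precomputed safe intervals; with an admissible `heuristic` and w >= 1,
    the arrival time of the returned plan is at most w times optimal."""
    g = {start: 0.0}
    parent = {start: None}
    tie = count()                                  # heap tie-breaker
    frontier = [(w * heuristic(start), next(tie), start)]
    closed = set()
    while frontier:
        _, _, s = heapq.heappop(frontier)
        if s in closed:
            continue
        closed.add(s)
        if is_goal(s):
            cost = g[s]
            plan = []
            while s is not None:                   # backtrack to start
                plan.append(s)
                s = parent[s]
            return plan[::-1], cost
        for nxt, t in successors(s):
            if t < g.get(nxt, float("inf")):
                g[nxt] = t
                parent[nxt] = s
                heapq.heappush(frontier, (t + w * heuristic(nxt), next(tie), nxt))
    return None, float("inf")
```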
Related papers
- Research on reinforcement learning based warehouse robot navigation algorithm in complex warehouse layout [13.945240113332352]
This paper proposes a new method combining Proximal Policy Optimization (PPO) with Dijkstra's algorithm, called Proximal Policy-Dijkstra (PP-D).
The PP-D method realizes efficient strategy learning and real-time decision making through PPO, and uses Dijkstra's algorithm to plan the globally optimal path.
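As a concrete anchor for the global-planning half of such a hybrid, here is a minimal Dijkstra sketch; the graph representation and the division of labor (Dijkstra for the global route, PPO for local real-time decisions) are illustrative assumptions, not the paper's code.
```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a graph given as
    {node: [(neighbor, edge_weight), ...]}. In a PP-D-style hybrid this
    would supply the global route; a learned PPO policy would then handle
    local, real-time decisions along it (that part is not shown)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Example: dijkstra({"A": [("B", 1.0)], "B": [("C", 2.0)], "C": []}, "A")
# -> {"A": 0.0, "B": 1.0, "C": 3.0}
```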
arXiv Detail & Related papers (2024-11-09T09:44:03Z)
- Artificial Intelligence Based Navigation in Quasi Structured Environment [0.0]
This paper examines the working, applications, complexity factors, and advantages & disadvantages of the Floyd-Warshall, Bellman-Ford, Johnson, Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Grey Wolf Optimizer (GWO) algorithms.
The proposed algorithm showed better results with lower time complexity when applied to randomly structured points within a boundary, referred to as quasi-structured points.
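For reference, Floyd-Warshall, the first of the surveyed algorithms, computes all-pairs shortest paths in O(n³); a minimal sketch follows (the edge-dict input format is an assumption for illustration):
```python
def floyd_warshall(n, edges):
    """All-pairs shortest paths on nodes 0..n-1; `edges` maps (i, j) to a
    nonnegative weight. Classic O(n^3) dynamic programming."""
    INF = float("inf")
    dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), w in edges.items():
        dist[i][j] = min(dist[i][j], w)
    for k in range(n):                 # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```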
arXiv Detail & Related papers (2024-07-08T06:42:02Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
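A hedged sketch of what such a combined per-example objective can look like: a logistic preference loss on the policy log-probability margin plus an SFT negative log-likelihood term acting as the regularizer. The exact form and the hyperparameters `beta` and `lam` are illustrative assumptions, not the paper's objective.
```python
import math

def regularized_preference_loss(logp_chosen, logp_rejected, logp_sft,
                                beta=0.1, lam=1.0):
    """Per-example loss: a logistic preference term on the log-probability
    margin plus a supervised (SFT) negative log-likelihood term that
    regularizes against over-optimizing the learned reward. `beta`, `lam`
    and the exact form are illustrative, not the paper's objective."""
    margin = beta * (logp_chosen - logp_rejected)
    preference_loss = math.log1p(math.exp(-margin))  # -log(sigmoid(margin))
    sft_loss = -logp_sft                             # NLL of the preferred answer
    return preference_loss + lam * sft_loss
```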
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Path Planning in a dynamic environment using Spherical Particle Swarm Optimization [0.0]
A Dynamic Path Planner (DPP) for UAVs using the Spherical Vector-based Particle Swarm Optimization (SPSO) technique is proposed in this study.
The path is constructed as a set of waypoints that stand as re-planning checkpoints. Path length, safety, attitude, and path smoothness are all taken into account when deciding what an optimal path should be.
Four test scenarios are carried out using real digital elevation models. Each test gives different priorities to path length and safety, in order to show how well SPSO-DPP is capable of generating safe yet efficient path segments.
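To make the multi-criteria trade-off concrete, here is a hedged sketch of a weighted path-scoring function over waypoints; the particular terms, weights, and the `obstacle_clearance` callback are illustrative assumptions (the attitude term is omitted for brevity), not the paper's cost model.
```python
import math

def path_cost(waypoints, obstacle_clearance,
              w_length=1.0, w_safety=1.0, w_smooth=0.5):
    """Score a candidate waypoint path (list of 2D points) by length,
    safety (inverse clearance from obstacles), and smoothness (sum of
    turning angles); lower is better. Weights are illustrative."""
    length = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    safety = sum(1.0 / max(obstacle_clearance(p), 1e-6) for p in waypoints)
    smooth = 0.0
    for a, b, c in zip(waypoints, waypoints[1:], waypoints[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 > 0 and n2 > 0:
            cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
            smooth += math.acos(max(-1.0, min(1.0, cos_angle)))
    return w_length * length + w_safety * safety + w_smooth * smooth
```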
arXiv Detail & Related papers (2024-03-19T13:56:34Z)
- Surpassing legacy approaches to PWR core reload optimization with single-objective Reinforcement learning [0.0]
We have developed methods based on Deep Reinforcement Learning (DRL) for both single- and multi-objective optimization.
In this paper, we demonstrate the advantage of our RL-based approach, specifically using Proximal Policy Optimization (PPO).
PPO adapts its search capability via a policy with learnable weights, allowing it to function as both a global and local search method.
arXiv Detail & Related papers (2024-02-16T19:35:58Z)
- Towards Efficient Exact Optimization of Language Model Alignment [93.39181634597877]
Direct preference optimization (DPO) was proposed to directly optimize the policy from preference data.
We show that DPO, derived from the optimal solution of the problem, leads to a compromised mean-seeking approximation of the optimal solution in practice.
We propose efficient exact optimization (EXO) of the alignment objective.
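The mean-seeking versus mode-seeking distinction can be read through the forward/reverse KL lens: minimizing forward KL spreads probability mass over all modes of the target, while reverse KL commits to one. A small self-contained numeric illustration of that asymmetry (not the paper's actual objective):
```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions given as equal-length lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A bimodal target and two candidate approximations:
target = [0.495, 0.01, 0.495]
mode   = [0.99, 0.005, 0.005]   # commits all mass to one mode
mean   = [0.34, 0.32, 0.34]     # spreads mass across both modes

# Reverse KL, KL(approx || target), is mode-seeking: the committed
# approximation scores better ...
print(kl(mode, target), kl(mean, target))    # ~0.66 vs ~0.85
# ... while forward KL, KL(target || approx), is mean-seeking: the
# spread-out approximation scores better.
print(kl(target, mode), kl(target, mean))    # ~1.94 vs ~0.34
```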
arXiv Detail & Related papers (2024-02-01T18:51:54Z)
- Understanding the Effect of Stochasticity in Policy Optimization [86.7574122154668]
We show that the preferability of optimization methods depends critically on whether exact gradients are used.
Second, to explain these findings, we introduce the concept of the committal rate for policy optimization.
Third, we show that in the absence of external oracle information, there is an inherent trade-off between exploiting geometry to accelerate convergence versus achieving optimality almost surely.
arXiv Detail & Related papers (2021-10-29T06:35:44Z)
- Learning Space Partitions for Path Planning [54.475949279050596]
PlaLaM outperforms existing path planning methods in 2D navigation tasks, especially in the presence of difficult-to-escape local optima.
These gains transfer to highly multimodal real-world tasks, where we outperform strong baselines in compiler phase ordering by up to 245% and in molecular design by up to 0.4 on properties on a 0-1 scale.
arXiv Detail & Related papers (2021-06-19T18:06:11Z)
- Sparsification for Fast Optimal Multi-Robot Path Planning in Lazy Compilation Schemes [7.766921168069532]
Multi-robot path planning (MRPP) is the task of finding non-colliding paths through which robots can navigate from their initial positions to specified goal positions.
In this paper, we enhance an existing SAT-based algorithm for MRPP via sparsification of the set of candidate paths for each robot, from which the target Boolean encoding is derived.
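A hedged sketch of the sparsification idea: keep only the k cheapest candidate paths per robot, then derive Boolean "exactly one path per robot" constraints over the kept candidates. The input format and the pairwise at-most-one encoding are illustrative assumptions, not the paper's encoding.
```python
from itertools import islice, combinations

def encode_path_choice(candidate_paths, k=5):
    """Sparsify to the k cheapest candidate paths per robot, then emit CNF
    clauses (lists of signed ints, DIMACS-style) forcing each robot to pick
    exactly one kept path. `candidate_paths` maps each robot to an iterable
    of its paths in nondecreasing cost order; collision clauses between
    conflicting choices would be added analogously (not shown)."""
    kept = {r: list(islice(paths, k)) for r, paths in candidate_paths.items()}
    var = {}                                   # (robot, path_index) -> SAT variable
    clauses = []
    for r, paths in kept.items():
        ids = []
        for p_idx in range(len(paths)):
            var[(r, p_idx)] = len(var) + 1
            ids.append(var[(r, p_idx)])
        clauses.append(ids)                    # at least one path per robot
        for a, b in combinations(ids, 2):
            clauses.append([-a, -b])           # at most one path per robot
    return kept, var, clauses
```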
arXiv Detail & Related papers (2021-03-08T00:57:42Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proven to achieve the best-available convergence for non-PL objectives while simultaneously outperforming existing algorithms on PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
- Provably Efficient Exploration in Policy Optimization [117.09887790160406]
This paper proposes an Optimistic variant of the Proximal Policy Optimization algorithm (OPPO).
OPPO achieves $\tilde{O}(\sqrt{d^2 H^3 T})$ regret.
To the best of our knowledge, OPPO is the first provably efficient policy optimization algorithm that explores.
arXiv Detail & Related papers (2019-12-12T08:40:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.