Safe Interval Path Planning With Kinodynamic Constraints
- URL: http://arxiv.org/abs/2302.00776v1
- Date: Wed, 1 Feb 2023 22:15:58 GMT
- Title: Safe Interval Path Planning With Kinodynamic Constraints
- Authors: Zain Alabedeen Ali and Konstantin Yakovlev
- Abstract summary: The original SIPP algorithm relies on the assumption that the agent is able to stop instantaneously.
We introduce a novel variant of SIPP that is provably complete and optimal for planning with acceleration/deceleration.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Safe Interval Path Planning (SIPP) is a powerful algorithm for solving
the single-agent pathfinding problem when the agent is confined to a graph and
certain vertices/edges of this graph are blocked at certain time intervals due
to dynamic obstacles that populate the environment. The original SIPP algorithm
relies on the assumption that the agent is able to stop instantaneously.
However, this assumption often does not hold in practice, e.g., a mobile robot
moving at cruising speed cannot stop immediately but rather requires gradual
deceleration to a full stop, which takes time. In other words, the robot is
subject to kinodynamic constraints. Unfortunately, as we show in this work, in
such a case the original SIPP is incomplete. To address this, we introduce a
novel variant of SIPP that is provably complete and optimal for planning with
acceleration/deceleration. In the experimental evaluation we show that the key
property of the original SIPP still holds for the modified version: it performs
far fewer expansions than A* and, as a result, is notably faster.
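To make the safe-interval idea concrete, below is a minimal Python sketch, not the authors' implementation: the helper names, the blocked intervals, and the speed/deceleration values are all hypothetical. It computes a vertex's safe intervals as the complement of its blocked intervals, then uses a constant-deceleration stopping-time calculation to show why an agent that cannot stop instantaneously may be unable to wait inside a safe interval the way classic SIPP assumes:

```python
# Minimal illustrative sketch (hypothetical names and values throughout).
from typing import List, Tuple

Interval = Tuple[float, float]  # (start, end), times in seconds

def safe_intervals(blocked: List[Interval], horizon: float) -> List[Interval]:
    """Return the complement of the blocked intervals on [0, horizon)."""
    safe: List[Interval] = []
    t = 0.0
    for b_start, b_end in sorted(blocked):
        if t < b_start:
            safe.append((t, b_start))
        t = max(t, b_end)
    if t < horizon:
        safe.append((t, horizon))
    return safe

def stop_time(speed: float, decel: float) -> float:
    """Time to brake from `speed` to a full stop at constant deceleration."""
    return speed / decel

# A vertex blocked by dynamic obstacles during [4, 6) and [9, 12):
print(safe_intervals([(4.0, 6.0), (9.0, 12.0)], horizon=20.0))
# -> [(0.0, 4.0), (6.0, 9.0), (12.0, 20.0)]

# Classic SIPP treats any arrival inside a safe interval as a state where the
# agent may simply wait. If the agent arrives at cruising speed, waiting first
# requires braking, which consumes part of the interval:
cruise, decel = 2.0, 0.5  # m/s and m/s^2, hypothetical values
print(stop_time(cruise, decel))  # 4.0 s of braking before the agent can wait
```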
Related papers
- Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such sign-based update algorithm influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
arXiv Detail & Related papers (2023-12-03T02:26:58Z)
- Efficient Real-time Path Planning with Self-evolving Particle Swarm Optimization in Dynamic Scenarios [6.951981832970596]
A Tensor Operation Form (TOF) converts particle-wise manipulations to tensor operations.
Self-Evolving Particle Swarm Optimization (SEPSO) is developed.
SEPSO is capable of generating superior paths with considerably better real-time performance.
arXiv Detail & Related papers (2023-08-20T05:31:48Z)
- Non-stationary Online Convex Optimization with Arbitrary Delays [50.46856739179311]
This paper investigates the delayed online convex optimization (OCO) in non-stationary environments.
We first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each delayed gradient according to its arrival order (a toy sketch follows this entry).
We develop an improved algorithm, which reduces those dynamic regret bounds achieved by DOGD to $O(\sqrt{\bar{d}T(P_T+1)})$.
arXiv Detail & Related papers (2023-05-20T07:54:07Z)
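Since the entry above gives only a one-line description of the DOGD update, a hedged toy sketch may help; the step size, the feedback model, and the Euclidean-ball projection below are illustrative assumptions, not details from the paper:

```python
# Toy sketch of the "one descent step per delayed gradient, in arrival order"
# idea; all names and parameters here are hypothetical.
import numpy as np

def dogd_round(x: np.ndarray, arrived_grads, eta: float, radius: float = 1.0):
    """Process the gradients that arrived this round, in arrival order."""
    for g in arrived_grads:      # each g may belong to an earlier round
        x = x - eta * g          # one descent step per delayed gradient
        norm = np.linalg.norm(x)
        if norm > radius:        # keep iterates in the feasible ball
            x = x * (radius / norm)
    return x

# Example: two stale gradients arrive in the current round.
x = np.zeros(3)
x = dogd_round(x, [np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])], eta=0.1)
print(x)  # [-0.1 -0.2  0. ]
```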
- Faster Approximate Dynamic Programming by Freezing Slow States [5.6928413790238865]
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure.
Such structure is common in real-world problems where sequential decisions need to be made at high frequencies.
We propose an approximate dynamic programming framework based on the idea of "freezing" the slow states.
arXiv Detail & Related papers (2023-01-03T01:35:24Z)
- Entropic Neural Optimal Transport via Diffusion Processes [105.34822201378763]
We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between continuous probability distributions.
Our algorithm is based on the saddle point reformulation of the dynamic version of EOT which is known as the Schrödinger Bridge problem.
In contrast to the prior methods for large-scale EOT, our algorithm is end-to-end and consists of a single learning step.
arXiv Detail & Related papers (2022-11-02T14:35:13Z)
- Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization [60.72410937614299]
We propose a new tractable bi-level optimization problem, and design and analyze a new set of algorithms termed Fast Bi-level AT (FAST-BAT).
FAST-BAT is capable of defending against sign-based projected gradient descent (PGD) attacks without calling any gradient sign method or explicit robust regularization.
arXiv Detail & Related papers (2021-12-23T06:25:36Z)
- Prioritized SIPP for Multi-Agent Path Finding With Kinematic Constraints [0.0]
Multi-Agent Path Finding (MAPF) is a long-standing problem in Robotics and Artificial Intelligence.
We present a method that mitigates this issue, i.e., the common reliance of MAPF solvers on simplified agent models that ignore kinematic constraints, to a certain extent.
arXiv Detail & Related papers (2021-08-11T10:42:11Z)
- ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks [124.33353902111939]
We propose ADOM - an accelerated method for smooth and strongly convex decentralized optimization over time-varying networks.
Up to a constant factor, which depends on the network structure only, its communication complexity is the same as that of Nesterov's accelerated gradient method.
arXiv Detail & Related papers (2021-02-18T09:37:20Z)
- Parameter-free Locally Accelerated Conditional Gradients [91.19349793915615]
We introduce a novel, Parameter-Free Locally accelerated CG (PF-LaCG) algorithm, for which we provide rigorous convergence guarantees.
Our theoretical results are complemented by numerical experiments, which demonstrate local acceleration and showcase the practical improvements of PF-LaCG over non-accelerated algorithms.
arXiv Detail & Related papers (2021-02-12T22:50:01Z)
- Escaping Saddle Points Efficiently with Occupation-Time-Adapted Perturbations [12.251606057991237]
Motivated by the super-diffusivity of self-repelling random walk, this paper develops a new perturbation mechanism for optimization algorithms.
Two new algorithms are proposed: perturbed gradient descent adapted to occupation time and its accelerated version.
arXiv Detail & Related papers (2020-05-09T19:58:23Z)
- Gradient descent with momentum --- to accelerate or to super-accelerate? [0.0]
We show that the algorithm can be improved by extending this 'acceleration'.
Super-acceleration is also easy to incorporate into adaptive algorithms like RMSProp or Adam.
arXiv Detail & Related papers (2020-01-17T18:50:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.