T$^{\star}$-Lite: A Fast Time-Risk Optimal Motion Planning Algorithm for
Multi-Speed Autonomous Vehicles
- URL: http://arxiv.org/abs/2008.13048v1
- Date: Sat, 29 Aug 2020 20:25:43 GMT
- Title: T$^{\star}$-Lite: A Fast Time-Risk Optimal Motion Planning Algorithm for
Multi-Speed Autonomous Vehicles
- Authors: James P. Wilson and Zongyuan Shen and Shalabh Gupta and Thomas A.
Wettergren
- Abstract summary: T$^{\star}$-Lite is a significantly faster version of the previously developed T$^{\star}$ algorithm.
T$^{\star}$-Lite is developed using the RRT$^{\star}$ motion planner.
- Score: 0.860255319568951
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we develop a new algorithm, called T$^{\star}$-Lite, that
enables fast time-risk optimal motion planning for variable-speed autonomous
vehicles. The T$^{\star}$-Lite algorithm is a significantly faster version of
the previously developed T$^{\star}$ algorithm. T$^{\star}$-Lite uses the novel
time-risk cost function of T$^{\star}$; however, instead of a grid-based
approach, it uses an asymptotically optimal sampling-based motion planner.
Furthermore, it utilizes the recently developed Generalized Multi-speed Dubins
Motion-model (GMDM) for sample-to-sample kinodynamic motion planning. The
sample-based approach and GMDM significantly reduce the computational burden of
T$^{\star}$ while providing reasonable solution quality. The sample points are
drawn from a four-dimensional configuration space consisting of two position
coordinates plus vehicle heading and speed. Specifically, T$^{\star}$-Lite
enables the motion planner to select the vehicle speed and direction based on
its proximity to the obstacle to generate faster and safer paths. In this
paper, T$^{\star}$-Lite is developed using the RRT$^{\star}$ motion planner,
but adaptation to other motion planners is straightforward and depends on the
needs of the planner.
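The 4D sampling idea described in the abstract (position, heading, and speed, with speed selected based on proximity to obstacles) can be sketched roughly as follows. This is an illustrative assumption-laden sketch, not the authors' implementation: the obstacle layout, speed bounds, and clearance heuristic are all hypothetical.

```python
import math
import random

# Hypothetical sketch of T*-Lite-style 4D configuration sampling:
# a sample is (x, y, heading, speed), and the allowed speed range is
# narrowed near obstacles so the planner favors slower, safer motion
# there. All names and bounds below are illustrative assumptions.

OBSTACLES = [(5.0, 5.0, 1.0)]  # circular obstacles: (cx, cy, radius)
V_MIN, V_MAX = 0.5, 2.0        # assumed vehicle speed limits

def clearance(x, y):
    """Distance from (x, y) to the nearest obstacle boundary."""
    return min(math.hypot(x - cx, y - cy) - r for cx, cy, r in OBSTACLES)

def sample_config(x_max=10.0, y_max=10.0, safe_dist=3.0):
    """Draw one (x, y, heading, speed) sample; cap speed near obstacles."""
    x, y = random.uniform(0.0, x_max), random.uniform(0.0, y_max)
    theta = random.uniform(-math.pi, math.pi)
    # Scale the allowed top speed by proximity: full V_MAX is available
    # only when the point is at least `safe_dist` from every obstacle.
    frac = max(0.0, min(1.0, clearance(x, y) / safe_dist))
    v = random.uniform(V_MIN, V_MIN + frac * (V_MAX - V_MIN))
    return (x, y, theta, v)

samples = [sample_config() for _ in range(1000)]
near = [s for s in samples if clearance(s[0], s[1]) < 1.0]
```

In an RRT$^{\star}$-style planner, each such sample would then be connected to its neighbors with a kinodynamic steering function (GMDM in the paper) and scored by the time-risk cost; here only the sampling step is shown.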
Related papers
- Theory of Optimal Learning Rate Schedules and Scaling Laws for a Random Feature Model [19.00191673972499]
We explore a solvable model of optimal learning rate schedules for a power-law random feature model trained with stochastic gradient descent (SGD). In the hard phase, the optimal schedule resembles warmup-stable-decay with a constant (in $T$) initial learning rate applied over a vanishing fraction of training steps. Our model also predicts the compute-optimal scaling laws (where model size and training steps are chosen jointly) in both easy and hard regimes.
arXiv Detail & Related papers (2026-02-04T17:11:36Z) - Multi-robot Path Planning and Scheduling via Model Predictive Optimal Transport (MPC-OT) [5.013737017051114]
We consider a setup where $M$ robots are tasked to navigate to $M$ targets in a common space while avoiding deadlocks and obstacles. We derive a strategy based on optimal transport that guarantees non-overlapping trajectories. We show that a temporal structure can be integrated into optimal transport with the help of model predictive planning.
arXiv Detail & Related papers (2025-08-28T20:47:33Z) - Enhanced UAV Path Planning Using the Tangent Intersection Guidance (TIG) Algorithm [0.0]
Tangent Intersection Guidance (TIG) is an advanced approach for UAV path planning in both static and dynamic environments. It generates two sub-paths for each threat, selects the optimal route based on a heuristic rule, and iteratively refines the path until the target is reached. TIG demonstrates efficient real-time path planning capabilities for collision avoidance, outperforming the APF and Dynamic APPATT algorithms.
arXiv Detail & Related papers (2025-08-26T12:11:59Z) - Fast and scalable Wasserstein-1 neural optimal transport solver for single-cell perturbation prediction [55.89763969583124]
Optimal transport theory provides a principled framework for constructing such mappings.
We propose a novel optimal transport solver based on Wasserstein-1.
Our experiments demonstrate that the proposed solver can mimic the $W_2$ OT solvers in finding a unique and "monotonic" map on 2D datasets.
arXiv Detail & Related papers (2024-11-01T14:23:19Z) - Decentralized Social Navigation with Non-Cooperative Robots via Bi-Level
Optimization [11.638394339813154]
This paper presents a fully decentralized approach for real-time non-cooperative multi-robot navigation in social mini-games.
Our contribution is a new real-time bi-level optimization algorithm, in which the top-level optimization computes a fair and collision-free ordering.
We successfully deploy the proposed algorithm in the real world using F1/10 robots, a Clearpath Jackal, and a Boston Dynamics Spot.
arXiv Detail & Related papers (2023-06-15T02:18:21Z) - Optimal Horizon-Free Reward-Free Exploration for Linear Mixture MDPs [60.40452803295326]
We propose a new reward-free algorithm for learning linear mixture Markov decision processes (MDPs).
At the core of our algorithm is uncertainty-weighted value-targeted regression with exploration-driven pseudo-reward.
We show that our algorithm only needs to explore $\tilde{O}(d^2\varepsilon^{-2})$ episodes to find an $\varepsilon$-optimal policy.
arXiv Detail & Related papers (2023-03-17T17:53:28Z) - Matching Pursuit Based Scheduling for Over-the-Air Federated Learning [67.59503935237676]
This paper develops a class of low-complexity device scheduling algorithms for over-the-air federated learning.
Compared to the state-of-the-art scheme, the proposed scheme has a drastically lower computational complexity.
The efficiency of the proposed scheme is confirmed via experiments on the CIFAR dataset.
arXiv Detail & Related papers (2022-06-14T08:14:14Z) - Near-Optimal Goal-Oriented Reinforcement Learning in Non-Stationary
Environments [40.027926921772355]
We study dynamic regret for goal-oriented reinforcement learning.
The different roles of $\Delta_c$ and $\Delta_P$ in this lower bound inspire us to design algorithms that estimate costs and transitions separately.
arXiv Detail & Related papers (2022-05-25T20:29:01Z) - T*$\varepsilon$ -- Bounded-Suboptimal Efficient Motion Planning for
Minimum-Time Planar Curvature-Constrained Systems [7.277760003553328]
We consider the problem of finding collision-free paths for curvature-constrained systems in the presence of obstacles.
We show that by finding bounded-suboptimal solutions, one can dramatically reduce the number of time-optimal transitions used.
arXiv Detail & Related papers (2022-04-04T17:38:36Z) - Asynchronous Stochastic Optimization Robust to Arbitrary Delays [54.61797739710608]
We consider optimization with delayed gradients where, at each time step $t$, the algorithm makes an update using a stale gradient from step $t - d_t$ for an arbitrary delay $d_t$.
Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
arXiv Detail & Related papers (2021-06-22T15:50:45Z) - Randomized Exploration for Reinforcement Learning with General Value
Function Approximation [122.70803181751135]
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
arXiv Detail & Related papers (2021-06-15T02:23:07Z) - Minimax Regret for Stochastic Shortest Path [63.45407095296692]
We study the Stochastic Shortest Path (SSP) problem in which an agent has to reach a goal state at minimum total expected cost.
We show that the minimax regret for this setting is $\widetilde{O}(B_\star \sqrt{|S||A|K})$ where $B_\star$ is a bound on the expected cost of the optimal policy from any state.
Our algorithm runs in polynomial time per episode, and is based on a novel reduction to reinforcement learning in finite-horizon MDPs.
arXiv Detail & Related papers (2021-03-24T10:11:49Z) - Autonomous Drone Racing with Deep Reinforcement Learning [39.757652701917166]
In many robotic tasks, such as drone racing, the goal is to travel through a set of waypoints as fast as possible.
A key challenge is planning the minimum-time trajectory, which is typically solved by assuming perfect knowledge of the waypoints to pass in advance.
In this work, a new approach to minimum-time trajectory generation for quadrotors is presented.
arXiv Detail & Related papers (2021-03-15T18:05:49Z) - Nearly Minimax Optimal Reward-free Reinforcement Learning [88.75843804630772]
We study the reward-free reinforcement learning framework, which is particularly suitable for batch reinforcement learning and scenarios where one needs policies for multiple reward functions.
We give a new efficient algorithm, Staged Sampling + Truncated Planning (SSTP), which interacts with the environment in at most $O\left(\frac{S^2 A}{\epsilon^2}\,\mathrm{polylog}\left(\frac{SAH}{\epsilon^2}\right)\right)$ episodes.
arXiv Detail & Related papers (2020-10-12T17:51:19Z) - Path Planning Followed by Kinodynamic Smoothing for Multirotor Aerial
Vehicles (MAVs) [61.94975011711275]
We propose a geometrically based motion planning technique, "RRT*", for this purpose.
In the proposed technique, we modified the original RRT* by introducing an adaptive search space and a steering function.
We have tested the proposed technique in various simulated environments.
arXiv Detail & Related papers (2020-08-29T09:55:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.