MRRT: Multiple Rapidly-Exploring Random Trees for Fast Online Replanning
in Dynamic Environments
- URL: http://arxiv.org/abs/2104.11059v1
- Date: Thu, 22 Apr 2021 13:41:48 GMT
- Title: MRRT: Multiple Rapidly-Exploring Random Trees for Fast Online Replanning
in Dynamic Environments
- Authors: Zongyuan Shen, James P. Wilson, Ryan Harvey and Shalabh Gupta
- Abstract summary: The MRRT algorithm is built upon the RRT algorithm with a multi-tree structure.
The RRT algorithm is applied to find an initial solution based on partial knowledge of the environment.
New obstacle configurations are collected by the robot's sensor and used to replan the path.
- Score: 1.290382979353427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel algorithm, called MRRT, which uses multiple
rapidly-exploring random trees for fast online replanning of autonomous
vehicles in dynamic environments with moving obstacles. The proposed algorithm
is built upon the RRT algorithm with a multi-tree structure. At the beginning,
the RRT algorithm is applied to find the initial solution based on partial
knowledge of the environment. Then, the robot starts to execute this path. At
each iteration, the new obstacle configurations are collected by the robot's
sensor and used to replan the path. This new information can come from unknown
static obstacles (e.g., seafloor layout) as well as moving obstacles. Then, to
accommodate the environmental changes, two procedures are adopted: 1) edge
pruning and 2) tree regrowing. The edge pruning procedure checks the collision
status throughout the tree and removes only the invalid edges, preserving the
tree structure of already-explored regions. Because of the removed edges, the
tree may break into multiple disjoint trees; the RRT algorithm is then applied
to regrow them. Specifically, a sample is created randomly and joined to all
the disjoint trees in its local neighborhood by connecting to their nearest
nodes. Finally, a new solution is found for the
robot. The advantages of the proposed MRRT algorithm are as follows: i) it
retains the maximal tree structure by pruning only the edges that collide with
the obstacles, ii) it guarantees probabilistic completeness, and iii) it is
computationally efficient for fast replanning, since all disjoint trees are
maintained for future connections and expanded simultaneously.
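The pruning-and-regrowing loop described in the abstract can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the circular obstacle model, the segment-sampling collision check, the neighborhood radius, and all function names are assumptions introduced for clarity.

```python
import math
import random

def collides(p, q, obstacles, step=0.05):
    """Check whether the segment p-q intersects any circular obstacle
    (each given as (center, radius)), by sampling points along it."""
    n = max(2, int(math.dist(p, q) / step))
    for i in range(n + 1):
        t = i / n
        x = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        if any(math.dist(x, c) <= r for c, r in obstacles):
            return True
    return False

def prune_edges(edges, obstacles):
    """Edge pruning: remove only the edges invalidated by the newly
    sensed obstacles, keeping all other tree structure intact."""
    return [(p, q) for p, q in edges if not collides(p, q, obstacles)]

def disjoint_trees(nodes, edges):
    """Group nodes into the disjoint trees left after pruning,
    using a simple union-find over the surviving edges."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for p, q in edges:
        parent[find(p)] = find(q)
    groups = {}
    for v in nodes:
        groups.setdefault(find(v), []).append(v)
    return list(groups.values())

def regrow_step(trees, obstacles, radius=0.5, bounds=(0.0, 1.0)):
    """One regrowing iteration: draw a random sample and join it to
    every disjoint tree within `radius` via that tree's nearest node."""
    s = (random.uniform(*bounds), random.uniform(*bounds))
    new_edges = []
    for tree in trees:
        nearest = min(tree, key=lambda v: math.dist(v, s))
        if math.dist(nearest, s) <= radius and not collides(nearest, s, obstacles):
            new_edges.append((nearest, s))
    return s, new_edges
```

Because every disjoint tree is kept and a single sample can reconnect several of them at once, already-explored regions are reused rather than rebuilt, which is the source of the claimed replanning efficiency.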
Related papers
- Unbiased and Efficient Sampling of Dependency Trees [0.0]
Most treebanks require that every valid dependency tree has a single edge coming out of the ROOT node.
Zmigrod et al. have recently proposed algorithms for sampling with and without replacement from the single-root dependency tree distribution.
We show that their fastest algorithm for sampling with replacement, Wilson-RC, is in fact biased.
arXiv Detail & Related papers (2022-05-25T09:57:28Z)
- Robustifying Algorithms of Learning Latent Trees with Vector Variables [92.18777020401484]
We present the sample complexities of Recursive Grouping (RG) and Chow-Liu Recursive Grouping (CLRG).
We robustify RG, CLRG, Neighbor Joining (NJ) and Spectral NJ (SNJ) by using the truncated inner product.
We derive the first known instance-dependent impossibility result for structure learning of latent trees.
arXiv Detail & Related papers (2021-06-02T01:37:52Z)
- Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
- Slow-Growing Trees [0.0]
Random Forest's performance can be matched by a single slow-growing tree (SGT).
SGT exploits the view that CART is an extreme case of an iterative weighted least square procedure.
A unifying view of Boosted Trees (BT) and Random Forests (RF) is presented.
arXiv Detail & Related papers (2021-03-02T18:37:13Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a., soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve better or comparable performance than [1],[3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- Rethinking Learnable Tree Filter for Generic Feature Transform [71.77463476808585]
Learnable Tree Filter presents a remarkable approach to model structure-preserving relations for semantic segmentation.
To relax the geometric constraint, we give the analysis by reformulating it as a Markov Random Field and introduce a learnable unary term.
For semantic segmentation, we achieve leading performance (82.1% mIoU) on the Cityscapes benchmark without bells-and-whistles.
arXiv Detail & Related papers (2020-12-07T07:16:47Z)
- An Efficient Adversarial Attack for Tree Ensembles [91.05779257472675]
We study adversarial attacks on tree-based ensembles such as gradient boosting decision trees (DTs) and random forests (RFs).
We show that our method can be thousands of times faster than the previous mixed-integer linear programming (MILP) based approach.
Our code is available at https://chong-z/tree-ensemble-attack.
arXiv Detail & Related papers (2020-10-22T10:59:49Z)
- Convex Polytope Trees [57.56078843831244]
Convex polytope trees (CPT) are proposed to expand the family of decision trees by an interpretable generalization of their decision boundary.
We develop a greedy method to efficiently construct CPT and scalable end-to-end training algorithms for the tree parameters when the tree structure is given.
arXiv Detail & Related papers (2020-10-21T19:38:57Z)
- Learning Binary Decision Trees by Argmin Differentiation [34.9154848754842]
We learn binary decision trees that partition data for some downstream task.
We do so by relaxing a mixed-integer program for the discrete parameters.
We derive customized algorithms to efficiently compute the forward and backward passes.
arXiv Detail & Related papers (2020-10-09T15:11:28Z)
- Born-Again Tree Ensembles [9.307453801175177]
Tree ensembles offer a good prediction quality in various domains, but the concurrent use of multiple trees reduces the interpretability of the ensemble.
We study the process of constructing a single decision tree of minimum size that reproduces the exact same behavior as a given tree ensemble in its entire feature space.
This algorithm generates optimal born-again trees for many datasets of practical interest.
arXiv Detail & Related papers (2020-03-24T22:17:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.