Revisiting the Complexity Analysis of Conflict-Based Search: New
Computational Techniques and Improved Bounds
- URL: http://arxiv.org/abs/2104.08759v1
- Date: Sun, 18 Apr 2021 07:46:28 GMT
- Authors: Ofir Gordon, Yuval Filmus, Oren Salzman
- Abstract summary: The state-of-the-art approach to computing optimal solutions is Conflict-Based Search (CBS).
We revisit the complexity analysis of CBS to provide tighter bounds on the algorithm's run-time in the worst-case.
- Score: 5.158632635415881
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of Multi-Agent Path Finding (MAPF) calls for finding a set of
conflict-free paths for a fleet of agents operating in a given environment.
Arguably, the state-of-the-art approach to computing optimal solutions is
Conflict-Based Search (CBS). In this work we revisit the complexity analysis of
CBS to provide tighter bounds on the algorithm's run-time in the worst-case.
Our analysis paves the way to better pinpoint the parameters that govern (in
the worst case) the algorithm's computational complexity.
Our analysis is based on two complementary approaches: In the first approach
we bound the run-time using the size of a Multi-valued Decision Diagram (MDD)
-- a layered graph which compactly contains all possible single-agent paths
between two given vertices for a specific path length.
In the second approach we express the running time by a novel recurrence
relation which bounds the algorithm's complexity. We use generating
functions-based analysis in order to tightly bound the recurrence.
Using these techniques we provide several new upper bounds on CBS's
complexity. The results allow us to improve the existing bound on the running
time of CBS for many cases. For example, on a set of common benchmarks we
improve the upper-bound by a factor of at least $2^{10^{7}}$.
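The MDD mentioned in the abstract can be sketched concretely: layer t of the diagram keeps exactly those vertices reachable from the start in t steps that can still reach the goal in the remaining steps, so the layered edges compactly encode every single-agent path of the given length. The following is an illustrative sketch only (the graph representation, function name, and parameters are assumptions, not the paper's code):

```python
from collections import defaultdict

def build_mdd(edges, source, target, path_length):
    """Build a Multi-valued Decision Diagram (MDD) for one agent:
    a layered graph whose layer t contains every vertex that is
    reachable from `source` in exactly t steps AND can reach `target`
    in exactly path_length - t further steps. Edges between
    consecutive layers then encode all paths of exactly `path_length`
    steps. Illustrative sketch; names and representation are assumed."""
    succ, pred = defaultdict(set), defaultdict(set)
    for u, v in edges:
        succ[u].add(v)
        pred[v].add(u)
    # forward[t]: vertices reachable from source in exactly t steps
    forward = [{source}]
    for _ in range(path_length):
        forward.append({v for u in forward[-1] for v in succ[u]})
    # backward[k]: vertices that can reach target in exactly k steps
    backward = [{target}]
    for _ in range(path_length):
        backward.append({u for v in backward[-1] for u in pred[v]})
    # keep only vertices lying on some full-length source-target path
    layers = [forward[t] & backward[path_length - t]
              for t in range(path_length + 1)]
    mdd_edges = [(t, u, v)
                 for t in range(path_length)
                 for u in layers[t]
                 for v in succ[u] & layers[t + 1]]
    return layers, mdd_edges
```

In a MAPF setting, wait actions would be modeled by adding a self-loop edge at each vertex before building the diagram; the analysis in the paper bounds CBS's run-time in terms of the size of diagrams like this one.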
Related papers
- Query-Efficient Correlation Clustering with Noisy Oracle [17.11782578276788]
We introduce two novel formulations of online learning problems rooted in the paradigm of Pure Exploration in Combinatorial Multi-Armed Bandits (PE-CMAB).
We design algorithms that combine a sampling strategy with a classic approximation algorithm for correlation and study their theoretical guarantees.
Our results are the first examples of polynomial-time algorithms that work for the case of PE-CMAB in which the underlying offline optimization problem is NP-hard.
arXiv Detail & Related papers (2024-02-02T13:31:24Z) - Decision Diagram-Based Branch-and-Bound with Caching for Dominance and
Suboptimality Detection [9.175779296469194]
This paper presents new ingredients to speed up the search by exploiting the structure of dynamic programming models.
The key idea is to prevent the repeated expansion of nodes corresponding to the same dynamic programming states by querying expansion thresholds cached throughout the search.
Experiments show that the pruning brought by this caching mechanism allows significantly reducing the number of nodes expanded by the algorithm.
arXiv Detail & Related papers (2022-11-22T10:18:33Z) - Finite-Time Error Bounds for Greedy-GQ [20.51105692499517]
We show that the Greedy-GQ algorithm converges at a fast finite-time rate.
Our analysis provides guidance on choosing step-sizes for faster convergence.
arXiv Detail & Related papers (2022-09-06T15:04:57Z) - Enhanced Methods for the Weight Constrained Shortest Path Problem [18.567812400186092]
The Weight Constrained Shortest Path Problem (WCSPP) is a well-studied, yet challenging, topic in AI.
This paper presents two new solution approaches to the WCSPP on the basis of A* search.
We empirically evaluate the performance of our algorithms on a set of large and realistic problem instances.
arXiv Detail & Related papers (2022-07-29T15:32:45Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis
and Application to Actor-Critic [142.1492359556374]
Bilevel optimization is a class of problems which exhibit a two-level structure.
We propose a two-timescale approximation (TTSA) algorithm for tackling such a bilevel problem.
We show that a two-timescale natural actor-critic policy optimization algorithm can be viewed as a special case of our TTSA framework.
arXiv Detail & Related papers (2020-07-10T05:20:02Z) - Refined bounds for algorithm configuration: The knife-edge of dual class
approximability [94.83809668933021]
We investigate how large a training set should be to ensure that a parameter's average metric performance over the training set is close to its expected, future performance.
We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds.
We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science.
arXiv Detail & Related papers (2020-06-21T15:32:21Z) - Beyond Worst-Case Analysis in Stochastic Approximation: Moment
Estimation Improves Instance Complexity [58.70807593332932]
We study the oracle complexity of gradient-based methods for stochastic approximation problems.
We focus on instance-dependent complexity instead of worst case complexity.
Our proposed algorithm and its analysis provide a theoretical justification for the success of moment estimation.
arXiv Detail & Related papers (2020-06-08T09:25:47Z) - Learning to Accelerate Heuristic Searching for Large-Scale Maximum
Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design, and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings due to either requiring intolerable running time or too many computational resources.
We propose NeuSearcher, which leverages the knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.