Choosing the Right Algorithm With Hints From Complexity Theory
- URL: http://arxiv.org/abs/2109.06584v2
- Date: Sun, 26 Nov 2023 11:10:08 GMT
- Title: Choosing the Right Algorithm With Hints From Complexity Theory
- Authors: Shouda Wang and Weijie Zheng and Benjamin Doerr
- Abstract summary: We show that the Metropolis algorithm is clearly the best of all algorithms considered for reasonable problem sizes.
An artificial algorithm of this type having an $O(n \log n)$ runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can solve the DLB problem in time $O(n \log n)$ with high probability.
- Score: 16.33500498939925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Choosing a suitable algorithm from the myriads of different search heuristics
is difficult when faced with a novel optimization problem. In this work, we
argue that the purely academic question of what could be the best possible
algorithm in a certain broad class of black-box optimizers can give fruitful
indications in which direction to search for good established optimization
heuristics. We demonstrate this approach on the recently proposed DLB
benchmark, for which the only known results are $O(n^3)$ runtimes for several
classic evolutionary algorithms and an $O(n^2 \log n)$ runtime for an
estimation-of-distribution algorithm. Our finding that the unary unbiased
black-box complexity is only $O(n^2)$ suggests the Metropolis algorithm as an
interesting candidate and we prove that it solves the DLB problem in quadratic
time. Since we also prove that better runtimes cannot be obtained in the class
of unary unbiased algorithms, we shift our attention to algorithms that use the
information of more parents to generate new solutions. An artificial algorithm
of this type having an $O(n \log n)$ runtime leads to the result that the
significance-based compact genetic algorithm (sig-cGA) can solve the DLB
problem also in time $O(n \log n)$ with high probability. Our experiments show
a remarkably good performance of the Metropolis algorithm, clearly the best of
all algorithms considered for reasonable problem sizes.
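As a rough illustration of the approach, the following Python sketch runs the Metropolis algorithm on the DLB benchmark. The DLB definition below follows one common formulation of deceptive leading blocks (blocks of two bits, with a deceptive bonus for a 00 block after the prefix of 11-blocks); the acceptance parameter `alpha` and the iteration budget are illustrative choices, not the tuned values from the paper.

```python
import random

def dlb(x):
    """One common formulation of Deceptive Leading Blocks (DLB):
    m leading 11-blocks give fitness 2m; the first non-11 block
    deceptively adds 1 if it is 00; the all-ones string scores n."""
    n = len(x)
    m = 0
    while 2 * m < n and x[2 * m] == 1 and x[2 * m + 1] == 1:
        m += 1
    if 2 * m == n:                      # all blocks are 11: optimum
        return n
    if x[2 * m] == 0 and x[2 * m + 1] == 0:
        return 2 * m + 1                # deceptive local reward for 00
    return 2 * m

def metropolis(f, n, alpha=4.0, max_iters=10**6):
    """Metropolis algorithm: flip one uniformly random bit per step;
    always accept non-worsening moves, and accept a fitness loss of
    d with probability alpha**(-d)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(max_iters):
        i = random.randrange(n)
        x[i] ^= 1                       # flip one bit
        fy = f(x)
        if fy >= fx or random.random() < alpha ** (fy - fx):
            fx = fy                     # accept the move
        else:
            x[i] ^= 1                   # reject: undo the flip
        if fx == n:                     # optimum reached
            break
    return x, fx

best, value = metropolis(dlb, n=40)
print(value)
```

For a suitable constant acceptance parameter, the paper proves that runs of this kind find the optimum of DLB in expected quadratic time.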
Related papers
- A Scalable Algorithm for Individually Fair K-means Clustering [77.93955971520549]
We present a scalable algorithm for the individually fair ($p$, $k$)-clustering problem introduced by Jung et al. and Mahabadi et al.
A clustering is then called individually fair if it has a center within distance $\delta(x)$ of $x$ for each $x \in P$; this condition is sketched in code below.
We show empirically that not only is our algorithm much faster than prior work, but it also produces lower-cost solutions.
arXiv Detail & Related papers (2024-02-09T19:01:48Z)
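As a minimal sketch of the individual-fairness condition quoted above (the names `points`, `centers`, `delta`, and `dist` are hypothetical placeholders, not the paper's API):

```python
import math

def is_individually_fair(points, centers, delta, dist=math.dist):
    """Check the stated condition: every point x has at least one
    center within its personal radius delta(x)."""
    return all(
        any(dist(x, c) <= delta(x) for c in centers)
        for x in points
    )
```

In the $(p, k)$-clustering setting of Jung et al., $\delta(x)$ is commonly taken as the radius of the smallest ball around $x$ that contains $n/k$ of the $n$ input points.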
- Private estimation algorithms for stochastic block models and mixture models [63.07482515700984]
We develop general tools for designing efficient private estimation algorithms.
We give the first efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z)
- Mind the gap: Achieving a super-Grover quantum speedup by jumping to the end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that this guarantee holds for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula.
arXiv Detail & Related papers (2022-12-03T02:45:23Z)
- Universal Quantum Speedup for Branch-and-Bound, Branch-and-Cut, and Tree-Search Algorithms [13.152347996965297]
Mixed Integer Programs (MIPs) model many optimization problems of interest in Computer Science, Operations Research, and Financial Engineering.
Branch-and-Cut algorithms, which combine Branch-and-Bound logic with cutting-plane routines, are at the core of modern MIP solvers; a generic classical Branch-and-Bound skeleton is sketched below.
We propose a quantum algorithm, Incremental-Quantum-Branch-and-Bound, with universal near-quadratic speedup over classical Branch-and-Bound algorithms for every input.
arXiv Detail & Related papers (2022-10-06T21:08:46Z)
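For orientation, here is a generic best-first sketch of the classical Branch-and-Bound logic the entry refers to (maximization; all callbacks are hypothetical placeholders, and neither cutting-plane routines nor the quantum speedup is modeled):

```python
import heapq

def branch_and_bound(root, bound, branch, value, is_feasible):
    """Generic classical Branch-and-Bound (best-first search).
    bound(node) upper-bounds every solution in node's subtree;
    branch(node) returns child subproblems (empty for leaves)."""
    best_val, best_sol = float("-inf"), None
    counter = 0                          # tie-breaker for the heap
    heap = [(-bound(root), counter, root)]
    while heap:
        neg_b, _, node = heapq.heappop(heap)
        if -neg_b <= best_val:
            continue                     # prune: cannot beat incumbent
        if is_feasible(node):
            v = value(node)
            if v > best_val:
                best_val, best_sol = v, node
        for child in branch(node):
            if bound(child) > best_val:  # keep only promising subtrees
                counter += 1
                heapq.heappush(heap, (-bound(child), counter, child))
    return best_val, best_sol
```

Branch-and-Cut would additionally tighten `bound` at each node with cutting planes; the pruning test is where a stronger bound pays off.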
- Classical algorithms and quantum limitations for maximum cut on high-girth graphs [6.262125516926276]
We show that every (quantum or classical) one-local algorithm achieves on $D$-regular graphs of girth $> 5$ a maximum cut of at most $1/2 + C/\sqrt{D}$ for $C = 1/\sqrt{2} \approx 0.7071$.
We show that there is a classical $k$-local algorithm that achieves a value of $1/2 + C/\sqrt{D} - O(1/\sqrt{k})$ for $D$-regular graphs of girth $> 2k+1$.
arXiv Detail & Related papers (2021-06-10T16:28:23Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied to many important machine learning problems.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- First Steps Towards a Runtime Analysis When Starting With a Good Solution [8.34061303235504]
In practical applications it may be possible to guess solutions that are better than random ones.
We show that different algorithms profit to a very different degree from a better initial solution.
This could suggest that evolutionary algorithms that better exploit good initial solutions are still to be found.
arXiv Detail & Related papers (2020-06-22T11:46:42Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
- Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives [10.291482850329892]
Mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems.
We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50{,}000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$.
In addition, we present new estimation error bounds for $\ell_0$-regularized estimators.
arXiv Detail & Related papers (2020-01-17T18:47:02Z)