Run Time Bounds for Integer-Valued OneMax Functions
- URL: http://arxiv.org/abs/2307.11855v2
- Date: Mon, 9 Oct 2023 08:49:24 GMT
- Title: Run Time Bounds for Integer-Valued OneMax Functions
- Authors: Jonathan Gadea Harder, Timo Kötzing, Xiaoyue Li, Aishwarya
  Radhakrishnan
- Abstract summary: We consider the search space $\mathbb{Z}^n$, a further generalization of the space of multi-valued decision variables $\{0,\ldots,r-1\}^n$.
We show that RLS with step size adaptation achieves an optimization time of $\Theta(n \cdot \log(|a|_1))$.
We conclude with an empirical analysis, comparing the above algorithms also with a variant of CMA-ES for discrete search spaces.
- Score: 0.9012198585960443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While most theoretical run time analyses of discrete randomized search
heuristics focused on finite search spaces, we consider the search space
$\mathbb{Z}^n$. This is a further generalization of the search space of
multi-valued decision variables $\{0,\ldots,r-1\}^n$.
We consider as fitness functions the distance to the (unique) non-zero
optimum $a$ (based on the $L_1$-metric) and the \ooea which mutates by applying
a step-operator on each component that is determined to be varied. For changing
by $\pm 1$, we show that the expected optimization time is $\Theta(n \cdot
(|a|_{\infty} + \log(|a|_H)))$. In particular, the time is linear in the
maximum value of the optimum $a$. Employing a different step operator which
chooses a step size from a distribution so heavy-tailed that the expectation is
infinite, we get an optimization time of $O(n \cdot \log^2 (|a|_1) \cdot
\left(\log (\log (|a|_1))\right)^{1 + \epsilon})$.
Furthermore, we show that RLS with step size adaptation achieves an
optimization time of $\Theta(n \cdot \log(|a|_1))$.
We conclude with an empirical analysis, comparing the above algorithms also
with a variant of CMA-ES for discrete search spaces.
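The step-size-adaptive RLS result above can be illustrated with a small sketch: minimize $f(x) = |x - a|_1$ over $\mathbb{Z}^n$ by varying one uniformly chosen component per iteration, doubling that component's step size after an improving move and halving it otherwise. This is a minimal illustration under assumed details (per-component step sizes, multiplicative adaptation by a factor of 2, all names invented here), not the exact algorithm analyzed in the paper.

```python
import random

def rls_step_adapt(a, max_iters=100_000, seed=0):
    """Minimize f(x) = ||x - a||_1 over Z^n with RLS and multiplicative
    step-size adaptation: double the step on success, halve on failure."""
    rng = random.Random(seed)
    n = len(a)
    x = [0] * n                      # start at the origin
    v = [1] * n                      # per-component step sizes
    fitness = sum(abs(xi - ai) for xi, ai in zip(x, a))
    for _ in range(max_iters):
        if fitness == 0:
            break
        i = rng.randrange(n)         # vary one uniformly chosen component
        y = x[i] + rng.choice((-1, 1)) * v[i]
        old_term = abs(x[i] - a[i])
        new_term = abs(y - a[i])
        if new_term < old_term:      # strict improvement: accept, double step
            fitness += new_term - old_term
            x[i] = y
            v[i] *= 2
        else:                        # failure: halve step, but keep it >= 1
            v[i] = max(1, v[i] // 2)
    return x, fitness

x, f = rls_step_adapt([17, -5, 1023, 0])
```

Doubling on success lets the step size reach $|a_i|$ in logarithmically many improving moves, which is the mechanism behind the $\Theta(n \cdot \log(|a|_1))$ bound; halving on failure prevents persistent overshooting near the optimum.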
Related papers
- Mini-Batch Kernel $k$-means [4.604003661048267]
A single iteration of our algorithm takes $\widetilde{O}(kb^2)$ time, significantly faster than the $O(n^2)$ time required by the full batch kernel $k$-means.
Experiments demonstrate that our algorithm consistently achieves a 10-100x speedup with minimal loss in quality.
arXiv Detail & Related papers (2024-10-08T10:59:14Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Convergence Rates for Non-Log-Concave Sampling and Log-Partition
Estimation [0.0]
It is known that for $m$-times differentiable functions in $d$ dimensions, the optimal rate for algorithms with $n$ function evaluations is $n^{-m/d}$.
We show that similar rates for sampling and log-partition computation are possible, and study whether they can be realized in time with a rate independent of $d$.
arXiv Detail & Related papers (2023-03-06T15:53:44Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, statement (a) is the case.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Sampling from Log-Concave Distributions with Infinity-Distance
Guarantees and Applications to Differentially Private Optimization [33.38289436686841]
We present an algorithm that outputs a point from a distribution $O(\varepsilon)$-close to the target $\pi$ in infinity-distance.
We also present a "soft-$\pi$" version of the Dikin walk which may be of independent interest.
arXiv Detail & Related papers (2021-11-07T13:44:50Z) - Asynchronous Stochastic Optimization Robust to Arbitrary Delays [54.61797739710608]
We consider optimization with delayed gradients where, at each time step $t$, the algorithm makes an update using the gradient evaluated at the stale iterate from step $t - d_t$, for an arbitrary delay $d_t$.
Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
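The delayed-gradient update rule described above can be sketched in a few lines: the step at time $t$ applies the gradient evaluated at the iterate from step $t - d_t$. The toy quadratic objective, learning rate, and delay sequence below are illustrative assumptions, not details from the paper.

```python
def delayed_sgd(grad, x0, delays, lr=0.1):
    """Gradient descent where the update at step t uses the gradient
    evaluated at the (stale) iterate from step t - d_t."""
    history = [x0]                      # past iterates, indexed by step
    x = x0
    for t, d in enumerate(delays):
        stale = history[max(0, t - d)]  # iterate from d_t steps ago
        x = x - lr * grad(stale)        # update with the stale gradient
        history.append(x)
    return x

# Toy objective f(x) = x^2 / 2 with gradient x; a skewed, repeating
# delay pattern still lets the iterates contract toward the minimum 0.
final = delayed_sgd(lambda x: x, x0=10.0, delays=[0, 3, 1, 5, 0, 2] * 50)
```

With a bounded delay and a small enough learning rate the stale steps still point in a descent direction on this quadratic, which is the intuition the paper's robustness guarantees make rigorous for arbitrary delay sequences.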
arXiv Detail & Related papers (2021-06-22T15:50:45Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\Omega(\frac{1}{\lambda\epsilon})$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - A Rigorous Runtime Analysis of the $(1 + (\lambda, \lambda))$ GA on Jump
Functions [8.34061303235504]
We conduct the first runtime analysis of this algorithm on a multimodal problem class, the jump functions benchmark.
For the isolated problem of leaving the local optimum of jump functions, we determine provably optimal parameters that lead to a runtime of $(n/k)^{k/2} e^{\Theta(k)}$.
arXiv Detail & Related papers (2020-04-14T17:54:12Z) - Agnostic Q-learning with Function Approximation in Deterministic
Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$ trajectories.
arXiv Detail & Related papers (2020-02-17T18:41:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.