Improved Runtime Results for Simple Randomised Search Heuristics on
Linear Functions with a Uniform Constraint
- URL: http://arxiv.org/abs/2010.10885v1
- Date: Wed, 21 Oct 2020 10:42:39 GMT
- Title: Improved Runtime Results for Simple Randomised Search Heuristics on
Linear Functions with a Uniform Constraint
- Authors: Frank Neumann and Mojgan Pourhassan and Carsten Witt
- Abstract summary: We study the class of linear functions under uniform constraint and investigate the expected optimisation time of Randomised Local Search (RLS).
We prove a tight bound of $\Theta(n^2)$ for RLS and improve the previously best known upper bound for the (1+1) EA.
- Score: 11.228244128564512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last decade remarkable progress has been made in development of
suitable proof techniques for analysing randomised search heuristics. The
theoretical investigation of these algorithms on classes of functions is
essential to the understanding of the underlying stochastic process. Linear
functions have been traditionally studied in this area resulting in tight
bounds on the expected optimisation time of simple randomised search algorithms
for this class of problems. Recently, the constrained version of this problem
has gained attention and some theoretical results have also been obtained on
this class of problems. In this paper we study the class of linear functions
under uniform constraint and investigate the expected optimisation time of
Randomised Local Search (RLS) and a simple evolutionary algorithm called (1+1)
EA. We prove a tight bound of $\Theta(n^2)$ for RLS and improve the previously
best known upper bound of (1+1) EA from $O(n^2 \log (Bw_{\max}))$ to $O(n^2\log
B)$ in expectation and to $O(n^2 \log n)$ with high probability, where
$w_{\max}$ and $B$ are the maximum weight of the linear objective function and
the bound of the uniform constraint, respectively. Also, we obtain a tight
bound of $\Theta(n^2)$ for the (1+1) EA on a special class of instances. We
complement our theoretical studies by experimental investigations that consider
different values of $B$ and also higher mutation rates that reflect the fact
that $2$-bit flips are crucial for dealing with the uniform constraint.
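The abstract above analyses RLS with 1- and 2-bit flips on a linear objective under a uniform (cardinality) constraint. A minimal Python sketch of such an RLS variant is given below; the function name, the rejection-based constraint handling, and the 50/50 split between 1-bit and 2-bit flips are illustrative assumptions, not the paper's exact algorithm definition.

```python
import random

def rls_uniform_constraint(weights, B, max_iters=100_000, seed=0):
    """Randomised Local Search maximising the linear function
    sum(w_i * x_i) subject to the uniform constraint sum(x_i) <= B.

    Mutation flips either a single bit or two distinct bits; the
    2-bit flips allow exchanging a selected item for a better one
    while staying on the constraint boundary. Infeasible offspring
    are rejected (an assumed constraint-handling choice).
    """
    rng = random.Random(seed)
    n = len(weights)
    x = [0] * n                      # the all-zeros string is always feasible
    value = 0
    for _ in range(max_iters):
        y = x[:]
        if rng.random() < 0.5:       # 1-bit flip
            i = rng.randrange(n)
            y[i] = 1 - y[i]
        else:                        # 2-bit flip at two distinct positions
            i, j = rng.sample(range(n), 2)
            y[i], y[j] = 1 - y[i], 1 - y[j]
        if sum(y) > B:               # reject infeasible offspring
            continue
        new_value = sum(w for w, bit in zip(weights, y) if bit)
        if new_value >= value:       # elitist acceptance: keep if not worse
            x, value = y, new_value
    return x, value
```

With weights `[5, 3, 2, 1]` and `B = 2`, the optimum selects the two largest weights for a value of 8; the 2-bit swap moves are what let the search exchange a selected item once the budget `B` is exhausted.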
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Gradient-free optimization of highly smooth functions: improved analysis
and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
arXiv Detail & Related papers (2023-06-03T17:05:13Z) - Stochastic Nonsmooth Convex Optimization with Heavy-Tailed Noises:
High-Probability Bound, In-Expectation Rate and Initial Distance Adaptation [22.758674468435302]
In a heavy-tailed noise regime, the difference between the stochastic gradient and the true gradient is assumed to have a finite $p$-th moment.
This paper provides a comprehensive analysis of nonsmooth convex optimization with heavy-tailed noises.
arXiv Detail & Related papers (2023-03-22T03:05:28Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, statement (a) is the case.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with
Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic Gradient Langevin Dynamics is one of the most fundamental algorithms for solving nonconvex optimization problems.
In this paper, we show two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z) - Focused Jump-and-Repair Constraint Handling for Fixed-Parameter
Tractable Graph Problems Closed Under Induced Subgraphs [3.495114525631289]
We investigate a (1+1)EA equipped with a tailored jump-and-repair operation that can be used to probabilistically repair infeasible offspring in graph problems.
arXiv Detail & Related papers (2022-03-25T19:36:02Z) - A stochastic linearized proximal method of multipliers for convex
stochastic optimization with expectation constraints [8.133190610747974]
We present a computable approximation type algorithm, namely the linearized proximal convex method of multipliers.
Some preliminary numerical results demonstrate the performance of the proposed algorithm.
arXiv Detail & Related papers (2021-06-22T07:24:17Z) - Randomized Stochastic Variance-Reduced Methods for Stochastic Bilevel
Optimization [62.87181271021217]
We consider nonconvex stochastic bilevel optimization (SBO) problems that have many applications in machine learning.
This paper proposes fast randomized algorithms for nonconvex SBO problems.
arXiv Detail & Related papers (2021-05-05T18:28:42Z) - Analysis of Evolutionary Algorithms on Fitness Function with
Time-linkage Property [27.660240128423176]
In real-world applications, many optimization problems have the time-linkage property, that is, the objective function value relies on the current solution as well as the historical solutions.
This paper takes the first step to rigorously analyze evolutionary algorithms for time-linkage functions.
arXiv Detail & Related papers (2020-04-26T07:56:40Z) - Hardness of Random Optimization Problems for Boolean Circuits,
Low-Degree Polynomials, and Langevin Dynamics [78.46689176407936]
We show that families of algorithms fail to produce nearly optimal solutions with high probability.
For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory.
arXiv Detail & Related papers (2020-04-25T05:45:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.