When Non-Elitism Meets Time-Linkage Problems
- URL: http://arxiv.org/abs/2104.06831v1
- Date: Wed, 14 Apr 2021 13:03:42 GMT
- Title: When Non-Elitism Meets Time-Linkage Problems
- Authors: Weijie Zheng, Qiaozhi Zhang, Huanhuan Chen, Xin Yao
- Abstract summary: We analyze the influence of non-elitism by comparing the performance of the elitist (1+$\lambda$)EA and its non-elitist counterpart (1,$\lambda$)EA.
We prove that with probability $1$, (1,$\lambda$)EA can reach the global optimum and its expected runtime is $O(n^{3+c}\log n)$ with $\lambda=c \log_{\frac{e}{e-1}} n$ for the constant $c\ge 1$.
- Score: 19.798298260257432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world applications have the time-linkage property, and the only
theoretical analysis is recently given by Zheng, et al. (TEVC 2021) on their
proposed time-linkage OneMax problem, OneMax$_{(0,1^n)}$. However, only two
elitist algorithms (1+1)EA and ($\mu$+1)EA are analyzed, and it is unknown
whether the non-elitism mechanism could help to escape the local optima
existing in OneMax$_{(0,1^n)}$. In general, there are few theoretical results
on the benefits of non-elitism in evolutionary algorithms. In this work, we
analyze the influence of non-elitism by comparing the performance of the
elitist (1+$\lambda$)EA and its non-elitist counterpart (1,$\lambda$)EA. We
prove that with probability $1-o(1)$ (1+$\lambda$)EA will get stuck in the
local optima and cannot find the global optimum, but with probability $1$,
(1,$\lambda$)EA can reach the global optimum and its expected runtime is
$O(n^{3+c}\log n)$ with $\lambda=c \log_{\frac{e}{e-1}} n$ for the constant
$c\ge 1$. Noting that a smaller offspring size is helpful for escaping from the
local optima, we further resort to the compact genetic algorithm where only two
individuals are sampled to update the probabilistic model, and prove its
expected runtime of $O(n^3\log n)$. Our computational experiments also verify
the efficiency of the two non-elitist algorithms.
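As an illustrative sketch (not the paper's exact setup), the non-elitist comma selection that the abstract credits with escaping local optima can be written for plain OneMax as follows; the time-linkage variant OneMax$_{(0,1^n)}$ additionally makes the fitness depend on the previous value of the first bit, which this sketch omits:

```python
import random

def one_max(x):
    """Plain OneMax fitness: the number of one-bits.

    The paper's time-linkage variant OneMax_{(0,1^n)} also depends on the
    previous value of the first bit; that dependence is omitted here.
    """
    return sum(x)

def one_comma_lambda_ea(n, lam, max_gens=10_000, seed=0):
    """Non-elitist (1,lambda) EA with standard bit-wise mutation rate 1/n.

    Comma selection replaces the parent by the best of the lambda offspring
    unconditionally, even when that offspring is worse than the parent; this
    is what allows the algorithm to leave local optima that can trap the
    elitist (1+lambda) EA. Returns the number of generations used, or None
    if the budget runs out.
    """
    rng = random.Random(seed)
    parent = [rng.randrange(2) for _ in range(n)]
    for gen in range(1, max_gens + 1):
        # Each offspring flips every bit of the parent independently w.p. 1/n.
        offspring = [
            [1 - b if rng.random() < 1.0 / n else b for b in parent]
            for _ in range(lam)
        ]
        parent = max(offspring, key=one_max)  # parent is always discarded
        if one_max(parent) == n:
            return gen
    return None
```

With an offspring size on the order of $c \log_{\frac{e}{e-1}} n$, as in the paper's runtime bound, the comma strategy still makes steady progress on this toy fitness; the exact generation count depends on the seed.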
Related papers
- Hyper-Heuristics Can Profit From Global Variation Operators [12.774575491521926]
We show that the Move Acceptance Hyper-Heuristic (MAHH) leaves the local optimum of the multimodal CLIFF benchmark with remarkable efficiency.
We also show that replacing the local one-bit mutation operator in the MAHH with the global bit-wise mutation operator, commonly used in EAs, yields a runtime of $\min\{1, O(\frac{e\ln(n)}{m})^m\} \cdot O(n^m)$ on JUMP functions.
arXiv Detail & Related papers (2024-07-19T12:10:05Z) - A Scalable Algorithm for Individually Fair K-means Clustering [77.93955971520549]
We present a scalable algorithm for the individually fair ($p$, $k$)-clustering problem introduced by Jung et al. and Mahabadi et al.
A clustering is then called individually fair if it has centers within distance $\delta(x)$ of $x$ for each $x \in P$.
We show empirically that not only is our algorithm much faster than prior work, but it also produces lower-cost solutions.
arXiv Detail & Related papers (2024-02-09T19:01:48Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - How the Move Acceptance Hyper-Heuristic Copes With Local Optima: Drastic
Differences Between Jumps and Cliffs [6.793248433673384]
We show that the Move Acceptance Hyper-Heuristic (MAHH) leaves the local optimum of the multimodal cliff benchmark with remarkable efficiency.
We also show that the MAHH with the global bit-wise mutation operator instead of the local one-bit operator optimizes jump functions in time.
This suggests that combining several ways to cope with local optima can be a fruitful approach.
arXiv Detail & Related papers (2023-04-20T15:57:33Z) - Near-Optimal Non-Convex Stochastic Optimization under Generalized
Smoothness [21.865728815935665]
Two recent works established the $O(\epsilon^{-3})$ sample complexity to obtain an $O(\epsilon)$-stationary point.
However, both require a large batch size on the order of $\mathrm{poly}(\epsilon^{-1})$, which is not only computationally burdensome but also unsuitable for streaming applications.
In this work, we solve the prior two problems simultaneously by revisiting a simple variant of the STORM algorithm.
arXiv Detail & Related papers (2023-02-13T00:22:28Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, statement (a) is the case.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ approximation.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Optimal Mutation Rates for the $(1+\lambda)$ EA on OneMax [1.0965065178451106]
We extend the analysis of optimal mutation rates to two variants of the OneMax problem.
We compute for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$ which mutation rates minimize the expected running time.
Our results not only provide a lower bound against which common evolutionary approaches can be measured.
arXiv Detail & Related papers (2020-06-20T01:23:14Z) - Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.