A Simplified Run Time Analysis of the Univariate Marginal Distribution
Algorithm on LeadingOnes
- URL: http://arxiv.org/abs/2004.04978v1
- Date: Fri, 10 Apr 2020 10:20:05 GMT
- Title: A Simplified Run Time Analysis of the Univariate Marginal Distribution
Algorithm on LeadingOnes
- Authors: Benjamin Doerr, Martin Krejca
- Abstract summary: We prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA).
We show further run time gains from small selection rates.
Under similar assumptions, we prove that a bound that matches our upper bound up to constant factors holds with high probability.
- Score: 9.853329403413701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With elementary means, we prove a stronger run time guarantee for the
univariate marginal distribution algorithm (UMDA) optimizing the LeadingOnes
benchmark function in the desirable regime with low genetic drift. If the
population size is at least quasilinear, then, with high probability, the UMDA
samples the optimum within a number of iterations that is linear in the problem
size divided by the logarithm of the UMDA's selection rate. This improves over
the previous guarantee, obtained by Dang and Lehre (2015) via the deep
level-based population method, both in terms of the run time and by
demonstrating further run time gains from small selection rates. With similar
arguments as in our upper-bound analysis, we also obtain the first lower bound
for this problem. Under similar assumptions, we prove that a bound that matches
our upper bound up to constant factors holds with high probability.
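For concreteness, here is a minimal sketch (not the authors' implementation) of the UMDA with truncation selection on LeadingOnes, assuming the standard frequency borders 1/n and 1 - 1/n; the names `lam` (population size λ) and `mu` (number of selected individuals μ, so the selection rate is μ/λ) are our own:

```python
import numpy as np

def leading_ones(x):
    """Length of the longest prefix of ones."""
    zeros = np.flatnonzero(x == 0)
    return len(x) if zeros.size == 0 else int(zeros[0])

def umda_leading_ones(n, lam, mu, seed=None, max_iters=100_000):
    """Sample lam bit strings from the product distribution p, keep the mu
    fittest, set p to their marginal frequencies, and clamp p to the
    borders [1/n, 1 - 1/n]. Returns iterations until the optimum is sampled."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                         # uniform initial frequencies
    for t in range(1, max_iters + 1):
        pop = (rng.random((lam, n)) < p).astype(int)
        fits = np.array([leading_ones(ind) for ind in pop])
        if fits.max() == n:                     # optimum sampled
            return t
        selected = pop[np.argsort(fits)[-mu:]]  # truncation selection
        p = np.clip(selected.mean(axis=0), 1 / n, 1 - 1 / n)
    return max_iters
```

Under the abstract's assumptions (population size at least quasilinear in n, low genetic drift), the number of iterations until this loop samples the all-ones string is, with high probability, on the order of n / log(λ/μ).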
Related papers
- On the Estimation Performance of Generalized Power Method for
Heteroscedastic Probabilistic PCA [21.9585534723895]
We show that, given a suitable initialization, the estimation error of the iterates generated by the GPM decreases at least geometrically down to a threshold determined by the residual part of a certain decomposition.
We demonstrate the superior estimation performance of this approach under sub-Gaussian noise settings.
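The blurb is easier to parse against the generic power-iteration template, sketched below with a hypothetical random data matrix; the paper's GPM is a heteroscedasticity-aware refinement of this template, not this code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plain power iteration for the top-k principal subspace of a sample covariance.
n, d, k = 500, 50, 3
X = rng.normal(size=(n, d))                    # hypothetical data matrix
S = X.T @ X / n                                # sample covariance
Q, _ = np.linalg.qr(rng.normal(size=(d, k)))   # "suitable initialization"
for _ in range(100):
    Q, _ = np.linalg.qr(S @ Q)                 # power step + re-orthonormalization
# The estimation error of such iterates contracts at least geometrically,
# down to a threshold set by the noise (the "residual" part of the analysis).
```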
arXiv Detail & Related papers (2023-12-06T11:41:17Z)
- Symmetric Mean-field Langevin Dynamics for Distributional Minimax
Problems [78.96969465641024]
We extend mean-field Langevin dynamics to minimax optimization over probability distributions for the first time with symmetric and provably convergent updates.
We also study time and particle discretization regimes and prove a new uniform-in-time propagation of chaos result.
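As a rough illustration of what symmetric updates mean here, the toy particle discretization below runs simultaneous noisy descent/ascent for the two players on a hypothetical bilinear payoff f(x, y) = xy; every parameter choice is ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each player's probability distribution is represented by a particle cloud.
n, eta, tau = 256, 0.05, 0.01        # particles per player, step size, entropy weight
xs = rng.normal(size=n)              # min player's particles
ys = rng.normal(size=n)              # max player's particles

for _ in range(1000):
    gx = ys.mean()                   # df/dx of f(x, y) = x*y, averaged over opponents
    gy = xs.mean()                   # df/dy, averaged over opponents
    # Symmetric (simultaneous) Langevin updates: descent in x, ascent in y.
    xs = xs - eta * gx + np.sqrt(2 * eta * tau) * rng.normal(size=n)
    ys = ys + eta * gy + np.sqrt(2 * eta * tau) * rng.normal(size=n)
# The clouds approximate the entropy-regularized mixed equilibrium.
```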
arXiv Detail & Related papers (2023-12-02T13:01:29Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
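For background, here is a generic AIS sketch with a fixed linear schedule and a hypothetical double-well target; the paper's contribution is to replace such a fixed schedule with one that enforces constant-rate progress under an $\alpha$-divergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def U(x):                          # hypothetical target potential (double well)
    return (x**2 - 1.0)**2

def log_bridge(x, beta):           # geometric bridge between N(0, 1) and exp(-U)
    return (1.0 - beta) * (-0.5 * x**2) + beta * (-U(x))

n, betas = 1000, np.linspace(0.0, 1.0, 50)
x = rng.normal(size=n)             # exact samples from the base N(0, 1)
logw = np.zeros(n)                 # log importance weights

for b_prev, b in zip(betas[:-1], betas[1:]):
    logw += log_bridge(x, b) - log_bridge(x, b_prev)  # incremental weight update
    # One Metropolis step targeting the current bridge to move the particles.
    prop = x + 0.3 * rng.normal(size=n)
    accept = np.log(rng.random(n)) < log_bridge(prop, b) - log_bridge(x, b)
    x = np.where(accept, prop, x)

# Log normalizing-constant ratio, estimated via a stable log-mean-exp of weights.
log_Z = logw.max() + np.log(np.mean(np.exp(logw - logw.max())))
```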
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC
Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over restricted false-positive-rate (and, for TPAUC, also true-positive-rate) ranges.
Most existing methods can only optimize the PAUC approximately, leading to uncontrollable bias.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization (DRO).
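To make OPAUC concrete, the sketch below computes the normalized one-way partial AUC over FPR in [0, max_fpr] by pairwise counting; this is an illustrative exact evaluation, not the paper's DRO-based estimator.

```python
import numpy as np

def one_way_partial_auc(scores_pos, scores_neg, max_fpr=0.1):
    """Fraction of (positive, hard-negative) pairs ranked correctly, where the
    hard negatives are those falling in the FPR range [0, max_fpr]."""
    scores_pos = np.asarray(scores_pos)
    scores_neg = np.sort(np.asarray(scores_neg))[::-1]   # highest-scoring first
    k = max(1, int(np.floor(max_fpr * len(scores_neg))))
    hard_neg = scores_neg[:k]                            # negatives inside the range
    return (scores_pos[:, None] > hard_neg[None, :]).mean()

# Example: a perfect classifier attains 1.0.
print(one_way_partial_auc([0.9, 0.8, 0.7], [0.6, 0.5, 0.4, 0.3], max_fpr=0.5))
```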
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Utilising the CLT Structure in Stochastic Gradient based Sampling:
Improved Analysis and Faster Algorithms [14.174806471635403]
We consider approximations of sampling algorithms, such as Stochastic Gradient Langevin Dynamics (SGLD) and the Random Batch Method (RBM) for Interacting Particle Dynamics (IPD).
We observe that the noise introduced by the approximation is nearly Gaussian due to the Central Limit Theorem (CLT) while the driving Brownian motion is exactly Gaussian.
We harness this structure to absorb the approximation error inside the diffusion process, and obtain improved convergence guarantees for these algorithms.
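The CLT observation is easy to see in code: in this SGLD sketch (a toy Gaussian model of our own devising, not the paper's setting), the minibatch gradient carries approximately Gaussian noise, while the injected Brownian increment is exactly Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

data = rng.normal(loc=2.0, size=10_000)        # hypothetical observations x_i ~ N(theta, 1)

def grad_U_minibatch(theta, batch):
    # Unbiased minibatch estimate of the full negative log-posterior gradient
    # under the prior theta ~ N(0, 1): theta + sum_i (theta - x_i).
    return theta + len(data) * (theta - batch.mean())

theta, eta, batch_size = 0.0, 1e-5, 64
samples = []
for _ in range(5_000):
    batch = rng.choice(data, size=batch_size, replace=False)
    theta += -eta * grad_U_minibatch(theta, batch) \
             + np.sqrt(2 * eta) * rng.normal()  # exactly Gaussian Brownian increment
    samples.append(theta)
# `samples` approximates the posterior over theta (concentrated near 2).
```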
arXiv Detail & Related papers (2022-06-08T10:17:40Z)
- Accelerated and instance-optimal policy evaluation with linear function
approximation [17.995515643150657]
Existing algorithms fail to match at least one of the two lower bounds for this problem.
We develop an accelerated, variance-reduced fast temporal difference algorithm that simultaneously matches both lower bounds and attains a strong notion of instance-optimality.
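For reference, the baseline being improved is temporal-difference policy evaluation with linear features; a plain TD(0) sketch on a hypothetical random Markov reward process looks as follows (the proposed method adds acceleration and variance reduction on top).

```python
import numpy as np

rng = np.random.default_rng(0)

# Value function approximated as V(s) ~ phi[s] @ w for a fixed policy's chain.
n_states, d, gamma, alpha = 20, 5, 0.9, 0.05
phi = rng.normal(size=(n_states, d))                  # hypothetical feature matrix
P = rng.dirichlet(np.ones(n_states), size=n_states)   # random transition kernel
r = rng.random(n_states)                              # per-state rewards

w = np.zeros(d)
s = 0
for _ in range(50_000):
    s_next = rng.choice(n_states, p=P[s])
    td_error = r[s] + gamma * phi[s_next] @ w - phi[s] @ w
    w += alpha * td_error * phi[s]                    # stochastic semi-gradient step
    s = s_next
# w now parameterizes an approximate value function of the evaluated policy.
```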
arXiv Detail & Related papers (2021-12-24T17:21:04Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with an undesirable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
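Gradient clipping is the key mechanism: capping rare, huge gradients is what makes high-probability guarantees possible under heavy tails. The sketch below shows clipped SGD on a toy quadratic with Student-t noise; the stepsize and clip level are illustrative, not the paper's proposed rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(g, lam):
    """Rescale g to norm at most lam."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else g * (lam / norm)

d, T = 10, 10_000
x, x_star = np.zeros(d), np.ones(d)
for t in range(1, T + 1):
    noise = rng.standard_t(df=2.1, size=d)   # heavy-tailed: variance barely finite
    g = (x - x_star) + noise                 # stochastic gradient of ||x - x_star||^2 / 2
    x -= (1.0 / np.sqrt(t)) * clip(g, lam=10.0)
# x approaches x_star despite occasional extreme gradient noise.
```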
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Many-Objective Estimation of Distribution Optimization Algorithm Based
on WGAN-GP [1.2461503242570644]
Estimation of distribution algorithms (EDAs) can better solve multi-objective optimization problems (MOPs).
We generate the new population with a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP).
arXiv Detail & Related papers (2020-03-16T03:14:59Z)
- An Optimal Multistage Stochastic Gradient Method for Minimax Problems [8.615625517708324]
We study the minimax optimization problem in the smooth and strongly convex-strongly concave setting.
We first analyze the Gradient Descent Ascent (GDA) method with constant stepsize.
We propose a multistage variant of GDA that runs in multiple stages with a particular learning rate decay schedule.
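A minimal sketch of the multistage idea on a toy strongly convex-strongly concave objective f(x, y) = x^2/2 + xy - y^2/2 (saddle point at the origin); the stage lengths and decay factor are illustrative, not the paper's schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

x, y, eta = 5.0, -3.0, 0.2
for stage in range(5):                       # each stage runs GDA at a fixed stepsize
    for _ in range(200):
        gx = (x + y) + 0.1 * rng.normal()    # noisy df/dx
        gy = (x - y) + 0.1 * rng.normal()    # noisy df/dy
        x, y = x - eta * gx, y + eta * gy    # simultaneous descent in x, ascent in y
    eta *= 0.5                               # geometric stepsize decay between stages
print(x, y)                                  # near the saddle point (0, 0)
```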
arXiv Detail & Related papers (2020-02-13T18:01:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.