An Adaptive Re-evaluation Method for Evolution Strategy under Additive Noise
- URL: http://arxiv.org/abs/2409.16757v1
- Date: Wed, 25 Sep 2024 09:10:21 GMT
- Title: An Adaptive Re-evaluation Method for Evolution Strategy under Additive Noise
- Authors: Catalin-Viorel Dinu, Yash J. Patel, Xavier Bonet-Monroig, Hao Wang
- Abstract summary: We propose a novel method to adaptively choose the optimal re-evaluation number for function values corrupted by additive Gaussian white noise.
We experimentally compare our method to the state-of-the-art noise-handling methods for CMA-ES on a set of artificial test functions.
- Score: 3.92625489118339
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is one of the most advanced algorithms in numerical black-box optimization. For noisy objective functions, several approaches have been proposed to mitigate the noise, e.g., re-evaluating the same solution or adapting the population size. In this paper, we devise a novel method to adaptively choose the optimal re-evaluation number for function values corrupted by additive Gaussian white noise. We derive a theoretical lower bound on the expected improvement achieved in one iteration of CMA-ES, given an estimate of the noise level and the Lipschitz constant of the function's gradient. Maximizing this lower bound yields a simple expression for the optimal re-evaluation number. We experimentally compare our method to state-of-the-art noise-handling methods for CMA-ES on a set of artificial test functions across various noise levels, optimization budgets, and dimensionalities. Our method shows significant advantages in the probability of hitting near-optimal function values.
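As a rough illustration of the mechanism (assuming only that m re-evaluations of a value corrupted by additive Gaussian noise are averaged), the sketch below shows the variance-reduction step and a placeholder rule for choosing m; `choose_reevaluations` is hypothetical and does not reproduce the paper's closed-form expression:

```python
import numpy as np

def averaged_value(f, x, m, rng):
    """Mean of m re-evaluations of a noisy objective at x.

    With additive Gaussian white noise of variance sigma^2, the average
    is an unbiased estimate of the true value with variance sigma^2 / m.
    """
    return float(np.mean([f(x, rng) for _ in range(m)]))

def choose_reevaluations(sigma_hat, grad_lipschitz, step_scale, m_max=256):
    """Hypothetical rule for picking the re-evaluation number m.

    The paper derives the optimal m by maximizing a lower bound on the
    expected one-iteration improvement of CMA-ES, given the estimated
    noise level sigma_hat and the Lipschitz constant of the gradient.
    The expression below is NOT the paper's formula; it is a stand-in
    that merely grows as noise dominates the local signal scale.
    """
    signal = grad_lipschitz * step_scale ** 2
    m = int(np.ceil((sigma_hat / max(signal, 1e-12)) ** 2))
    return max(1, min(m, m_max))
```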
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both deterministic (adversarial) and stochastic evaluation noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
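A minimal sketch of the generic two-point zeroth-order gradient estimator that such gradient-free methods build on; this is a textbook construction under standard smoothness assumptions, not the paper's accelerated algorithm:

```python
import numpy as np

def zo_two_point_grad(f, x, h, rng):
    """Two-point zeroth-order gradient estimate along a random direction.

    g = d * (f(x + h*u) - f(x - h*u)) / (2*h) * u with u uniform on the
    unit sphere; this is an unbiased estimate of the gradient of a
    smoothed version of f.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d * (f(x + h * u) - f(x - h * u)) / (2.0 * h) * u

def zo_gradient_descent(f, x0, steps=2000, lr=0.02, h=1e-3, seed=0):
    """Plain ZO descent loop; no acceleration, unlike the paper."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x -= lr * zo_two_point_grad(f, x, h, rng)
    return x

# Example: recover the minimizer of a simple convex quadratic.
x_min = zo_gradient_descent(lambda x: float(np.sum((x - 1.0) ** 2)), np.zeros(5))
```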
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- CMA-ES with Adaptive Reevaluation for Multiplicative Noise [1.3108652488669732]
We develop the reevaluation adaptation CMA-ES (RA-CMA-ES), which computes two update directions, each using half of the evaluations, and adapts the number of reevaluations based on the estimated correlation between those two directions.
Numerical simulations show that RA-CMA-ES outperforms the comparison method under multiplicative noise.
arXiv Detail & Related papers (2024-05-19T07:42:10Z)
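A schematic of the correlation idea, assuming the per-sample update vectors are available as rows of an array; the target, factor, and bounds are invented for illustration and are not the RA-CMA-ES constants:

```python
import numpy as np

def half_sample_directions(update_vectors):
    """Average the two halves of per-sample update vectors separately.

    update_vectors: array of shape (2*k, d); returns the two half-sample
    mean directions whose agreement indicates how much of the update is
    signal rather than noise.
    """
    k = update_vectors.shape[0] // 2
    return update_vectors[:k].mean(axis=0), update_vectors[k:].mean(axis=0)

def adapt_reevaluations(m, d1, d2, corr_target=0.5, factor=1.2, m_max=1024):
    """Raise m when the two directions disagree, lower it when they agree.

    Cosine similarity stands in for the estimated correlation here.
    """
    corr = float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12))
    if corr < corr_target:
        return min(int(np.ceil(m * factor)), m_max), corr
    return max(int(m / factor), 1), corr
```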
- Dynamic Anisotropic Smoothing for Noisy Derivative-Free Optimization [0.0]
We propose a novel algorithm that extends the methods of ball smoothing and Gaussian smoothing for noisy derivative-free optimization.
The algorithm dynamically adapts the shape of the smoothing kernel to approximate the Hessian of the objective function around a local optimum.
arXiv Detail & Related papers (2024-05-02T21:04:20Z)
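A sketch of anisotropic Gaussian smoothing using the standard identity for Gaussian-smoothed gradients; how `cov_sqrt` would be adapted toward local curvature is the paper's contribution and is not implemented here:

```python
import numpy as np

def anisotropic_smoothed_grad(f, x, cov_sqrt, n_samples, rng):
    """Gradient estimate under anisotropic Gaussian smoothing.

    Perturbations u = cov_sqrt @ z with z ~ N(0, I) give the kernel its
    shape. The standard Gaussian-smoothing identity
        grad ~= E[(f(x+u) - f(x-u)) / 2 * Sigma^{-1} u]
    reduces, in z-coordinates, to the solve at the end.
    """
    d = x.size
    g = np.zeros(d)
    for _ in range(n_samples):
        z = rng.standard_normal(d)
        u = cov_sqrt @ z
        g += (f(x + u) - f(x - u)) / 2.0 * z
    # (A^T)^{-1} g_bar with A = cov_sqrt, since Sigma^{-1} A z = A^{-T} z.
    return np.linalg.solve(cov_sqrt.T, g / n_samples)
```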
- Adaptive Noise Covariance Estimation under Colored Noise using Dynamic Expectation Maximization [1.550120821358415]
We introduce a novel brain-inspired algorithm that accurately estimates the noise covariance matrix (NCM) for dynamic systems subjected to colored noise.
We mathematically prove that our NCM estimator converges to the global optimum of the underlying free-energy objective.
Notably, we show that our method outperforms the best baseline (Variational Bayes) in joint noise and state estimation under highly colored noise.
arXiv Detail & Related papers (2023-08-15T14:21:53Z)
- Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization [1.7513645771137178]
We consider unconstrained optimization problems with no available gradient information.
We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a simulation function using finite differences within a common random number framework.
We develop modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the approximations and provide global convergence results to the neighborhood of the optimal solution.
arXiv Detail & Related papers (2021-09-24T21:49:25Z)
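A minimal sketch of central finite differences under common random numbers; `sim(x, seed)` is a hypothetical stochastic simulation interface, and reusing the seed on both sides of each difference is the CRN trick the method relies on:

```python
import numpy as np

def fd_gradient_crn(sim, x, h, seed):
    """Central finite differences under common random numbers (CRN).

    Passing the same seed to both sides of each difference makes the two
    runs share their random stream, so much of the noise cancels.
    """
    d = x.size
    g = np.empty(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        g[i] = (sim(x + e, seed) - sim(x - e, seed)) / (2.0 * h)
    return g

# Example of a CRN-respecting simulation interface (illustrative).
def sim(x, seed):
    rng = np.random.default_rng(seed)
    return float(np.sum(x ** 2) + 0.1 * rng.standard_normal())
```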
- A Comparison of Various Classical Optimizers for a Variational Quantum Linear Solver [0.0]
Variational Hybrid Quantum Classical Algorithms (VHQCAs) are a class of quantum algorithms intended to run on noisy quantum devices.
These algorithms employ a parameterized quantum circuit (ansatz) and a quantum-classical feedback loop.
A classical device is used to optimize the parameters in order to minimize a cost function that can be computed far more efficiently on a quantum device.
arXiv Detail & Related papers (2021-06-16T10:40:00Z)
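A toy version of the quantum-classical feedback loop in which a noisy classical surrogate stands in for the quantum device; COBYLA is one of the gradient-free optimizers such comparisons commonly include, and nothing here reproduces the paper's VQLS circuits:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def surrogate_cost(theta):
    """Stand-in for a cost measured on a quantum device.

    In a VHQCA, theta parameterizes the ansatz circuit and the value
    would come from repeated measurements; here a quadratic bowl plus
    shot-like noise lets the feedback loop run end to end classically.
    """
    return float(np.sum((theta - 0.5) ** 2) + 0.01 * rng.standard_normal())

# The classical half of the loop: a gradient-free optimizer proposes new
# parameters based on past cost values.
result = minimize(surrogate_cost, x0=np.zeros(4), method="COBYLA",
                  options={"maxiter": 200})
```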
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that, with a careful design of coordinate importance sampling, the proposed ZO optimization method is efficient in both iteration complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
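A hedged sketch of a coordinate-wise ZO estimator with importance sampling; the unbiasedness reweighting by 1/p_i is standard, while the actual sampling design in the paper is not reproduced:

```python
import numpy as np

def zo_coordinate_grad(f, x, h, probs, rng):
    """One-coordinate ZO gradient estimate with importance sampling.

    Coordinate i is drawn with probability probs[i] and the finite
    difference along e_i is reweighted by 1/probs[i], which keeps the
    estimator unbiased for the full gradient.
    """
    d = x.size
    i = int(rng.choice(d, p=probs))
    e = np.zeros(d)
    e[i] = h
    g = np.zeros(d)
    g[i] = (f(x + e) - f(x - e)) / (2.0 * h) / probs[i]
    return g
```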
- Logistic Q-Learning [87.00813469969167]
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
The main feature of our algorithm is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error.
arXiv Detail & Related papers (2020-10-21T17:14:31Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
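A sketch of the kernel-weighted two-point estimator this line of work builds on; K(r) = 3r suits smoothness order up to two, and the paper's kernels for higher orders (built from Legendre polynomials) are not reproduced here:

```python
import numpy as np

def kernel_zo_grad(f, x, h, rng, kernel=lambda r: 3.0 * r):
    """Two-point estimator weighted by a smoothing kernel.

    u is uniform on the unit sphere, r uniform on [-1, 1]; the kernel K
    is chosen so that higher-order Taylor terms cancel, which is how
    higher-order smoothness is exploited.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    r = rng.uniform(-1.0, 1.0)
    return (d / (2.0 * h)) * (f(x + h * r * u) - f(x - h * r * u)) * kernel(r) * u
```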
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
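For reference, a standard definition of the Moreau envelope and the stationarity measure it induces (textbook form, not quoted from the paper):

```latex
% Moreau envelope of f with parameter lambda > 0, and its gradient norm
% as the stationarity measure for weakly convex problems.
\varphi_{\lambda}(x) = \min_{y} \Big\{ f(y) + \tfrac{1}{2\lambda}\|y - x\|^{2} \Big\},
\qquad
\|\nabla \varphi_{\lambda}(x)\| = \tfrac{1}{\lambda}\,\|x - \hat{x}\|,
\quad
\hat{x} = \arg\min_{y} \Big\{ f(y) + \tfrac{1}{2\lambda}\|y - x\|^{2} \Big\}.
```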