Directed particle swarm optimization with Gaussian-process-based
function forecasting
- URL: http://arxiv.org/abs/2102.04172v2
- Date: Tue, 9 Feb 2021 08:31:07 GMT
- Title: Directed particle swarm optimization with Gaussian-process-based
function forecasting
- Authors: Johannes Jakubik, Adrian Binding, Stefan Feuerriegel
- Abstract summary: Particle swarm optimization (PSO) is an iterative search method that moves a set of candidate solutions around a search space towards the best known global and local solutions with randomized step lengths.
We show that our algorithm attains desirable properties for exploratory and exploitative behavior.
- Score: 15.733136147164032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Particle swarm optimization (PSO) is an iterative search method that moves a
set of candidate solutions around a search space towards the best known global
and local solutions with randomized step lengths. PSO frequently accelerates
optimization in practical applications, where gradients are not available and
function evaluations are expensive. Yet the traditional PSO algorithm ignores the
knowledge about the objective function that individual particles could have
gained from their observations. Hence, we draw upon concepts from
Bayesian optimization and introduce a stochastic surrogate model of the
objective function. That is, we fit a Gaussian process to past evaluations of
the objective function, forecast its shape and then adapt the particle
movements based on it. Our computational experiments demonstrate that baseline
implementations of PSO (i.e., SPSO2011) are outperformed. Furthermore, compared
to state-of-the-art surrogate-assisted evolutionary algorithms, we achieve
substantial performance improvements on several popular benchmark functions.
Overall, we find that our algorithm attains desirable properties for
exploratory and exploitative behavior.
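The abstract above suggests a compact sketch. The following is a minimal, hedged Python illustration, not the authors' exact algorithm: a standard PSO velocity update with one extra pull toward the minimizer forecast by a Gaussian process fit to all past evaluations. The surrogate weight c3, the random candidate grid, and all constants are assumptions for illustration.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_pso(f, dim=2, n_particles=12, iters=40,
           w=0.7, c1=1.5, c2=1.5, c3=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    V = np.zeros_like(X)                             # particle velocities
    pbest = X.copy()                                 # per-particle best positions
    pval = np.array([f(x) for x in X])               # per-particle best values
    hist_X, hist_y = list(X), list(pval)             # all past evaluations
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(iters):
        # Fit the surrogate to past evaluations and forecast its minimizer
        # on a random candidate set (the "forecast" step; assumed design).
        gp.fit(np.asarray(hist_X), np.asarray(hist_y))
        cand = rng.uniform(-5.0, 5.0, (256, dim))
        sbest = cand[np.argmin(gp.predict(cand))]
        gbest = pbest[np.argmin(pval)]
        r1, r2, r3 = rng.random((3, n_particles, 1))
        # Vanilla PSO update plus a pull toward the surrogate's minimizer.
        V = (w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
             + c3 * r3 * (sbest - X))
        X = X + V
        vals = np.array([f(x) for x in X])
        better = vals < pval
        pbest[better], pval[better] = X[better], vals[better]
        hist_X += list(X)
        hist_y += list(vals)
    return pbest[np.argmin(pval)], pval.min()

best_x, best_val = gp_pso(lambda x: np.sum(x ** 2))  # sphere test function
```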
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of the corresponding invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
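As a rough illustration of the invariance idea in this entry, the sketch below uses the standard group-averaged kernel construction; the paper's specific kernels and information-gain bound are not reproduced here.
```python
import numpy as np

def rbf(x, y, ls=1.0):
    return np.exp(-0.5 * np.sum((x - y) ** 2) / ls ** 2)

def invariant_kernel(x, y, group):
    # Average the base kernel over all pairs of group transformations,
    # which makes the kernel constant on group orbits.
    return np.mean([rbf(g(x), h(y)) for g in group for h in group])

# Example: an objective invariant to sign flips of the input.
group = [lambda v: v, lambda v: -v]
x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
assert np.isclose(invariant_kernel(x, y, group), invariant_kernel(-x, y, group))
```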
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB, but the additional exploration step facilitates their convergence.
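In the spirit of this summary, a hedged sketch of GP-UCB with an occasional random exploration step; the exploration schedule (a fixed eps here) and all constants are assumptions, not the paper's algorithm.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_ucb_explore(f, lo, hi, n_iter=25, beta=2.0, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (3, 1))                  # small initial design
    y = np.array([f(x[0]) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(lo, hi, (256, 1))
        if rng.random() < eps:
            # Random exploration step to keep the surrogate accurate.
            x_next = cand[rng.integers(len(cand))]
        else:
            # Classical GP-UCB step on the candidate set.
            mu, sd = gp.predict(cand, return_std=True)
            x_next = cand[np.argmax(mu + np.sqrt(beta) * sd)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    return X[np.argmax(y)][0], y.max()

best_x, best_y = gp_ucb_explore(lambda x: -(x - 0.3) ** 2, 0.0, 1.0)
```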
arXiv Detail & Related papers (2024-01-30T14:16:06Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
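For reference, the classical NCE objective named in this summary reduces to a logistic loss on the log-density ratio; a minimal sketch follows (the paper's compositional reformulation of the negative log-likelihood is not shown).
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nce_loss(log_phi_data, log_q_data, log_phi_noise, log_q_noise):
    # Logistic loss that classifies real data against artificial noise
    # using the log-density ratio of the unnormalized model phi and the
    # known noise density q.
    p_data = sigmoid(log_phi_data - log_q_data)      # "real" posterior on data
    p_noise = sigmoid(log_phi_noise - log_q_noise)   # "real" posterior on noise
    return -(np.mean(np.log(p_data)) + np.mean(np.log(1.0 - p_noise)))
```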
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- PAO: A general particle swarm algorithm with exact dynamics and closed-form transition densities [0.0]
Particle swarm optimisation (PSO) approaches have proven to be highly effective in a number of application areas.
In this work, a highly general, interpretable variant of the PSO algorithm -- the particle attractor algorithm (PAO) -- is proposed.
arXiv Detail & Related papers (2023-04-28T16:19:27Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
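A schematic of how an LLA predictive could feed a UCB acquisition: here f_map denotes the network outputs at candidate points, J the output-weight Jacobian, and H_inv the inverse Hessian (or Gauss-Newton approximation) of the loss; all names are assumptions, not the paper's notation.
```python
import numpy as np

def lla_ucb(f_map, J, H_inv, beta=2.0):
    # Linearized-Laplace predictive: the network is linearized at the MAP
    # weights, so outputs are Gaussian with covariance J H^{-1} J^T.
    mean = f_map                      # network outputs at the MAP weights
    cov = J @ H_inv @ J.T             # uncertainty propagated from weights
    return mean + np.sqrt(beta * np.diag(cov))
```
In practice J would come from automatic differentiation of the network outputs with respect to the weights.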
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- A Particle-based Sparse Gaussian Process Optimizer [5.672919245950197]
We present a new swarm-based framework utilizing the underlying dynamical process of descent.
The biggest advantage of this approach is greater exploration around the current state before deciding on a descent direction.
arXiv Detail & Related papers (2022-11-26T09:06:15Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work focuses on zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of iteration complexity and function query cost.
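A generic coordinate-wise ZO gradient estimator with importance sampling, in the spirit of this summary; the sampling distribution and constants are assumed, not the paper's scheme.
```python
import numpy as np

def zo_gradient(f, x, probs, mu=1e-3, k=4, rng=None):
    # Two-point zeroth-order gradient estimate over k coordinates drawn by
    # importance sampling; sampling with replacement and reweighting by
    # 1 / (k * probs[i]) keeps the estimate unbiased.
    if rng is None:
        rng = np.random.default_rng(0)
    g = np.zeros_like(x)
    idx = rng.choice(len(x), size=k, replace=True, p=probs)
    for i in idx:
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] += (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu) / (k * probs[i])
    return g

x = np.array([1.0, -2.0, 3.0, 0.5])
probs = np.abs(x) / np.abs(x).sum()   # favor large coordinates (assumed heuristic)
grad = zo_gradient(lambda v: np.sum(v ** 2), x, probs)
```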
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
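A toy version of the loop described in this entry, on a blind function-matching task with functions discretized on a grid; random search stands in for the inner Bayesian optimisation step, and all sizes are assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 50)
# Squared-exponential prior covariance on the grid (the experimenter's GP).
K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 0.1 ** 2)
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(grid)))

target = np.sin(6.0 * grid)                        # toy matching target
objective = lambda f: -np.mean((f - target) ** 2)

origin, d, n_cand = np.zeros_like(grid), 3, 200
for _ in range(10):
    basis = L @ rng.standard_normal((len(grid), d))  # d GP draws span the subspace
    coeffs = rng.standard_normal((n_cand, d))        # random search stands in for BO
    cands = np.vstack([origin, origin + coeffs @ basis.T])  # keep origin as fallback
    origin = cands[np.argmax([objective(f) for f in cands])]
    # The best solution found seeds (is the origin of) the next subspace.
print("final matching error:", -objective(origin))
```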
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- A Global Stochastic Optimization Particle Filter Algorithm [0.0]
We introduce a new algorithm, G-PFSO, for maximizing the expected log-likelihood in situations where the objective function is multi-modal.
With high probability, G-PFSO finds the global maximizer of the objective function and converges to it at the optimal rate.
arXiv Detail & Related papers (2020-07-09T14:17:43Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- Composition of kernel and acquisition functions for High Dimensional Bayesian Optimization [0.1749935196721634]
We map the additivity of the objective function into both the kernel and the acquisition function of the Bayesian Optimization.
This approach makes the learning/updating of the probabilistic surrogate model more efficient.
Results are presented for a real-life application, namely the control of pumps in urban water distribution systems.
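A generic additive-kernel construction consistent with mapping additivity into the kernel, as in this entry; the variable grouping is assumed, and the paper's matching additive acquisition function is not reproduced.
```python
import numpy as np

def rbf_group(X, Y, ls=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def additive_kernel(X, Y, groups):
    # k(x, y) = sum over groups g of a low-dimensional RBF on x[g], y[g];
    # each summand can be modelled and optimized in its own low-dim space.
    return sum(rbf_group(X[:, g], Y[:, g]) for g in groups)

X = np.random.default_rng(0).standard_normal((5, 4))
K = additive_kernel(X, X, groups=[[0, 1], [2, 3]])   # assumed decomposition
```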
arXiv Detail & Related papers (2020-03-09T15:45:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.