DeepHive: A multi-agent reinforcement learning approach for automated
discovery of swarm-based optimization policies
- URL: http://arxiv.org/abs/2304.04751v1
- Date: Wed, 29 Mar 2023 18:08:08 GMT
- Title: DeepHive: A multi-agent reinforcement learning approach for automated
discovery of swarm-based optimization policies
- Authors: Eloghosa Ikponmwoba and Ope Owoyele
- Abstract summary: The state of each agent within the swarm is defined as its current position and function value within a design space.
The proposed approach is tested on various benchmark optimization functions and compared to the performance of other global optimization strategies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an approach for designing swarm-based optimizers for the global
optimization of expensive black-box functions. In the proposed approach, the
problem of finding efficient optimizers is framed as a reinforcement learning
problem, where the goal is to find optimization policies that require few
function evaluations to converge to the global optimum. The state of each agent
within the swarm is defined as its current position and function value within a
design space and the agents learn to take favorable actions that maximize
reward, which is based on the final value of the objective function. The
proposed approach is tested on various benchmark optimization functions and
compared to the performance of other global optimization strategies.
Furthermore, the effect of changing the number of agents, as well as the
generalization capabilities of the trained agents are investigated. The results
show superior performance compared to the other optimizers, desired scaling
when the number of agents is varied, and acceptable performance even when
applied to unseen functions. On a broader scale, the results show promise for
the rapid development of domain-specific optimizers.
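The abstract specifies the per-agent state (position plus objective value) and a reward derived from the objective, but not the full environment. The toy sketch below illustrates one way such a swarm interface could look; the SwarmEnv class, the per-step improvement reward, and the random stand-in policy are illustrative assumptions, not the paper's actual design (the paper bases reward on the final objective value).

```python
import numpy as np

def sphere(x):
    """Stand-in for an expensive black-box objective."""
    return float(np.sum(x ** 2))

class SwarmEnv:
    """Toy swarm environment: each agent's state is its position and
    objective value, as in the abstract. The per-step improvement
    reward below is an assumption for illustration only."""
    def __init__(self, objective, n_agents=5, dim=2, bounds=(-5.0, 5.0)):
        self.objective, self.n_agents, self.dim = objective, n_agents, dim
        self.low, self.high = bounds

    def reset(self, rng):
        self.pos = rng.uniform(self.low, self.high, (self.n_agents, self.dim))
        self.val = np.array([self.objective(p) for p in self.pos])
        return np.hstack([self.pos, self.val[:, None]])  # per-agent states

    def step(self, actions):
        self.pos = np.clip(self.pos + actions, self.low, self.high)
        new_val = np.array([self.objective(p) for p in self.pos])
        reward = self.val - new_val  # positive when an agent improves
        self.val = new_val
        return np.hstack([self.pos, self.val[:, None]]), reward

# Random policy as a placeholder for the learned RL policy.
rng = np.random.default_rng(0)
env = SwarmEnv(sphere)
state = env.reset(rng)
for _ in range(20):
    actions = rng.normal(scale=0.3, size=(env.n_agents, env.dim))
    state, reward = env.step(actions)
print("best value found:", env.val.min())
```

In the paper, the random policy would be replaced by a policy network trained with multi-agent reinforcement learning to minimize the number of function evaluations.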
Related papers
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
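ZOPO's NTK-derived Gaussian process and prompt-space embedding are beyond a short sketch; the snippet below only illustrates the broader pattern of steering zeroth-order updates with a GP surrogate. The RBF kernel, step sizes, and toy objective are stand-ins, not the paper's components.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d / ls ** 2)

def gp_mean(Xq, X, y, noise=1e-6):
    """Posterior mean of a GP fitted to the queried points."""
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xq, X) @ np.linalg.solve(K, y)

def surrogate_zo_step(x, X, y, lr=0.1, mu=1e-2, rng=None):
    """One update: a random two-point zeroth-order gradient probe,
    evaluated on the cheap GP surrogate rather than the true function."""
    u = rng.standard_normal(x.shape)
    f_plus = gp_mean((x + mu * u)[None, :], X, y)[0]
    f_minus = gp_mean((x - mu * u)[None, :], X, y)[0]
    grad_est = (f_plus - f_minus) / (2 * mu) * u
    return x - lr * grad_est

rng = np.random.default_rng(1)
f = lambda x: np.sum((x - 1.0) ** 2)            # true objective (stand-in)
X = rng.uniform(-2, 2, (8, 2)); y = np.array([f(p) for p in X])
x = X[np.argmin(y)].copy()                      # start from best observed
for _ in range(30):
    x = surrogate_zo_step(x, X, y, rng=rng)
    X = np.vstack([X, x]); y = np.append(y, f(x))  # query the true function
print("best:", y.min())
```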
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Advancements in Optimization: Adaptive Differential Evolution with Diversification Strategy [0.0]
The study employs single-objective optimization in a two-dimensional space and runs ADEDS on each of the benchmark functions with multiple iterations.
ADEDS consistently outperforms standard DE for a variety of optimization challenges, including functions with numerous local optima, plate-shaped, valley-shaped, stretched-shaped, and noisy functions.
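ADEDS's specific diversification strategy is not spelled out in the summary; the sketch below is a standard DE/rand/1/bin loop with a simple stand-in diversification step (re-seeding individuals that stagnate), evaluated on the Rastrigin function, one of the many-local-optima cases mentioned above.

```python
import numpy as np

def de_with_diversification(f, dim=2, pop=20, iters=200,
                            F=0.8, CR=0.9, stall=25, seed=0):
    """Standard DE/rand/1/bin; re-seeding stagnant individuals is an
    assumed stand-in for ADEDS's diversification strategy."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    fit = np.array([f(x) for x in X])
    since_improved = np.zeros(pop, dtype=int)
    for _ in range(iters):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3,
                                 replace=False)
            mutant = X[a] + F * (X[b] - X[c])           # DE/rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fit[i]:
                X[i], fit[i], since_improved[i] = trial, ft, 0
            else:
                since_improved[i] += 1
            if since_improved[i] > stall:               # diversification step
                X[i] = rng.uniform(-5, 5, dim)
                fit[i], since_improved[i] = f(X[i]), 0
    return X[np.argmin(fit)], fit.min()

rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(de_with_diversification(rastrigin))
```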
arXiv Detail & Related papers (2023-10-02T10:05:41Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD)
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
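Note the Guacamol tasks operate on discrete molecular structures; the continuous toy below only illustrates the ZO-signGD estimator itself: average random two-point gradient estimates, then step by the sign of each coordinate.

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.05, mu=1e-3, queries=10, iters=100, seed=0):
    """Zeroth-order sign gradient descent: estimate the gradient from
    random two-point function probes, then use only its signs."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(x)
        for _ in range(queries):
            u = rng.standard_normal(x.shape)
            grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x -= lr * np.sign(grad / queries)
    return x, f(x)

# Stand-in objective; real Guacamol objectives score molecules.
f = lambda x: np.sum((x - 2.0) ** 2)
print(zo_sign_gd(f, x0=np.zeros(3)))
```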
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
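For context, a sketch of one such popular acquisition function, expected improvement under a Gaussian posterior (written here for minimisation), is given below; the inputs are illustrative, and the paper's entropy-based derivation is not reproduced.

```python
from math import erf, exp, sqrt, pi

def expected_improvement(mu, sigma, best, xi=0.01):
    """Closed-form EI for minimisation under a Gaussian posterior,
    one of the acquisition functions recovered as a special case."""
    if sigma <= 0:
        return 0.0
    z = (best - mu - xi) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))      # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)    # standard normal PDF
    return (best - mu - xi) * Phi + sigma * phi

# Illustrative posterior at one candidate point.
print(expected_improvement(mu=0.2, sigma=0.5, best=0.3))
```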
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
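The paper's super-tree construction and Monte Carlo tree search are not reproduced here; the sketch below only illustrates equivalent-form rejection when sampling optimizer-update expression trees. The leaves g (gradient) and m (momentum) and the operator set are assumptions.

```python
import random
import sympy as sp

g, m = sp.symbols("g m")              # assumed leaves: gradient, momentum
LEAVES = [g, m, sp.Integer(1), sp.Integer(2)]
OPS = [sp.Add, sp.Mul, lambda a, b: a / b, lambda a, b: a - b]

def random_update_rule(depth=2, rng=random):
    """Sample a random math-expression tree for an optimizer update."""
    if depth == 0:
        return rng.choice(LEAVES)
    op = rng.choice(OPS)
    return op(random_update_rule(depth - 1, rng),
              random_update_rule(depth - 1, rng))

seen, unique = set(), []
rng = random.Random(0)
while len(unique) < 10:
    expr = random_update_rule(rng=rng)
    key = sp.srepr(sp.simplify(expr))  # canonical form for equivalence check
    if key in seen:
        continue                       # reject: equivalent form already seen
    seen.add(key)
    unique.append(expr)
print(unique)
```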
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Bayesian Optimization with Informative Covariance [13.113313427848828]
We propose novel informative covariance functions for optimization, leveraging nonstationarity to encode preferences for certain regions of the search space.
We demonstrate that the proposed functions can increase the sample efficiency of Bayesian optimization in high dimensions, even under weak prior information.
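The paper's informative covariance functions are its own contribution; as a generic example of nonstationarity encoding a regional preference, the sketch below uses the standard Gibbs kernel with an assumed input-dependent lengthscale that is shortest near x = 0.

```python
import numpy as np

def gibbs_kernel(x1, x2, lengthscale_fn):
    """Gibbs (nonstationary) kernel in 1-D: the lengthscale varies with
    the input, so the prior resolves preferred regions more finely."""
    l1, l2 = lengthscale_fn(x1), lengthscale_fn(x2)
    pre = np.sqrt(2.0 * l1 * l2 / (l1**2 + l2**2))
    return pre * np.exp(-((x1 - x2) ** 2) / (l1**2 + l2**2))

# Assumed preference: shorter lengthscale (finer detail) near x = 0.
lengthscale = lambda x: 0.2 + 0.8 * np.abs(x)

print(gibbs_kernel(0.1, 0.2, lengthscale))   # inside the preferred region
print(gibbs_kernel(3.0, 3.1, lengthscale))   # far away: smoother prior
```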
arXiv Detail & Related papers (2022-08-04T15:05:11Z)
- On the development of a Bayesian optimisation framework for complex unknown systems [11.066706766632578]
This paper studies and compares common Bayesian optimisation algorithms empirically on a range of synthetic test functions.
It investigates the choice of acquisition function and number of training samples, as well as exact calculation of acquisition functions versus Monte Carlo based approaches.
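As a concrete illustration of that comparison, the snippet below checks a Monte Carlo estimate of expected improvement against its closed form; the posterior values are made up for illustration.

```python
import numpy as np
from math import erf, exp, sqrt, pi

mu, sigma, best = 0.2, 0.5, 0.3   # assumed posterior at a candidate point

# Exact (closed-form) expected improvement for minimisation.
z = (best - mu) / sigma
ei_exact = ((best - mu) * 0.5 * (1 + erf(z / sqrt(2)))
            + sigma * exp(-0.5 * z * z) / sqrt(2 * pi))

# Monte Carlo estimate: average improvement over posterior samples.
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, 100_000)
ei_mc = np.mean(np.maximum(best - samples, 0.0))

print(ei_exact, ei_mc)  # the two estimates should agree closely
```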
arXiv Detail & Related papers (2022-07-19T09:50:34Z)
- Inverse Bayesian Optimization: Learning Human Search Strategies in a Sequential Optimization Task [0.10499611180329801]
In this paper, we explore the inverse problem of Bayesian optimization.
We estimate the agent's latent acquisition function based on observed search paths.
We illustrate our methods by analyzing human behavior from an experiment.
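The paper's estimation machinery is not reproduced here, but the sketch below illustrates the general idea with an assumed softmax choice model: candidate acquisition functions are scored by the log-likelihood they assign to an observed sequence of query choices. All names and the synthetic "human" are hypothetical.

```python
import numpy as np

def likelihood_of_path(choices, mus, sigmas, acq, temp=1.0):
    """Softmax choice model: P(choice) proportional to exp(acq / temp).
    Returns the log-likelihood of the observed search path."""
    ll = 0.0
    for chosen, mu, sigma in zip(choices, mus, sigmas):
        scores = acq(mu, sigma) / temp
        scores -= scores.max()                 # numerical stability
        ll += scores[chosen] - np.log(np.sum(np.exp(scores)))
    return ll

ucb = lambda mu, sigma: -mu + 2.0 * sigma      # exploratory (minimisation)
greedy = lambda mu, sigma: -mu                 # pure exploitation

# Synthetic "human" who in fact follows the UCB-like rule.
rng = np.random.default_rng(0)
mus = [rng.normal(size=50) for _ in range(10)]
sigmas = [rng.uniform(0.1, 1.0, 50) for _ in range(10)]
choices = [int(np.argmax(-m + 2.0 * s)) for m, s in zip(mus, sigmas)]

print(likelihood_of_path(choices, mus, sigmas, ucb),
      likelihood_of_path(choices, mus, sigmas, greedy))  # UCB scores higher
```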
arXiv Detail & Related papers (2021-04-16T15:40:34Z)
- Are we Forgetting about Compositional Optimisers in Bayesian Optimisation? [66.39551991177542]
Bayesian optimisation presents a sample-efficient methodology for global optimisation.
Within this framework, a crucial performance-determining subroutine is the maximisation of the acquisition function.
We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
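The compositional reformulation itself is involved; the sketch below shows only the baseline subroutine in question, multi-start first-order ascent on an acquisition surface, with a toy surface and assumed hyperparameters.

```python
import numpy as np

def maximise_acquisition(acq, dim, n_starts=16, lr=0.05, iters=100, seed=0):
    """Multi-start first-order ascent on the acquisition surface, the
    inner subroutine whose quality this paper shows matters so much."""
    rng = np.random.default_rng(seed)

    def num_grad(x, eps=1e-5):
        g = np.zeros(dim)
        for i in range(dim):
            e = np.zeros(dim); e[i] = eps
            g[i] = (acq(x + e) - acq(x - e)) / (2 * eps)
        return g

    best_x, best_v = None, -np.inf
    for x in rng.uniform(-3, 3, (n_starts, dim)):   # restarts beat local optima
        for _ in range(iters):
            x = x + lr * num_grad(x)
        if acq(x) > best_v:
            best_x, best_v = x, acq(x)
    return best_x, best_v

# Toy multimodal acquisition surface standing in for EI/UCB etc.
acq = lambda x: -np.sum((x - 1.0) ** 2) + np.sin(5 * x).sum()
print(maximise_acquisition(acq, dim=2))
```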
arXiv Detail & Related papers (2020-12-15T12:18:38Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation for Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- Composition of kernel and acquisition functions for High Dimensional Bayesian Optimization [0.1749935196721634]
We use the additivity of the objective function to structure both the kernel and the acquisition function of the Bayesian Optimization.
This approach makes the learning/updating of the probabilistic surrogate model more efficient.
Results are presented for a real-life application, namely the control of pumps in urban water distribution systems.
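The paper maps the objective's additive structure onto both the kernel and the acquisition function; the sketch below shows only the kernel side, under an assumed grouping of input dimensions.

```python
import numpy as np

def additive_kernel(x, y, groups, ls=1.0):
    """If f(x) = f_1(x_{G1}) + f_2(x_{G2}) + ..., the GP kernel decomposes
    as a sum of sub-kernels, each acting on one group of dimensions."""
    return sum(np.exp(-0.5 * np.sum((x[g] - y[g]) ** 2) / ls ** 2)
               for g in groups)

x = np.array([0.1, 0.4, 0.7, 0.9])
y = np.array([0.2, 0.3, 0.8, 0.5])
groups = [np.array([0, 1]), np.array([2, 3])]   # assumed additive grouping
print(additive_kernel(x, y, groups))
```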
arXiv Detail & Related papers (2020-03-09T15:45:57Z)