A new derivative-free optimization method: Gaussian Crunching Search
- URL: http://arxiv.org/abs/2307.14359v1
- Date: Mon, 24 Jul 2023 16:17:53 GMT
- Title: A new derivative-free optimization method: Gaussian Crunching Search
- Authors: Benny Wong
- Abstract summary: We introduce a novel optimization method called Gaussian Crunching Search (GCS).
Inspired by the behaviour of particles in a Gaussian distribution, GCS aims to efficiently explore the solution space and converge towards the global optimum.
This research paper serves as a valuable resource for researchers, practitioners, and students interested in optimization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimization methods are essential in solving complex problems across various
domains. In this research paper, we introduce a novel optimization method
called Gaussian Crunching Search (GCS). Inspired by the behaviour of particles
in a Gaussian distribution, GCS aims to efficiently explore the solution space
and converge towards the global optimum. We present a comprehensive analysis of
GCS, including its working mechanism, and potential applications. Through
experimental evaluations and comparisons with existing optimization methods, we
highlight the advantages and strengths of GCS. This research paper serves as a
valuable resource for researchers, practitioners, and students interested in
optimization, providing insights into the development and potential of Gaussian
Crunching Search as a new and promising approach.
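The abstract describes GCS only at a high level (particles drawn from a Gaussian distribution that explore the solution space), so the following is a minimal sketch under stated assumptions rather than the paper's actual algorithm: candidate points are sampled from a Gaussian centred on the incumbent solution, and the distribution is progressively "crunched" (its standard deviation shrunk). The function and parameter names (`gaussian_crunching_search`, `sigma`, `shrink`, `n_particles`) are illustrative inventions, not taken from the paper.

```python
import numpy as np

def gaussian_crunching_search(f, x0, sigma=1.0, shrink=0.9,
                              n_particles=30, n_iters=200, seed=0):
    """Minimal GCS-style derivative-free minimizer (illustrative only).

    Each iteration draws candidate "particles" from a Gaussian centred on
    the current best solution, then "crunches" the distribution by shrinking
    its standard deviation so the search concentrates on promising regions.
    """
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_f = f(best_x)
    for _ in range(n_iters):
        # Sample particles from a Gaussian around the current best point.
        particles = best_x + sigma * rng.standard_normal((n_particles, best_x.size))
        values = np.apply_along_axis(f, 1, particles)
        i = np.argmin(values)
        if values[i] < best_f:
            best_x, best_f = particles[i], values[i]
        # "Crunch" the distribution: narrow the search around the best point.
        sigma *= shrink
    return best_x, best_f

# Example: minimize the Rosenbrock function in 2D.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x_opt, f_opt = gaussian_crunching_search(rosen, x0=[-1.5, 2.0])
print(x_opt, f_opt)
```

In this sketch the shrink factor trades exploration against exploitation: a value close to 1 keeps the search global for longer, while a smaller value concentrates the particles quickly around the current best point.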
Related papers
- Rapid optimization in high dimensional space by deep kernel learning augmented genetic algorithms [0.26716003713321473]
Deep Kernel Learning (DKL) efficiently navigates the spaces of preselected candidate structures but lacks generative capabilities.
This study introduces an approach that amalgamates the generative power of GAs to create new candidates with the efficiency of DKL-based surrogate models.
We demonstrate the effectiveness of this approach through the optimization of the FerroSIM model, showcasing its broad applicability to diverse challenges.
arXiv Detail & Related papers (2024-10-04T06:18:17Z) - Optimizing Feature Selection with Genetic Algorithms: A Review of Methods and Applications [4.395397502990339]
Genetic algorithms (GAs) have been proposed to remedy the drawbacks of conventional feature selection methods by avoiding local optima and improving the selection process itself.
This manuscript presents a sweeping review of GA-based feature selection techniques and their effectiveness across different domains.
arXiv Detail & Related papers (2024-09-05T22:28:42Z) - Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z) - Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of classical GP-UCB, while the additional exploration step facilitates their convergence.
arXiv Detail & Related papers (2024-01-30T14:16:06Z) - Enhancing Optimization Performance: A Novel Hybridization of Gaussian
Crunching Search and Powell's Method for Derivative-Free Optimization [0.0]
We present a novel approach to enhance optimization performance through the hybridization of Gaussian Crunching Search (GCS) and Powell's Method for derivative-free optimization.
This hybrid approach opens up new possibilities for optimizing complex systems and finding optimal solutions in a range of applications; see the sketch after this list.
arXiv Detail & Related papers (2023-08-09T01:27:04Z) - Extrinsic Bayesian Optimizations on Manifolds [1.3477333339913569]
We propose an extrinsic Bayesian optimization (eBO) framework for general optimization problems on manifolds.
Our approach employs extrinsic Gaussian processes by first embedding the manifold onto some higher dimensional Euclidean space.
This leads to efficient and scalable algorithms for optimization over complex manifolds.
arXiv Detail & Related papers (2022-12-21T06:10:12Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD); see the sketch after this list.
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Socio-cognitive Optimization of Time-delay Control Problems using
Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are general-purpose optimization algorithms intended for difficult problems that cannot be solved by classic approaches.
In this paper we construct a novel socio-cognitive metaheuristic based on castes and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z) - Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel GP regression model that incorporates subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z) - Sequential Subspace Search for Functional Bayesian Optimization
Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, we reduce the convergence time of the proposed method by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
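Returning to the GCS/Powell hybridization entry above: neither abstract specifies how the two methods are coupled, so the snippet below is only one plausible staging, assuming GCS performs global exploration and its best point seeds a local, derivative-free refinement with Powell's method via `scipy.optimize.minimize(method="Powell")`. It reuses the hypothetical `gaussian_crunching_search` and `rosen` from the sketch after the abstract.

```python
from scipy.optimize import minimize

def gcs_powell_hybrid(f, x0, **gcs_kwargs):
    """Illustrative two-stage hybrid: GCS for global exploration,
    Powell's method for derivative-free local refinement."""
    # Stage 1: Gaussian-perturbation search (sketched above) explores globally.
    x_gcs, _ = gaussian_crunching_search(f, x0, **gcs_kwargs)
    # Stage 2: Powell's method polishes the GCS solution without derivatives.
    result = minimize(f, x_gcs, method="Powell")
    return result.x, result.fun

x_opt, f_opt = gcs_powell_hybrid(rosen, x0=[-1.5, 2.0])
print(x_opt, f_opt)
```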
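Likewise, for the zeroth-order molecule-optimization entry: the summary only names ZO sign-based gradient descent, so this is a minimal sketch of a generic ZO-signGD update on a continuous vector, assuming the standard random-direction finite-difference gradient estimator; the objective and all names here are hypothetical placeholders, not the paper's molecular benchmarks.

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.05, mu=1e-2, n_dirs=20, n_iters=300, seed=0):
    """Illustrative zeroth-order sign-based gradient descent (ZO-signGD).

    The gradient is never computed analytically; it is estimated from
    function-value differences along random directions, and only the sign
    of the estimate is used for the update.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        grad_est = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.standard_normal(x.size)
            # Forward-difference estimate of the directional derivative.
            grad_est += (f(x + mu * u) - f(x)) / mu * u
        grad_est /= n_dirs
        x -= lr * np.sign(grad_est)  # sign-based update
    return x, f(x)

# Example on a simple quadratic objective.
quad = lambda x: float(np.sum((x - 3.0) ** 2))
x_opt, f_opt = zo_sign_gd(quad, x0=np.zeros(5))
print(x_opt, f_opt)
```

Only the sign of the estimated gradient enters the update, which makes the step size insensitive to the high variance typical of zeroth-order gradient estimates.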
This list is automatically generated from the titles and abstracts of the papers in this site.