A new derivative-free optimization method: Gaussian Crunching Search
- URL: http://arxiv.org/abs/2307.14359v1
- Date: Mon, 24 Jul 2023 16:17:53 GMT
- Title: A new derivative-free optimization method: Gaussian Crunching Search
- Authors: Benny Wong
- Abstract summary: We introduce a novel optimization method called Gaussian Crunching Search (GCS).
Inspired by the behaviour of particles in a Gaussian distribution, GCS aims to efficiently explore the solution space and converge towards the global optimum.
This research paper serves as a valuable resource for researchers, practitioners, and students interested in optimization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimization methods are essential in solving complex problems across various
domains. In this research paper, we introduce a novel optimization method
called Gaussian Crunching Search (GCS). Inspired by the behaviour of particles
in a Gaussian distribution, GCS aims to efficiently explore the solution space
and converge towards the global optimum. We present a comprehensive analysis of
GCS, including its working mechanism and potential applications. Through
experimental evaluations and comparisons with existing optimization methods, we
highlight the advantages and strengths of GCS. This research paper serves as a
valuable resource for researchers, practitioners, and students interested in
optimization, providing insights into the development and potential of Gaussian
Crunching Search as a new and promising approach.
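The abstract above does not spell out the GCS update rule, so the following is only a minimal illustrative sketch, in Python, of a Gaussian-perturbation-based derivative-free search that is consistent with the description (candidate "particles" drawn from a Gaussian around the incumbent, with the distribution progressively "crunched" towards the best point found). The function name, the shrinking-sigma schedule, and all parameter values are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np

def gaussian_crunching_search(f, x0, sigma0=1.0, n_particles=20,
                              n_iters=200, shrink=0.95, seed=0):
    """Illustrative Gaussian-perturbation search (not the paper's reference code).

    At every iteration, candidate points ("particles") are drawn from a Gaussian
    centred on the current best point; the Gaussian is gradually "crunched"
    (its standard deviation shrunk) so exploration narrows towards the best
    region found so far.
    """
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_f = f(best_x)
    sigma = sigma0
    for _ in range(n_iters):
        # Sample particles around the incumbent from N(best_x, sigma^2 I).
        particles = best_x + sigma * rng.standard_normal((n_particles, best_x.size))
        values = np.apply_along_axis(f, 1, particles)
        i = np.argmin(values)
        if values[i] < best_f:          # keep the best particle found so far
            best_x, best_f = particles[i], values[i]
        sigma *= shrink                  # "crunch" the sampling distribution
    return best_x, best_f

# Smoke test: minimise the Rosenbrock function from a poor starting point.
rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x_star, f_star = gaussian_crunching_search(rosenbrock, x0=[-1.5, 2.0])
print(x_star, f_star)
```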
Related papers
- Optimistic ε-Greedy Exploration for Cooperative Multi-Agent Reinforcement Learning [16.049852176246038]
We propose Optimistic ε-Greedy Exploration, focusing on enhancing exploration to correct value estimations.
We introduce an optimistic updating network to identify optimal actions and sample actions from its distribution with a probability of ε during exploration.
Experimental results in various environments reveal that Optimistic ε-Greedy Exploration effectively prevents the algorithm from converging to suboptimal solutions.
arXiv Detail & Related papers (2025-02-05T12:06:54Z)
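The entry above only describes the action-sampling rule at a high level; the snippet below is a heavily hedged, single-agent sketch of that idea: with probability ε an exploration action is drawn from a softmax over optimistic value estimates instead of uniformly at random, otherwise the greedy action is taken. The optimistic estimates are a stand-in array here; the paper's optimistic updating network and cooperative multi-agent machinery are not reproduced.

```python
import numpy as np

def optimistic_epsilon_greedy(q_values, q_optimistic, epsilon, rng, temperature=1.0):
    """Hedged sketch: exploration actions follow a softmax over optimistic
    value estimates rather than a uniform distribution (an assumption-based
    illustration, not the paper's exact update)."""
    if rng.random() < epsilon:
        # Explore: bias sampling towards actions the optimistic estimate prefers.
        logits = np.asarray(q_optimistic, dtype=float) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))
    # Exploit: act greedily with respect to the standard value estimate.
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
action = optimistic_epsilon_greedy(q_values=[0.1, 0.5, 0.2],
                                   q_optimistic=[0.3, 0.6, 0.9],
                                   epsilon=0.2, rng=rng)
print(action)
```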
- Equation discovery framework EPDE: Towards a better equation discovery [50.79602839359522]
We enhance the EPDE algorithm -- an evolutionary optimization-based discovery framework.
Our approach generates terms using fundamental building blocks such as elementary functions and individual differentials.
We validate our algorithm's noise resilience and overall performance by comparing its results with those from the state-of-the-art equation discovery framework SINDy.
arXiv Detail & Related papers (2024-12-28T15:58:44Z)
- Integrating Chaotic Evolutionary and Local Search Techniques in Decision Space for Enhanced Evolutionary Multi-Objective Optimization [1.8130068086063336]
This paper focuses on both Single-Objective Multi-Modal Optimization (SOMMOP) and Multi-Objective Optimization (MOO).
In SOMMOP, we integrate chaotic evolution with niching techniques, as well as Persistence-Based Clustering combined with Gaussian mutation.
For MOO, we extend these methods into a comprehensive framework that incorporates Uncertainty-Based Selection and Adaptive Tuning, and introduces a radius (R) concept in deterministic crowding.
arXiv Detail & Related papers (2024-11-12T15:18:48Z)
- Rapid optimization in high dimensional space by deep kernel learning augmented genetic algorithms [0.26716003713321473]
Deep Kernel Learning (DKL) efficiently navigates the spaces of preselected candidate structures but lacks generative capabilities.
This study introduces an approach that amalgamates the generative power of GAs to create new candidates with the efficiency of DKL-based surrogate models.
We demonstrate the effectiveness of this approach through the optimization of the FerroSIM model, showcasing its broad applicability to diverse challenges.
arXiv Detail & Related papers (2024-10-04T06:18:17Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Neural Tangent Kernel-derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Enhancing Optimization Performance: A Novel Hybridization of Gaussian Crunching Search and Powell's Method for Derivative-Free Optimization [0.0]
We present a novel approach to enhance optimization performance through the hybridization of Gaussian Crunching Search (GCS) and Powell's Method for derivative-free optimization.
This hybrid approach opens up new possibilities for optimizing complex systems and finding optimal solutions in a range of applications.
arXiv Detail & Related papers (2023-08-09T01:27:04Z)
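The hybrid paper above states only that GCS is combined with Powell's method, so this is a speculative two-stage sketch of one natural combination: a GCS-style Gaussian exploration phase selects a promising starting point, which SciPy's derivative-free Powell implementation then refines locally. The staging and all parameters are assumptions, not the paper's published algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def gcs_then_powell(f, x0, sigma=1.0, n_samples=200, seed=0):
    """Speculative hybrid: Gaussian sampling for global exploration,
    then Powell's derivative-free method for local refinement."""
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    # Stage 1 (GCS-style exploration): sample Gaussian perturbations of x0
    # and keep the best candidate as the starting point for refinement.
    candidates = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    best = min(candidates, key=f)
    # Stage 2 (local refinement): Powell's method needs no derivatives.
    result = minimize(f, best, method="Powell")
    return result.x, result.fun

# Smoke test on the Himmelblau function (four global minima with value 0).
himmelblau = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
x_star, f_star = gcs_then_powell(himmelblau, x0=[0.0, 0.0])
print(x_star, f_star)
```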
- Extrinsic Bayesian Optimizations on Manifolds [1.3477333339913569]
We propose an extrinsic Bayesian optimization (eBO) framework for general optimization problems on manifolds.
Our approach is to employ extrinsic Gaussian processes by first embedding the manifold into a higher dimensional Euclidean space.
This leads to efficient and scalable algorithms for optimization over complex manifolds.
arXiv Detail & Related papers (2022-12-21T06:10:12Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
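As a concrete illustration of the zeroth-order technique named in the entry above, the sketch below implements a generic ZO sign-based gradient descent update: the gradient is estimated purely from function evaluations along random directions, and only the sign of the estimate drives the step. The molecule objectives and the Guacamol benchmark are not reproduced; the test function here is a plain quadratic.

```python
import numpy as np

def zo_sign_gd(f, x0, step=0.05, mu=1e-2, n_dirs=20, n_iters=300, seed=0):
    """Generic zeroth-order sign-based gradient descent (ZO-signGD) sketch.

    The gradient is estimated only from function evaluations via random
    Gaussian directions, and the parameter update uses the sign of that
    estimate rather than its magnitude.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        grad_est = np.zeros_like(x)
        f_x = f(x)                       # reuse the base evaluation
        for _ in range(n_dirs):
            u = rng.standard_normal(x.size)
            # Forward-difference estimate of the directional derivative along u.
            grad_est += (f(x + mu * u) - f_x) / mu * u
        grad_est /= n_dirs
        x = x - step * np.sign(grad_est)  # only the sign of the estimate is used
    return x, f(x)

# Smoke test on a simple quadratic (not a molecular objective).
sphere = lambda x: float(np.sum(x ** 2))
x_star, f_star = zo_sign_gd(sphere, x0=np.full(10, 3.0))
print(f_star)
```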
- Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian Modeling [68.69431580852535]
We introduce a novel GP regression model to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, we reduce the time to convergence by orders of magnitude with the proposed method.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.