GBO: A Multi-Granularity Optimization Algorithm via Granular-ball for Continuous Problems
- URL: http://arxiv.org/abs/2303.12807v2
- Date: Tue, 18 Feb 2025 10:07:23 GMT
- Title: GBO: A Multi-Granularity Optimization Algorithm via Granular-ball for Continuous Problems
- Authors: Shuyin Xia, Xinyu Lin, Guan Wang, De-Gang Chen, Sen Zhao, Guoyin Wang, Jing Liang
- Abstract summary: This paper proposes a new multi-granularity evolutionary optimization method, namely the Granular-ball Optimization (GBO) algorithm. Using granular-balls instead of traditional points for optimization increases the diversity and robustness of the random search process. Experiments on commonly used benchmarks have shown that GBO outperforms popular and advanced evolutionary algorithms.
- Score: 18.229944332377755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimization problems aim to find the optimal solution, and such problems are becoming increasingly complex and difficult to solve. Traditional evolutionary optimization methods often overlook the granular characteristics of the solution space. In many real optimization scenarios, the solution space is typically partitioned into sub-regions with varying distributions, and these sub-regions differ in granularity, i.e., in search potential and difficulty. Exploiting this granular structure has two advantages: the number of coarse-grained regions is smaller than the number of points, so the computation is more efficient, and coarse-grained characteristics are not easily affected by individual fine-grained sample points, so the computation is more robust. To this end, this paper proposes a new multi-granularity evolutionary optimization method, the Granular-ball Optimization (GBO) algorithm, which characterizes and searches the solution space from coarse to fine. Specifically, using granular-balls instead of traditional points for optimization increases the diversity and robustness of the random search process. At the same time, the search range in each iteration is limited by the radius of the granular-balls, covering the solution space from large to small. A granular-ball splitting mechanism continuously splits and evolves large granular-balls into smaller ones to refine the solution space. Extensive experiments on commonly used benchmarks show that GBO outperforms popular and advanced evolutionary algorithms. The code can be found in the supporting materials.
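The abstract describes the search mechanics only in prose, so here is a minimal Python sketch of the coarse-to-fine granular-ball idea it outlines: balls defined by a center and a radius, sampling restricted to each ball's radius, and a splitting step that spawns smaller balls around promising samples. The class and function names (GranularBall, gbo_sketch), the concrete splitting rule, and all parameter values are illustrative assumptions, not the authors' reference implementation; the official code is provided in the paper's supporting materials.

```python
import numpy as np

def sphere(x):
    """Example objective: minimize the sphere function."""
    return float(np.sum(x ** 2))

class GranularBall:
    """A ball in the solution space, described by a center and a radius."""
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)

    def sample(self, rng, n):
        """Draw n points uniformly inside the ball; the radius bounds the search range."""
        d = self.center.size
        dirs = rng.normal(size=(n, d))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        radii = self.radius * rng.uniform(size=(n, 1)) ** (1.0 / d)
        return self.center + radii * dirs

def gbo_sketch(f, lower, upper, n_balls=8, n_samples=16, n_iters=40, seed=0):
    """Coarse-to-fine search with a population of granular-balls (illustrative only)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, dtype=float), np.asarray(upper, dtype=float)
    # Start coarse: a few large balls that together cover the solution space.
    radius0 = 0.5 * float(np.max(upper - lower))
    balls = [GranularBall(rng.uniform(lower, upper), radius0) for _ in range(n_balls)]
    best_x, best_f = None, np.inf

    for _ in range(n_iters):
        children = []
        for ball in balls:
            pts = np.clip(ball.sample(rng, n_samples), lower, upper)
            vals = np.array([f(p) for p in pts])
            order = np.argsort(vals)
            if vals[order[0]] < best_f:
                best_f, best_x = vals[order[0]], pts[order[0]]
            # Splitting step (assumed form): each ball spawns two smaller
            # child balls centered on its best samples, refining the search
            # from coarse to fine.
            for idx in order[:2]:
                children.append(GranularBall(pts[idx], 0.7 * ball.radius))
        # Keep only the most promising children to bound the population size.
        children.sort(key=lambda b: f(b.center))
        balls = children[:n_balls]
    return best_x, best_f

if __name__ == "__main__":
    x_best, f_best = gbo_sketch(sphere, lower=-5 * np.ones(5), upper=5 * np.ones(5))
    print("best value found:", f_best)
```

The shrinking radius (a factor of 0.7 per split in this sketch) is what realizes the "large to small" coverage of the solution space described above; any monotonically decreasing schedule would serve the same illustrative purpose.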
Related papers
- GBFRS: Robust Fuzzy Rough Sets via Granular-ball Computing [48.33779268699777]
Fuzzy rough set theory is effective for processing datasets with complex attributes.
Most existing models operate at the finest granularity, rendering them inefficient and sensitive to noise.
This paper proposes integrating multi-granularity granular-ball computing into fuzzy rough set theory, using granular-balls to replace sample points.
arXiv Detail & Related papers (2025-01-30T15:09:26Z) - Scalable Bayesian Optimization via Focalized Sparse Gaussian Processes [8.40647440727154]
We argue that Bayesian optimization algorithms with sparse GPs can more efficiently allocate their representational power to relevant regions of the search space.
We show that FocalBO can efficiently leverage large amounts of offline and online data to achieve state-of-the-art performance on robot morphology design and to control a 585-dimensional musculoskeletal system.
arXiv Detail & Related papers (2024-12-29T06:36:15Z) - Equitable and Fair Performance Evaluation of Whale Optimization Algorithm [4.0814527055582746]
It is essential that all algorithms are evaluated exhaustively and intelligently.
However, evaluating the effectiveness of optimization algorithms equitably and fairly is not an easy process, for various reasons.
arXiv Detail & Related papers (2023-09-04T06:32:02Z) - HomOpt: A Homotopy-Based Hyperparameter Optimization Method [10.11271414863925]
We propose HomOpt, a data-driven approach based on a generalized additive model (GAM) surrogate combined with homotopy optimization.
We show how HomOpt can boost the performance and effectiveness of any given method with faster convergence to the optimum on continuous, discrete, and categorical domain spaces.
arXiv Detail & Related papers (2023-08-07T06:01:50Z) - Accelerated Distributed Aggregative Optimization [5.5491171448159715]
We propose two novel algorithms named DAGT-HB and DAGT-NES for solving the distributed aggregative optimization problem.
We show that the DAGT-HB and DAGT-NES algorithms can converge to an optimal solution at a global $\mathbf{R}$-linear convergence rate.
arXiv Detail & Related papers (2023-04-17T08:11:01Z) - Improving Performance Insensitivity of Large-scale Multiobjective Optimization via Monte Carlo Tree Search [7.34812867861951]
We propose an evolutionary algorithm for solving large-scale multiobjective optimization problems based on Monte Carlo tree search.
The proposed method samples the decision variables to construct new nodes on the Monte Carlo tree for optimization and evaluation.
It selects nodes with good evaluation for further search to reduce the performance sensitivity caused by large-scale decision variables.
arXiv Detail & Related papers (2023-04-08T17:15:49Z) - Research on Efficient Fuzzy Clustering Method Based on Local Fuzzy Granular balls [67.33923111887933]
In this paper, the data is fuzzily iterated using granular-balls, and the membership degree of each data point considers only the two granular-balls in which it is located.
The resulting set of fuzzy granular-balls admits a wider range of processing methods for different data scenarios.
arXiv Detail & Related papers (2023-03-07T01:52:55Z) - Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z) - An Improved Greedy Algorithm for Subset Selection in Linear Estimation [5.994412766684842]
We consider a subset selection problem in a spatial field where we seek to find a set of k locations whose observations provide the best estimate of the field value at a finite set of prediction locations.
One approach for observation selection is to perform a grid discretization of the space and obtain an approximate solution using the greedy algorithm.
We propose a method to reduce the computational complexity by considering a search space consisting only of prediction locations and centroids of cliques formed by the prediction locations.
arXiv Detail & Related papers (2022-03-30T05:52:16Z) - Recommender System Expedited Quantum Control Optimization [0.0]
Quantum control optimization algorithms are routinely used to generate optimal quantum gates or efficient quantum state transfers.
There are two main challenges in designing efficient optimization algorithms, namely overcoming the sensitivity to local optima and improving the computational speed.
Here, we propose and demonstrate the use of a machine learning method, specifically the recommender system (RS), to deal with the latter challenge.
arXiv Detail & Related papers (2022-01-29T10:25:41Z) - Density-Based Clustering with Kernel Diffusion [59.4179549482505]
A naive density corresponding to the indicator function of a unit $d$-dimensional Euclidean ball is commonly used in density-based clustering algorithms.
We propose a new kernel diffusion density function, which is adaptive to data of varying local distributional characteristics and smoothness.
arXiv Detail & Related papers (2021-10-11T09:00:33Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
arXiv Detail & Related papers (2021-02-17T21:03:05Z) - Intermediate Layer Optimization for Inverse Problems using Deep Generative Models [86.29330440222199]
ILO is a novel optimization algorithm for solving inverse problems with deep generative models.
We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN-2 and PULSE for a wide range of inverse problems.
arXiv Detail & Related papers (2021-02-15T06:52:22Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Bayesian Variational Optimization for Combinatorial Spaces [0.0]
Broad applications include the study of molecules, proteins, DNA, device structures and quantum circuit designs.
Optimization over categorical spaces is needed to find optimal or Pareto-optimal solutions.
We introduce a variational Bayesian optimization method that combines variational optimization and continuous relaxations.
arXiv Detail & Related papers (2020-11-03T20:56:13Z) - Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z) - Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
arXiv Detail & Related papers (2020-09-05T14:24:40Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)