Granular-ball Optimization Algorithm
- URL: http://arxiv.org/abs/2303.12807v1
- Date: Sat, 18 Mar 2023 13:18:21 GMT
- Title: Granular-ball Optimization Algorithm
- Authors: Shuyin Xia, Jiancu Chen, Bin Hou, Guoyin Wang
- Abstract summary: The fine multi-granularity data description ability of granular-balls yields higher global search capability and faster convergence.
In experiments on twenty benchmark functions, GBO outperforms the most popular and state-of-the-art algorithms.
- Score: 6.058433576739089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The existing intelligent optimization algorithms are designed at
the finest granularity, i.e., a point, which leads to weak global search
ability and inefficiency. To address this problem, we propose a novel
multi-granularity optimization algorithm, the granular-ball optimization
algorithm (GBO), by introducing granular-ball computing. GBO uses many
granular-balls to cover the solution space: a large number of small,
fine-grained granular-balls depict the important parts, while a small number
of large, coarse-grained granular-balls depict the inessential parts. This
fine multi-granularity description of the solution space yields higher global
search capability and faster convergence. Experiments on twenty benchmark
functions demonstrate that GBO outperforms the most popular and
state-of-the-art algorithms. Its faster speed, better approximation of the
optimal solution, absence of hyper-parameters, and simpler design make GBO an
all-around replacement for most existing popular intelligent optimization
algorithms.
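The abstract includes no pseudocode; the following is a minimal conceptual sketch of the multi-granularity idea under our own assumptions: each granular-ball is a (center, radius) pair, balls that find improvements shrink to a finer granularity, and the rest stay coarse. Names, update rules, and parameters are illustrative, not the authors' GBO.

    import numpy as np

    def gbo_sketch(f, dim, lower, upper, n_balls=20, iters=100, seed=0):
        # Conceptual sketch only -- not the authors' GBO. Each granular-ball
        # is a (center, radius) pair; balls that find improvements shrink
        # (fine granularity), the rest grow slightly (coarse granularity).
        rng = np.random.default_rng(seed)
        centers = rng.uniform(lower, upper, size=(n_balls, dim))
        radii = np.full(n_balls, (upper - lower) / 4.0)
        best_x, best_val = centers[0].copy(), f(centers[0])
        for _ in range(iters):
            for i in range(n_balls):
                # Sample one candidate point inside ball i.
                x = np.clip(centers[i] + rng.uniform(-radii[i], radii[i], dim),
                            lower, upper)
                val = f(x)
                if val < f(centers[i]):
                    centers[i] = x       # refine a promising region
                    radii[i] *= 0.9
                else:
                    radii[i] *= 1.05     # keep coarse coverage elsewhere
                if val < best_val:
                    best_x, best_val = x.copy(), val
        return best_x, best_val

    # Example: minimize the sphere function on [-10, 10]^5.
    # gbo_sketch(lambda x: float(np.sum(x ** 2)), dim=5, lower=-10.0, upper=10.0)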
Related papers
- Equitable and Fair Performance Evaluation of Whale Optimization Algorithm [4.0814527055582746]
It is essential that all algorithms are evaluated exhaustively and intelligently.
However, evaluating the effectiveness of optimization algorithms equitably and fairly is not an easy process, for various reasons.
arXiv Detail & Related papers (2023-09-04T06:32:02Z)
- HomOpt: A Homotopy-Based Hyperparameter Optimization Method [10.11271414863925]
We propose HomOpt, a data-driven approach based on a generalized additive model (GAM) surrogate combined with homotopy optimization.
We show how HomOpt can boost the performance and effectiveness of any given method with faster convergence to the optimum on continuous, discrete, and categorical domain spaces.
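For intuition, here is a hedged sketch of plain homotopy (continuation) optimization, the idea HomOpt builds on; it omits the paper's GAM surrogate, and the easy starting objective and local-search loop are our own stand-ins.

    import numpy as np

    def homotopy_sketch(f, x0, steps=10, inner_iters=200, step_size=0.1, seed=0):
        # Follow minimizers of h_t(x) = (1 - t) * easy(x) + t * f(x) as the
        # homotopy parameter t moves from 0 (easy convex problem) to 1 (target).
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        easy = lambda z: float(np.sum((z - x) ** 2))  # convex stand-in problem
        for t in np.linspace(0.0, 1.0, steps):
            blend = lambda z, t=t: (1.0 - t) * easy(z) + t * f(z)
            for _ in range(inner_iters):              # crude local search
                cand = x + step_size * rng.standard_normal(x.shape)
                if blend(cand) < blend(x):
                    x = cand
            # the solution of h_t warm-starts the next, harder subproblem
        return x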
arXiv Detail & Related papers (2023-08-07T06:01:50Z)
- Improving Performance Insensitivity of Large-scale Multiobjective Optimization via Monte Carlo Tree Search [7.34812867861951]
We propose an evolutionary algorithm for solving large-scale multiobjective optimization problems based on Monte Carlo tree search.
The proposed method samples the decision variables to construct new nodes on the Monte Carlo tree for optimization and evaluation.
It selects nodes with good evaluations for further search, reducing the performance sensitivity caused by the large number of decision variables.
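As a rough illustration of the mechanism described above (our simplification: single-objective, with node expansion perturbing a random subset of the decision variables):

    import math, random

    class Node:
        def __init__(self, solution, parent=None):
            self.solution = solution            # a full decision vector
            self.parent, self.children = parent, []
            self.visits, self.value = 0, 0.0

    def ucb(node, c=1.4):
        # Standard upper-confidence-bound score for tree search.
        if node.visits == 0:
            return float("inf")
        return (node.value / node.visits
                + c * math.sqrt(math.log(node.parent.visits) / node.visits))

    def mcts_optimize(f, dim, iters=200, step=0.5):
        root = Node([random.uniform(-5, 5) for _ in range(dim)])
        root.visits = 1
        best = list(root.solution)
        for _ in range(iters):
            node = root
            while node.children:                        # selection
                node = max(node.children, key=ucb)
            idx = random.sample(range(dim), max(1, dim // 10))
            child_sol = list(node.solution)
            for i in idx:                               # expansion: perturb
                child_sol[i] += random.gauss(0.0, step) # sampled variables
            child = Node(child_sol, parent=node)
            node.children.append(child)
            reward = -f(child_sol)                      # evaluation
            if f(child_sol) < f(best):
                best = list(child_sol)
            while child is not None:                    # backpropagation
                child.visits += 1
                child.value += reward
                child = child.parent
        return best

    # Example: mcts_optimize(lambda xs: sum(v * v for v in xs), dim=100)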
arXiv Detail & Related papers (2023-04-08T17:15:49Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adapt the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Recommender System Expedited Quantum Control Optimization [0.0]
Quantum control optimization algorithms are routinely used to generate optimal quantum gates or efficient quantum state transfers.
There are two main challenges in designing efficient optimization algorithms, namely overcoming the sensitivity to local optima and improving the computational speed.
Here, we propose and demonstrate the use of a machine learning method, specifically the recommender system (RS), to deal with the latter challenge.
arXiv Detail & Related papers (2022-01-29T10:25:41Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
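For reference, the bilevel problem behind such complexity results is conventionally written as follows (standard formulation, not quoted from this abstract):

    \min_{x \in \mathbb{R}^p} \; \Phi(x) := f\bigl(x, y^{*}(x)\bigr)
    \quad \text{s.t.} \quad
    y^{*}(x) = \arg\min_{y \in \mathbb{R}^q} g(x, y)

where f and g are the upper- and lower-level objectives and $\epsilon$ is the target accuracy in the bound above.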
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
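The paper's implementation uses Gaussian-process posteriors; as a toy stand-in, the sketch below assumes a finite set of hypothesis functions and noiseless observations, so the mutual information reduces to the expected entropy reduction of the algorithm's output (here, the argmin over a grid). All names are ours.

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

    def infobax_sketch(true_f, grid, hypotheses, T):
        live = list(hypotheses)        # hypotheses still consistent with data
        queries = []
        for _ in range(T):
            outputs = [min(grid, key=h) for h in live]  # algorithm output per h
            def info_gain(x):
                # Expected entropy reduction of the output after observing
                # f(x): bucket live hypotheses by their prediction at x.
                groups = {}
                for h, o in zip(live, outputs):
                    groups.setdefault(round(h(x), 6), []).append(o)
                cond = sum(len(g) / len(live) * entropy(g)
                           for g in groups.values())
                return entropy(outputs) - cond
            x = max(grid, key=info_gain)
            y = true_f(x)
            live = [h for h in live if abs(h(x) - y) < 1e-9] or live
            queries.append(x)
        return min(grid, key=live[0]), queries   # posterior argmin estimate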
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update the hyperparameters.
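A zeroth-order hyper-gradient is a finite-difference gradient estimate averaged over random directions; the sketch below shows that estimator (names are illustrative; val_loss stands in for the train-then-validate pipeline evaluated at the hyperparameter vector lam).

    import numpy as np

    def zo_hypergrad(val_loss, lam, mu=1e-2, n_samples=8, seed=0):
        # Average zeroth-order estimate of d val_loss / d lam using forward
        # differences along random Gaussian directions u.
        rng = np.random.default_rng(seed)
        base = val_loss(lam)
        grad = np.zeros_like(lam)
        for _ in range(n_samples):
            u = rng.standard_normal(lam.shape)
            grad += (val_loss(lam + mu * u) - base) / mu * u
        return grad / n_samples

    # Hyperparameter update: lam = lam - lr * zo_hypergrad(val_loss, lam)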
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
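A hedged structural sketch of tree search with a learned heuristic (memory bounding and the paper's network are omitted; h_model is any callable standing in for the trained estimator):

    import heapq, itertools

    def learned_best_first(start, is_goal, neighbors, cost, h_model):
        counter = itertools.count()      # tie-breaker for heap comparisons
        frontier = [(h_model(start), next(counter), 0.0, start, [start])]
        seen = set()
        while frontier:
            _, _, g, node, path = heapq.heappop(frontier)
            if is_goal(node):
                return path, g           # goal reached with cost g
            if node in seen:
                continue
            seen.add(node)
            for nxt in neighbors(node):
                g2 = g + cost(node, nxt)
                heapq.heappush(frontier,
                               (g2 + h_model(nxt), next(counter), g2, nxt,
                                path + [nxt]))
        return None, float("inf")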
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
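A hedged sketch of the expand-and-shift idea (the inner acquisition step is replaced by plain random search; the growth factor and batch size are our assumptions):

    import numpy as np

    def expanding_search_sketch(f, dim, iters=50, init_width=1.0, growth=1.3,
                                batch=20, seed=0):
        rng = np.random.default_rng(seed)
        center = np.zeros(dim)
        width = init_width
        best_x, best_val = center.copy(), f(center)
        for _ in range(iters):
            xs = center + rng.uniform(-width, width, size=(batch, dim))
            vals = np.array([f(x) for x in xs])
            i = int(np.argmin(vals))
            if vals[i] < best_val:
                best_x, best_val = xs[i].copy(), vals[i]
            center = best_x          # shift the box toward the incumbent
            width *= growth          # expand the search space each iteration
        return best_x, best_val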
arXiv Detail & Related papers (2020-09-05T14:24:40Z)