Searching for a practical evidence of the No Free Lunch theorems
- URL: http://arxiv.org/abs/2109.13738v1
- Date: Sat, 21 Aug 2021 19:28:48 GMT
- Title: Searching for a practical evidence of the No Free Lunch theorems
- Authors: Mihai Oltean
- Abstract summary: According to the No Free Lunch (NFL) theorems all black-box algorithms perform equally well when compared over the entire set of optimization problems.
We will evolve test functions for which a given algorithm A is better than another given algorithm B.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: According to the No Free Lunch (NFL) theorems all black-box algorithms
perform equally well when compared over the entire set of optimization
problems. An important problem related to NFL is finding a test problem for
which a given algorithm is better than another given algorithm. Of high
interest is finding a function for which Random Search is better than another
standard evolutionary algorithm. In this paper, we propose an evolutionary
approach for solving this problem: we will evolve test functions for which a
given algorithm A is better than another given algorithm B. Two ways for
representing the evolved functions are employed: as GP trees and as binary
strings. Several numerical experiments involving NFL-style Evolutionary
Algorithms for function optimization are performed. The results show the
effectiveness of the proposed approach. Several test functions for which Random
Search performs better than all other considered algorithms have been evolved.
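The evolutionary approach described in the abstract can be illustrated with a toy sketch (the bit-string length, budgets, inner algorithms, and mutation scheme below are all assumptions, not the paper's exact setup): a meta-level (1+1) EA searches over function tables — the "binary string"-style representation — and selects for tables on which Random Search beats a (1+1) hill climber.

```python
import itertools
import random

N = 6            # inner problem: bit strings of length N
BUDGET = 15      # evaluations each inner algorithm may spend
TRIALS = 40      # repetitions to average out inner-algorithm randomness

POINTS = list(itertools.product([0, 1], repeat=N))

def random_search(f):
    """Best value Random Search finds within the budget."""
    return max(f[random.choice(POINTS)] for _ in range(BUDGET))

def hill_climber(f):
    """(1+1) hill climber: flip one random bit, keep it if not worse."""
    x = random.choice(POINTS)
    best = f[x]
    for _ in range(BUDGET - 1):
        i = random.randrange(N)
        y = list(x)
        y[i] = 1 - y[i]
        y = tuple(y)
        if f[y] >= best:
            x, best = y, f[y]
    return best

def gap(f):
    """Average advantage of Random Search over the hill climber on f."""
    rs = sum(random_search(f) for _ in range(TRIALS)) / TRIALS
    hc = sum(hill_climber(f) for _ in range(TRIALS)) / TRIALS
    return rs - hc

# Meta-level (1+1) EA over function tables: mutate one table entry and
# keep the mutant whenever the RS-vs-hill-climber gap does not shrink.
random.seed(0)
f = {p: random.random() for p in POINTS}
score = gap(f)
for _ in range(300):
    g = dict(f)
    g[random.choice(POINTS)] = random.random()
    s = gap(g)
    if s >= score:
        f, score = g, s
print(f"evolved gap (Random Search minus hill climber): {score:.3f}")
```

Functions evolved this way tend to become deceptive for the hill climber (local optima that mislead bit-flip improvement), which is exactly the NFL-style effect the paper targets.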
Related papers
- Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
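A minimal sketch of a two-point zero-order projected scheme of the kind this line of work studies (the step size, smoothing radius, and ball constraint below are illustrative assumptions, not the paper's analyzed algorithm):

```python
import math
import random

def zo_projected_gd(f, x, steps=200, mu=1e-4, lr=0.1, radius=1.0):
    """Zero-order projected gradient descent (sketch).

    Estimates a directional derivative from two function values along a
    random unit direction, steps along that direction, then projects
    back onto the ball of the given radius.  mu is the finite-difference
    smoothing radius.
    """
    d = len(x)
    for _ in range(steps):
        u = [random.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]                       # random unit direction
        xp = [xi + mu * ui for xi, ui in zip(x, u)]
        xm = [xi - mu * ui for xi, ui in zip(x, u)]
        g = (f(xp) - f(xm)) / (2 * mu)                  # central difference
        x = [xi - lr * g * ui for xi, ui in zip(x, u)]  # step along u
        r = math.sqrt(sum(xi * xi for xi in x))
        if r > radius:                                  # project onto the ball
            x = [xi * radius / r for xi in x]
    return x

# Usage: minimize a smooth quadratic over the unit ball.
random.seed(1)
sol = zo_projected_gd(lambda v: sum((vi - 0.3) ** 2 for vi in v), [1.0, -1.0])
```

The central difference is exact for quadratics; for general smooth functions it only approximates the directional derivative, which is where the higher-order smoothness assumptions in the paper pay off.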
arXiv Detail & Related papers (2023-06-03T17:05:13Z)
- HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection [75.84584400866254]
We propose a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses.
HARRIS' decisions are based on a forest model whose trees are optimized on a hybrid ranking and regression loss function.
arXiv Detail & Related papers (2022-10-31T14:06:11Z)
- Choosing the Right Algorithm With Hints From Complexity Theory [16.33500498939925]
We show that the Metropolis algorithm is clearly the best of all algorithms considered for reasonable problem sizes.
An artificial algorithm of this type having an $O(n \log n)$ runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can solve the DLB problem in time $O(n \log n)$ with high probability.
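For reference, the Metropolis algorithm discussed above is simple to state. The sketch below runs it on OneMax rather than the paper's DLB benchmark, and the temperature and step count are arbitrary illustrative choices:

```python
import math
import random

def metropolis(f, n, temperature, steps):
    """Metropolis algorithm for pseudo-Boolean maximization (sketch).

    Flip one uniformly random bit; always accept improvements, and
    accept a worsening move of delta < 0 with probability
    exp(delta / temperature).
    """
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(steps):
        i = random.randrange(n)
        x[i] ^= 1
        fy = f(x)
        delta = fy - fx
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            fx = fy          # accept the move
        else:
            x[i] ^= 1        # revert the flip
    return x, fx

# Usage on OneMax (count of ones); a DLB objective would slot in the
# same way as the function argument.
random.seed(2)
best, value = metropolis(sum, 50, temperature=0.5, steps=5000)
```

The ability to accept worsening moves is what lets Metropolis cross the deceptive blocks of DLB that elitist algorithms get stuck on.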
arXiv Detail & Related papers (2021-09-14T11:12:32Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Performance Analysis of Meta-heuristic Algorithms for a Quadratic Assignment Problem [6.555180412600522]
A quadratic assignment problem (QAP) is an optimization problem that belongs to the class of NP-hard ones.
Heuristic and meta-heuristic algorithms are prevalent solution methods for this problem.
This paper presents a comparative study applying different meta-heuristic algorithms to the QAP.
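A basic simulated-annealing 2-swap solver is a representative meta-heuristic for the QAP. This is a generic sketch, not one of the specific algorithms the study compares, and the tiny random instance stands in for the QAPLIB benchmarks typically used:

```python
import math
import random

def qap_cost(flow, dist, perm):
    """QAP objective: sum over facility pairs of flow times the
    distance between their assigned locations."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def simulated_annealing(flow, dist, steps=20000, t0=10.0, cooling=0.9995):
    """Simulated annealing over permutations with 2-swap moves."""
    n = len(flow)
    perm = list(range(n))
    random.shuffle(perm)
    cost = qap_cost(flow, dist, perm)
    best = cost
    t = t0
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]          # swap two assignments
        new = qap_cost(flow, dist, perm)
        if new <= cost or random.random() < math.exp((cost - new) / t):
            cost = new
            best = min(best, cost)
        else:
            perm[i], perm[j] = perm[j], perm[i]      # undo the swap
        t *= cooling                                  # geometric cooling
    return best

# Usage on a tiny random instance.
random.seed(3)
n = 8
flow = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]
dist = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]
result = simulated_annealing(flow, dist)
```

Production solvers compute the 2-swap cost delta incrementally instead of re-evaluating the full objective; the full recomputation here keeps the sketch short.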
arXiv Detail & Related papers (2020-07-29T15:02:07Z)
- Benchmarking Meta-heuristic Optimization [0.0]
Many meta-heuristic algorithms are very efficient when solving nonlinear functions.
A meta-heuristic algorithm is a problem-independent technique that can be applied to a broad range of problems.
arXiv Detail & Related papers (2020-07-27T12:25:31Z)
- First Steps Towards a Runtime Analysis When Starting With a Good Solution [8.34061303235504]
In practical applications it may be possible to guess solutions that are better than random ones.
We show that different algorithms profit to very different degrees from a better initial solution.
This could suggest that evolutionary algorithms better exploiting good initial solutions are still to be found.
arXiv Detail & Related papers (2020-06-22T11:46:42Z)
- Improved Binary Artificial Bee Colony Algorithm [0.0]
The Artificial Bee Colony (ABC) algorithm is an evolutionary optimization algorithm based on swarm intelligence.
We improve the ABC algorithm to solve binary optimization problems and call it the improved binary Artificial Bee Colony (ibinABC).
The proposed method consists of an update mechanism based on fitness values and the processing of a different number of decision variables.
arXiv Detail & Related papers (2020-03-12T17:22:52Z)
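A generic binary ABC skeleton may help place the entry above. The flip rate, the merged employed/onlooker phase, and the abandonment limit below are illustrative assumptions; the paper's ibinABC uses its own fitness-based update mechanism, which this sketch does not reproduce:

```python
import random

def binary_abc(f, n_bits, colony=20, limit=10, cycles=200):
    """Generic binary Artificial Bee Colony skeleton (sketch).

    Each food source is a bit string.  Bees perturb a source by copying
    a few bits from a random neighbour; sources that fail to improve for
    `limit` consecutive trials are abandoned and replaced by a scout.
    """
    def mutate(x, y):
        z = x[:]
        for i in range(n_bits):
            if random.random() < 0.1:   # assumed per-bit copy rate
                z[i] = y[i]             # borrow the neighbour's bit
        return z

    sources = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(colony)]
    fits = [f(s) for s in sources]
    trials = [0] * colony
    for _ in range(cycles):
        for k in range(colony):          # employed/onlooker phase (merged)
            mate = random.randrange(colony)
            cand = mutate(sources[k], sources[mate])
            fc = f(cand)
            if fc > fits[k]:             # greedy selection per source
                sources[k], fits[k], trials[k] = cand, fc, 0
            else:
                trials[k] += 1
            if trials[k] > limit:        # scout phase: abandon the source
                sources[k] = [random.randint(0, 1) for _ in range(n_bits)]
                fits[k], trials[k] = f(sources[k]), 0
    i = max(range(colony), key=fits.__getitem__)
    return sources[i], fits[i]

# Usage: maximize the number of ones (OneMax) over 30 bits.
random.seed(4)
solution, fitness = binary_abc(sum, 30)
```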
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.