Towards Solving Large-scale Expensive Optimization Problems Efficiently
Using Coordinate Descent Algorithm
- URL: http://arxiv.org/abs/2003.03676v3
- Date: Fri, 11 Sep 2020 11:52:50 GMT
- Title: Towards Solving Large-scale Expensive Optimization Problems Efficiently
Using Coordinate Descent Algorithm
- Authors: Shahryar Rahnamayan and Seyed Jalaleddin Mousavirad
- Abstract summary: We propose a modified Coordinate Descent algorithm (MCD) to tackle LSEGO problems with a limited computational budget.
Our proposed algorithm benefits from two leading steps: finding the region of interest and then shrinking the search space by folding it in half at exponential speed.
The proposed algorithm is compared with cooperative co-evolution with delta grouping on 20 benchmark functions with dimension 1000.
- Score: 3.1600708674008393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world problems are large-scale, and metaheuristic
algorithms are a common alternative for solving them; however, they need to
evaluate many candidate solutions before converging, which is not affordable
in practical applications because most such problems are also computationally
expensive. In other words, these problems are not only large-scale but also
computationally expensive, which makes them very difficult to solve. Since
there is no efficient surrogate model to support large-scale expensive global
optimization (LSEGO) problems, algorithms must address LSEGO under a limited
computational budget to be applicable in real-world settings. The Coordinate
Descent (CD) algorithm is an optimization strategy that decomposes an
n-dimensional problem into n one-dimensional sub-problems. To the best of our
knowledge, no significant study has investigated the CD algorithm on benchmark
functions with various dimensions and landscape properties. In this paper, we
propose a modified Coordinate Descent algorithm (MCD) to tackle LSEGO problems
with a limited computational budget. Our proposed algorithm benefits from two
leading steps: finding the region of interest, and then shrinking the search
space by repeatedly folding it in half, so that it contracts at exponential
speed. One of the main advantages of the proposed algorithm is that it is free
of control parameters, which spares it the intricacies of the tuning process.
The proposed algorithm is compared with cooperative co-evolution with delta
grouping on 20 benchmark functions of dimension 1000. We also conducted
experiments on CEC-2017 (D = 10, 30, 50, and 100) to investigate the behavior
of the MCD algorithm in lower dimensions. The results show that MCD is
beneficial not only on large-scale problems but also on low-dimensional
optimization problems.
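The abstract describes MCD only at a high level, so the following is a minimal
Python sketch of the core idea as stated: each coordinate is treated as a
one-dimensional sub-problem, and that coordinate's interval is folded in half
toward the better-performing side, so the search region shrinks at exponential
speed with no control parameters to tune. The probing points and the exact
halving rule here are our assumptions, not the authors' published procedure.

    import numpy as np

    def mcd_sketch(f, lower, upper, budget):
        """Minimize f over the box [lower, upper] with at most `budget`
        evaluations by halving each coordinate's interval per pass.
        NOTE: a sketch of the folding idea, not the authors' exact MCD."""
        lo, hi = np.asarray(lower, float), np.asarray(upper, float)
        x = (lo + hi) / 2.0                 # start at the centre of the box
        fx, evals = f(x), 1
        while evals < budget:
            for i in range(lo.size):        # one 1-D sub-problem per axis
                mid = (lo[i] + hi[i]) / 2.0
                left, right = x.copy(), x.copy()
                left[i] = (lo[i] + mid) / 2.0   # centre of the lower half
                right[i] = (mid + hi[i]) / 2.0  # centre of the upper half
                fl, fr = f(left), f(right)
                evals += 2
                if fl < fr:                 # keep the better half: interval
                    hi[i] = mid             # width halves each pass, so the
                    if fl < fx:             # region shrinks exponentially
                        x, fx = left, fl
                else:
                    lo[i] = mid
                    if fr < fx:
                        x, fx = right, fr
                if evals >= budget:
                    break
        return x, fx

    # Example on a 1000-dimensional sphere function with a tight budget:
    # x, fx = mcd_sketch(lambda v: float(np.sum(v * v)),
    #                    [-100.0] * 1000, [100.0] * 1000, budget=3000)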
Related papers
- GreedyML: A Parallel Algorithm for Maximizing Submodular Functions [2.9998889086656586]
We describe a parallel approximation algorithm for maximizing monotone submodular functions on distributed memory multiprocessors.
Our work is motivated by the need to solve submodular optimization problems on massive data sets, for practical applications in areas such as data summarization, machine learning, and graph sparsification. (A serial sketch of the greedy baseline that such parallel methods build on appears after this list.)
arXiv Detail & Related papers (2024-03-15T14:19:09Z)
- Equitable and Fair Performance Evaluation of Whale Optimization Algorithm [4.0814527055582746]
It is essential that all algorithms are evaluated exhaustively, equitably, and intelligently.
However, evaluating the effectiveness of optimization algorithms equitably and fairly is not an easy process, for various reasons.
arXiv Detail & Related papers (2023-09-04T06:32:02Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- A Simple Evolutionary Algorithm for Multi-modal Multi-objective Optimization [0.0]
We introduce a steady-state evolutionary algorithm for solving multi-modal, multi-objective optimization problems (MMOPs).
We report its performance on 21 MMOPs from various test suites that are widely used for benchmarking using a low computational budget of 1000 function evaluations.
arXiv Detail & Related papers (2022-01-18T03:31:11Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Instance Specific Approximations for Submodular Maximization [45.91235224228292]
We look for methods to benchmark the performance of algorithms against optimal solutions on real-world instances.
A major question is how to measure the performance of an algorithm in comparison to an optimal solution on instances we encounter in practice.
Our main contribution is not a new algorithm for submodular minimization but an analytical method that measures how close a given algorithm is to optimal on a particular submodular instance.
arXiv Detail & Related papers (2021-02-23T19:39:32Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic [142.1492359556374]
Bilevel optimization is a class of problems which exhibit a two-level structure.
We propose a two-timescale approximation (TTSA) algorithm for tackling such a bilevel problem.
We show that a two-timescale natural actor-critic policy optimization algorithm can be viewed as a special case of our TTSA framework.
arXiv Detail & Related papers (2020-07-10T05:20:02Z)
- Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization [21.555331273873175]
We consider the task of decentralized minimization of the sum of smooth, strongly convex functions stored across the nodes of a network.
We propose two new algorithms for this decentralized optimization problem and equip them with complexity guarantees.
arXiv Detail & Related papers (2020-06-21T11:23:20Z)
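As referenced in the GreedyML entry above, here is a minimal serial sketch of
the classic greedy algorithm for maximizing a monotone submodular function
under a cardinality constraint, which attains a (1 - 1/e) approximation
guarantee (Nemhauser et al., 1978). This is the standard baseline that such
parallel methods distribute, not the parallel algorithm from that paper; the
coverage objective below is our illustrative assumption.

    def greedy_submodular(f, ground_set, k):
        """Greedily build a set S with |S| <= k, adding the element with
        the largest marginal gain f(S + e) - f(S) at each step."""
        S = set()
        for _ in range(k):
            best, best_gain = None, 0.0
            for e in ground_set - S:
                gain = f(S | {e}) - f(S)
                if gain > best_gain:
                    best, best_gain = e, gain
            if best is None:      # no remaining element improves f
                break
            S.add(best)
        return S

    # Illustrative use: set coverage, a standard monotone submodular function.
    cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
    f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
    print(greedy_submodular(f, set(cover), k=2))   # e.g. {1, 3}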
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.