Particle Swarm Optimization with Velocity Restriction and Evolutionary
Parameters Selection for Scheduling Problem
- URL: http://arxiv.org/abs/2006.10935v1
- Date: Fri, 19 Jun 2020 02:28:57 GMT
- Title: Particle Swarm Optimization with Velocity Restriction and Evolutionary
Parameters Selection for Scheduling Problem
- Authors: Pavel Matrenin, Viktor Sekaev
- Abstract summary: The article presents a study of the Particle Swarm Optimization method for the scheduling problem.
To improve the method's performance, a restriction of the particles' velocity and an evolutionary meta-optimization were implemented.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The article presents a study of the Particle Swarm Optimization method
for the scheduling problem. To improve the method's performance, a restriction of
the particles' velocity and an evolutionary meta-optimization were implemented. The
proposed approach uses a Genetic Algorithm to select the parameters of Particle
Swarm Optimization. Experiments were carried out on test instances of the job-shop
scheduling problem. This research demonstrates the applicability of the approach
and shows the importance of tuning the behavioral parameters of swarm intelligence
methods to achieve high performance.
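The abstract names two concrete mechanisms: clamping each particle's velocity, and letting a Genetic Algorithm pick PSO's behavioral parameters (inertia w, cognitive weight c1, social weight c2). The paper does not ship code, so the Python sketch below only illustrates those two mechanisms under assumed settings; `fitness`, the bounds, the parameter ranges, and all constants are placeholders rather than the authors' configuration.

```python
import random

def pso(fitness, dim, bounds, w, c1, c2, v_max_frac=0.2,
        n_particles=30, n_iter=100):
    """Minimal PSO (minimization) with a velocity restriction: each velocity
    component is clamped to a fraction of the search range."""
    lo, hi = bounds
    v_max = v_max_frac * (hi - lo)
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-v_max, min(v_max, vel[i][d]))  # the restriction
                pos[i][d] = max(lo, min(hi, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def ga_tune_pso(fitness, dim, bounds, pop_size=10, gens=5):
    """Evolutionary meta-optimization: a GA searches over PSO's (w, c1, c2).
    The ranges below are common in the PSO literature, not the paper's."""
    def random_params():
        return [random.uniform(0.1, 1.0),   # inertia w
                random.uniform(0.5, 2.5),   # cognitive coefficient c1
                random.uniform(0.5, 2.5)]   # social coefficient c2
    def score(params):
        return pso(fitness, dim, bounds, *params)[1]
    population = [random_params() for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=score)
        elite = population[:pop_size // 2]                       # survivors
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            child[random.randrange(3)] += random.gauss(0.0, 0.1)  # mutation
            children.append(child)
        population = elite + children
    return min(population, key=score)
```

Clamping the velocity to a fraction of the search range is the usual way a velocity restriction is realized; the outer GA treats each (w, c1, c2) triple as a chromosome scored by the quality of the solution the inner PSO finds.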
Related papers
- An investigation on the use of Large Language Models for hyperparameter tuning in Evolutionary Algorithms [4.0998481751764]
We employ two open-source Large Language Models (LLMs) to analyze the optimization logs online.
We study our approach in the context of step-size adaptation for (1+1)-ES.
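For context on what is being adapted: in a (1+1)-ES, a single parent produces one Gaussian offspring per iteration, and the mutation step size sigma is updated online. The classic one-fifth success rule below is a hand-crafted baseline, shown only to make the target of the LLM-driven adaptation concrete; it is not the paper's method.

```python
import random

def one_plus_one_es(f, x, sigma=1.0, n_iter=200):
    """(1+1)-ES with the classic one-fifth success rule: grow sigma after a
    successful mutation, shrink it after a failure (minimization)."""
    fx = f(x)
    for _ in range(n_iter):
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]  # one offspring
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.5           # success: widen the search
        else:
            sigma *= 1.5 ** -0.25  # failure: shrink, aiming at ~1/5 success rate
    return x, fx
```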
arXiv Detail & Related papers (2024-08-05T13:20:41Z)
- Optimization of Discrete Parameters Using the Adaptive Gradient Method and Directed Evolution [49.1574468325115]
The search for an optimal solution is carried out by a population of individuals.
Unadapted individuals die, and optimal ones interbreed, resulting in directed evolutionary dynamics.
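Read literally, that is the skeleton of a generic evolutionary loop. The sketch below is a minimal, assumed reading of it (selection, crossover, mutation); the adaptive gradient component of the paper is deliberately omitted, and the binary encoding and `fitness` are illustrative choices.

```python
import random

def directed_evolution(fitness, dim, pop_size=20, gens=50):
    """Minimal reading of the summary: unfit individuals die, fit ones
    interbreed. Binary individuals; higher fitness is better."""
    pop = [[random.randint(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]              # the unadapted die
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)       # the fit interbreed
            cut = random.randrange(1, dim)
            child = a[:cut] + b[cut:]                # one-point crossover
            if random.random() < 0.1:
                child[random.randrange(dim)] ^= 1    # rare bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```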
arXiv Detail & Related papers (2024-01-12T15:45:56Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
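ZO-signGD itself is compact enough to sketch: the gradient is estimated from function evaluations alone, and only its sign drives the update. The continuous toy-objective interface below is an assumption; the paper applies the idea to molecular objectives from the Guacamol suite.

```python
import random

def zo_sign_gd(f, x, mu=1e-2, lr=0.05, n_queries=20, n_iter=100):
    """Zeroth-order sign-based gradient descent: estimate the gradient from
    function queries only, then step along the sign of each coordinate."""
    dim = len(x)
    for _ in range(n_iter):
        fx = f(x)
        grad = [0.0] * dim
        for _ in range(n_queries):
            u = [random.gauss(0.0, 1.0) for _ in range(dim)]
            # two-point finite-difference slope along the random direction u
            slope = (f([xi + mu * ui for xi, ui in zip(x, u)]) - fx) / mu
            for d in range(dim):
                grad[d] += slope * u[d] / n_queries
        # only the sign of each coordinate is used, which is robust to noise
        x = [xi - lr * ((grad[d] > 0) - (grad[d] < 0))
             for d, xi in enumerate(x)]
    return x
```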
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Socio-cognitive Optimization of Time-delay Control Problems using Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are general-purpose optimization algorithms intended for difficult problems that classic approaches cannot solve.
In this paper we aim at constructing a novel socio-cognitive metaheuristic based on castes, and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
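The generalization alluded to is presumably the decision-theoretic (DeGroot-style) entropy, i.e. the minimal expected loss over actions; with the log loss it reduces to Shannon entropy. The paper's specific loss families are not reproduced here, so treat this as the presumed starting point rather than its exact definition:

```latex
% Decision-theoretic entropy: the minimal expected loss over actions a.
% With log loss \ell(\theta, a) = -\log a(\theta), the infimum is attained
% at a = p, and H_\ell(p) recovers the Shannon entropy of p.
H_\ell(p) = \inf_{a \in \mathcal{A}} \, \mathbb{E}_{\theta \sim p}\!\left[\ell(\theta, a)\right]
```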
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Learning adaptive differential evolution algorithm from optimization experiences by policy gradient [24.2122434523704]
This paper proposes a novel adaptive parameter control approach based on learning from the optimization experiences over a set of problems.
A reinforcement learning algorithm, policy gradient, is applied to learn an agent that can adaptively provide the control parameters of the proposed differential evolution.
The proposed algorithm performs competitively against nine well-known evolutionary algorithms on the CEC'13 and CEC'17 test suites.
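The mechanism being learned is easiest to see against one plain DE generation: the scale factor F and crossover rate CR, normally fixed by hand, are instead emitted by the trained agent each generation. The sketch below is an assumed illustration; `policy` and `state_features` are stand-ins, not the paper's network or state encoding.

```python
import random

def de_generation(pop, fitness, F, CR):
    """One DE/rand/1/bin generation (minimization). In the paper's scheme,
    F and CR are not fixed: a policy trained by policy gradient emits them."""
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        others = [p for j, p in enumerate(pop) if j != i]
        a, b, c = random.sample(others, 3)
        j_rand = random.randrange(dim)                      # forced crossover index
        trial = [a[d] + F * (b[d] - c[d])                   # mutation ...
                 if (random.random() < CR or d == j_rand)   # ... gated by crossover
                 else x[d]
                 for d in range(dim)]
        new_pop.append(trial if fitness(trial) <= fitness(x) else x)  # greedy selection
    return new_pop

# Hypothetical control loop ('policy' and 'state_features' are stand-ins):
# for gen in range(max_gen):
#     F, CR = policy(state_features(pop, gen))
#     pop = de_generation(pop, fitness, F, CR)
```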
arXiv Detail & Related papers (2021-02-06T12:01:20Z)
- Particle Swarm Optimization: Fundamental Study and its Application to Optimization and to Jetty Scheduling Problems [0.0]
The advantages of evolutionary algorithms with respect to traditional methods have been widely discussed in the literature.
While particle swarms share these advantages, they outperform evolutionary algorithms in that they require lower computational cost and are easier to implement.
This paper does not intend to study their tuning; general-purpose settings are taken from previous studies, and virtually the same algorithm is used to optimize a variety of notably different problems.
arXiv Detail & Related papers (2021-01-25T02:06:30Z)
- Hyper-parameter estimation method with particle swarm optimization [0.8883733362171032]
The PSO method cannot be directly used for the problem of hyper-parameter estimation.
The proposed method uses the swarm method to optimize the performance of the acquisition function.
Results on several problems are improved.
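The idea, as far as the summary states it, is to let the swarm do the inner maximization of the acquisition function inside a Bayesian-optimization loop. A hedged sketch follows; `gp.predict` is an assumed surrogate interface (not a specific library's API), and `pso` refers to the routine sketched near the top of this page.

```python
import math

def expected_improvement(gp, best_y):
    """EI acquisition for minimization; gp.predict(x) -> (mean, std) is an
    assumed surrogate interface, not a specific library's API."""
    def acq(x):
        mu, sigma = gp.predict(x)
        if sigma <= 0.0:
            return 0.0
        z = (best_y - mu) / sigma
        phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        return -((best_y - mu) * Phi + sigma * phi)  # negated: the swarm minimizes
    return acq

# Hypothetical inner step of the hybrid: the swarm, not gradient ascent,
# proposes the next point to evaluate ('pso' is the routine sketched above).
# next_x, _ = pso(expected_improvement(gp, best_y), dim=5, bounds=(0.0, 1.0),
#                 w=0.7, c1=1.5, c2=1.5)
```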
arXiv Detail & Related papers (2020-11-24T07:51:51Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
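One stage of that procedure can be sketched directly from the description: candidates are functions of the form origin plus a weighted combination of GP draws, and the weights are searched within the stage. In the sketch below, plain random search stands in for the Bayesian optimisation the paper actually runs, and `objective`, `origin`, and `gp_draws` are assumed inputs.

```python
import random

def subspace_stage(objective, origin, gp_draws, n_trials=200):
    """One stage of (a sketch of) sequential subspace search: candidates live
    in the span of GP draws around the current origin. Random search over the
    coordinates stands in for the BO the paper runs on each subspace."""
    best_w, best_val = None, float("inf")
    for _ in range(n_trials):
        w = [random.uniform(-1.0, 1.0) for _ in gp_draws]
        candidate = lambda t, w=w: origin(t) + sum(
            wi * g(t) for wi, g in zip(w, gp_draws))
        val = objective(candidate)
        if val < best_val:
            best_w, best_val = w, val
    # the best function found becomes the origin of the next stage's subspace
    best_fn = lambda t, w=best_w: origin(t) + sum(
        wi * g(t) for wi, g in zip(w, gp_draws))
    return best_fn, best_val
```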
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
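For reference, the sub-problems come from the standard augmented Lagrangian of an equality-constrained problem, min_x f(x) subject to Ax = b; the formula below is the textbook construction, not the paper's decentralized or accelerated variant:

```latex
% Standard augmented Lagrangian with multiplier \lambda and penalty \rho > 0;
% the method alternates inexact minimization over x with a dual ascent
% update on \lambda.
\mathcal{L}_\rho(x, \lambda) = f(x) + \langle \lambda,\, Ax - b \rangle
    + \frac{\rho}{2} \lVert Ax - b \rVert^2
```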
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
- On Hyper-parameter Tuning for Stochastic Optimization Algorithms [28.88646928299302]
This paper proposes the first-ever algorithmic framework for tuning the hyper-parameters of optimization algorithms based on reinforcement learning.
The proposed framework can be used as a standard tool for hyper-parameter tuning in stochastic optimization algorithms.
arXiv Detail & Related papers (2020-03-04T12:29:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.