An Improved LSHADE-RSP Algorithm with the Cauchy Perturbation:
iLSHADE-RSP
- URL: http://arxiv.org/abs/2006.02591v1
- Date: Thu, 4 Jun 2020 00:03:34 GMT
- Title: An Improved LSHADE-RSP Algorithm with the Cauchy Perturbation:
iLSHADE-RSP
- Authors: Tae Jong Choi and Chang Wook Ahn
- Abstract summary: The technique can increase the exploration by adopting the long-tailed property of the Cauchy distribution.
Compared to the previous approaches, the proposed approach perturbs a target vector instead of a mutant vector based on a jumping rate.
A set of 30 different and difficult optimization problems is used to evaluate the optimization performance of the improved LSHADE-RSP.
- Score: 9.777183117452235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new method for improving the optimization performance of a state-of-the-art
differential evolution (DE) variant is proposed in this paper. The technique
can increase the exploration by adopting the long-tailed property of the Cauchy
distribution, which helps the algorithm to generate a trial vector with great
diversity. Compared to the previous approaches, the proposed approach perturbs
a target vector instead of a mutant vector based on a jumping rate. We applied
the proposed approach to LSHADE-RSP, which ranked second in the CEC 2018
competition on single objective real-valued optimization. A set of 30 different
and difficult optimization problems is used to evaluate the optimization
performance of the improved LSHADE-RSP. Our experimental results verify that
the improved LSHADE-RSP significantly outperformed not only its predecessor
LSHADE-RSP but also several cutting-edge DE variants in terms of convergence
speed and solution accuracy.
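The core idea described in the abstract, perturbing the target vector with long-tailed Cauchy noise under a jumping rate, can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the parameter names (`jumping_rate`, `scale`) and the per-vector application of the perturbation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def cauchy_perturb_target(target, jumping_rate=0.1, scale=0.1):
    """With probability `jumping_rate`, add long-tailed Cauchy noise
    to the target vector before mutation (a sketch; parameter names
    and the per-vector application are assumptions)."""
    if rng.random() < jumping_rate:
        # Heavy tails occasionally produce large jumps, boosting
        # exploration compared with Gaussian perturbation.
        return target + scale * rng.standard_cauchy(target.shape)
    return target

x = rng.uniform(-100.0, 100.0, size=10)
y = cauchy_perturb_target(x, jumping_rate=1.0)  # force the perturbation
```

Because the Cauchy distribution has no finite variance, even a small `scale` occasionally yields a distant trial point, which is the source of the added diversity the abstract refers to.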
Related papers
- A Multi-operator Ensemble LSHADE with Restart and Local Search Mechanisms for Single-objective Optimization [0.0]
mLSHADE-RL is an enhanced version of LSHADE-cnEpSin, one of the winners of the CEC 2017 competition in single-objective optimization.
Three mutation strategies, DE/current-to-pbest-weight/1 with archive, DE/current-to-pbest/1 without archive, and DE/current-to-ordpbest-weight/1, are integrated into the original LSHADE-cnEpSin.
mLSHADE-RL is tested on 30 dimensions in the CEC 2024 competition on single objective bound constrained optimization.
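Of the strategies named above, DE/current-to-pbest/1 with an optional archive is the best known; a minimal numpy sketch is below. Parameter names are illustrative, and the population is assumed sorted best-first so the top rows are the "pbest" candidates.

```python
import numpy as np

rng = np.random.default_rng(0)

def current_to_pbest_1(pop, i, F=0.5, p=0.1, archive=None):
    """DE/current-to-pbest/1 mutation (sketch):
    v_i = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2),
    where x_pbest is a random member of the top 100p% of a
    best-first-sorted population, and x_r2 may be drawn from the
    union of the population and an external archive."""
    n = len(pop)
    top = max(1, int(round(p * n)))
    pbest = pop[rng.integers(top)]  # random individual from the top-p slice
    r1 = rng.integers(n)
    union = pop if archive is None or len(archive) == 0 \
        else np.vstack([pop, archive])
    r2 = rng.integers(len(union))
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - union[r2])

pop = rng.uniform(-5.0, 5.0, size=(20, 3))  # assumed sorted best-first
v = current_to_pbest_1(pop, i=0)
```

The weighted variants in the entry above (DE/current-to-pbest-weight/1, DE/current-to-ordpbest-weight/1) modify how F weights the difference terms, but follow the same overall structure.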
arXiv Detail & Related papers (2024-09-24T11:49:08Z) - Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - Advancements in Optimization: Adaptive Differential Evolution with
Diversification Strategy [0.0]
The study employs single-objective optimization in a two-dimensional space and runs ADEDS on each of the benchmark functions with multiple iterations.
ADEDS consistently outperforms standard DE for a variety of optimization challenges, including functions with numerous local optima, plate-shaped, valley-shaped, stretched-shaped, and noisy functions.
arXiv Detail & Related papers (2023-10-02T10:05:41Z) - Accelerating the Evolutionary Algorithms by Gaussian Process Regression
with $\epsilon$-greedy acquisition function [2.7716102039510564]
We propose a novel method to estimate the elite individual to accelerate the convergence of optimization.
Our proposal has a broad prospect to estimate the elite individual and accelerate the convergence of optimization.
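The combination of a Gaussian-process surrogate with an $\epsilon$-greedy acquisition can be sketched as follows. This is a generic illustration under assumed names (`eps_greedy_select`, `gp_mean`), not the paper's implementation; the GP posterior mean is computed from scratch with an RBF kernel to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel between row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_mean(X, y, Xq, noise=1e-6):
    # GP posterior mean at query points Xq given observations (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xq, X) @ np.linalg.solve(K, y)

def eps_greedy_select(X, y, candidates, eps=0.1):
    """Epsilon-greedy acquisition (sketch): with probability eps pick
    a random candidate (exploration); otherwise pick the candidate
    with the lowest GP posterior mean (exploitation), as an estimate
    of the elite individual."""
    if rng.random() < eps:
        return candidates[rng.integers(len(candidates))]
    return candidates[np.argmin(gp_mean(X, y, candidates))]

X = rng.uniform(-2.0, 2.0, size=(8, 2))
y = (X ** 2).sum(1)  # evaluations of an illustrative sphere function
cands = rng.uniform(-2.0, 2.0, size=(50, 2))
elite = eps_greedy_select(X, y, cands)
```

Evaluating only the selected candidate on the true objective, rather than the whole offspring population, is what saves expensive function evaluations in surrogate-assisted evolutionary algorithms.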
arXiv Detail & Related papers (2022-10-13T07:56:47Z) - Reinforcement learning based parameters adaption method for particle
swarm optimization [0.0]
In this article, a reinforcement learning-based online parameter adaptation method (RLAM) is developed to enhance the convergence of PSO.
Experiments on 28 CEC 2013 benchmark functions are carried out, comparing RLAM with other online adaptation methods and PSO variants.
The reported results show that the proposed RLAM is efficient and effective and that the proposed RLPSO outperforms several state-of-the-art PSO variants.
arXiv Detail & Related papers (2022-06-02T02:16:15Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with an improved iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Optimistic Reinforcement Learning by Forward Kullback-Leibler Divergence
Optimization [1.7970523486905976]
This paper addresses a new interpretation of reinforcement learning (RL) as reverse Kullback-Leibler (KL) divergence optimization.
It derives a new optimization method using forward KL divergence.
In a realistic robotic simulation, the proposed method with moderate optimism outperformed one of the state-of-the-art RL methods.
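The distinction between the reverse KL divergence of the standard RL-as-inference view and the forward KL direction used by this paper can be illustrated on discrete distributions; the distributions below are purely illustrative.

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence D_KL(p || q) = sum_i p_i * log(p_i / q_i)."""
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.7, 0.2, 0.1])  # illustrative target (optimal-policy) distribution
q = np.array([0.5, 0.3, 0.2])  # illustrative current-policy distribution

reverse = kl(q, p)  # reverse KL: the direction in standard RL-as-inference
forward = kl(p, q)  # forward KL: the direction this paper optimizes instead
```

The two directions generally differ: reverse KL is mode-seeking (it heavily penalizes the policy placing mass where the target has little), while forward KL is mass-covering, which is the source of the "optimism" the entry above mentions.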
arXiv Detail & Related papers (2021-05-27T08:24:51Z) - Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement learning based ZO algorithm (ZO-RL) with learning the sampling policy for generating the perturbations in ZO optimization instead of using random sampling.
Our results show that our ZO-RL algorithm can effectively reduce the variances of ZO gradient by learning a sampling policy, and converge faster than existing ZO algorithms in different scenarios.
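For context, the baseline that ZO-RL improves upon is the standard zeroth-order gradient estimator with randomly sampled perturbation directions; a minimal sketch is below (parameter names `mu` and `q` are illustrative, and this shows the random-sampling baseline, not the learned policy).

```python
import numpy as np

rng = np.random.default_rng(2)

def zo_gradient(f, x, mu=1e-3, q=10):
    """Zeroth-order gradient estimate by forward finite differences
    along q random Gaussian directions; its variance is what ZO-RL
    aims to reduce by learning the sampling distribution."""
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(q):
        u = rng.standard_normal(x.shape)
        # (f(x + mu*u) - f(x)) / mu approximates the directional
        # derivative along u; scaling by u recovers a gradient estimate.
        g += (f(x + mu * u) - fx) / mu * u
    return g / q

f = lambda x: (x ** 2).sum()   # sphere function, true gradient 2x
x = np.array([1.0, -2.0])
g = zo_gradient(f, x, q=200)   # noisy estimate of [2.0, -4.0]
```

Each extra sampled direction reduces the estimator's variance at the cost of one more function query, which is why a smarter sampling policy translates directly into faster convergence.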
arXiv Detail & Related papers (2021-04-09T14:50:59Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel sample-efficient stochastic gradient estimator named stocBiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm
for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z) - Distributionally Robust Bayesian Optimization [121.71766171427433]
We present a novel distributionally robust Bayesian optimization algorithm (DRBO) for zeroth-order, noisy optimization.
Our algorithm provably obtains sub-linear robust regret in various settings.
We demonstrate the robust performance of our method on both synthetic and real-world benchmarks.
arXiv Detail & Related papers (2020-02-20T22:04:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.