Differential Evolution with Reversible Linear Transformations
- URL: http://arxiv.org/abs/2002.02869v1
- Date: Fri, 7 Feb 2020 16:05:54 GMT
- Title: Differential Evolution with Reversible Linear Transformations
- Authors: Jakub M. Tomczak and Ewelina Weglarz-Tomczak and Agoston E. Eiben
- Abstract summary: We propose to generate new candidate solutions by applying a reversible linear transformation to a triplet of solutions from the population.
In other words, the population is enlarged with newly generated individuals without evaluating their fitness.
- Score: 8.873449722727026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential evolution (DE) is a well-known type of evolutionary algorithm
(EA). Like other EA variants, it can suffer from small populations and can
lose diversity too quickly. This paper presents a new approach to mitigate
this issue: we propose to generate new candidate solutions by applying a
reversible linear transformation to a triplet of solutions from the
population. In other words, the population is enlarged with newly generated
individuals without evaluating their fitness. We assess our method on three
problems: (i) benchmark function optimization, (ii) discovering parameter
values of the gene repressilator system, and (iii) learning neural networks. The
empirical results indicate that the proposed approach outperforms vanilla DE
and a version of DE that applies differential mutation three times on all
testbeds.
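The core idea in the abstract, enlarging the population by applying a reversible linear transformation to a triplet of solutions, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact operator: the transformation below chains three differential-mutation steps so that the combined linear map on the triplet is invertible, and the function name and the scale factor `F = 0.5` are assumptions.

```python
import numpy as np

def reversible_triplet_transform(x1, x2, x3, F=0.5):
    """Hypothetical sketch of a reversible linear transformation on a triplet.

    Three chained differential-mutation steps; since each step is affine in
    the current triplet, the combined map is linear and invertible, so new
    individuals can be generated without any fitness evaluations.
    """
    y1 = x1 + F * (x2 - x3)
    y2 = x2 + F * (x3 - y1)
    y3 = x3 + F * (y1 - y2)
    return y1, y2, y3
```

Because the map is invertible, the original triplet can be recovered from the new one by solving the three equations in reverse order; the new individuals only need fitness evaluations once they enter selection.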
Related papers
- ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference [69.24516189971929]
In this paper, we introduce a new type of solution in the longitudinal setting: a closed-form ordinary differential equation (ODE)
While we still rely on continuous optimization to learn an ODE, the resulting inference machine is no longer a neural network.
arXiv Detail & Related papers (2024-03-16T02:07:45Z)
- Benchmarking Differential Evolution on a Quantum Simulator [0.0]
Differential Evolution (DE) can be used to compute the minima of functions such as the Rastrigin and Rosenbrock functions.
This work studies the result of applying the DE method to these functions with candidate individuals generated on classical Turing-modeled computation.
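As a concrete illustration of the benchmark setting described in that summary, vanilla DE (the classic DE/rand/1/bin scheme) minimizing the Rastrigin function can be sketched as below; the function names and parameter values here are illustrative choices, not details taken from that paper.

```python
import numpy as np

def rastrigin(x):
    # Rastrigin test function: highly multimodal, global minimum 0 at x = 0.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def de_minimize(f, dim=2, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    # Vanilla DE/rand/1/bin: differential mutation + binomial crossover
    # + greedy selection.
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
    fit = np.array([f(ind) for ind in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct individuals, all different from i.
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = a + F * (b - c)          # differential mutation
            cross = rng.random(dim) < CR      # binomial crossover mask
            cross[rng.integers(dim)] = True   # guarantee one mutant gene
            trial = np.where(cross, mutant, pop[i])
            trial_fit = f(trial)
            if trial_fit <= fit[i]:           # greedy selection
                pop[i], fit[i] = trial, trial_fit
    best = fit.argmin()
    return pop[best], fit[best]
```

With these (assumed) settings, a short run on the 2-dimensional Rastrigin function typically reaches a value close to the global minimum of 0.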
arXiv Detail & Related papers (2023-11-06T14:27:00Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Vector-Valued Least-Squares Regression under Output Regularity Assumptions [73.99064151691597]
We propose and analyse a reduced-rank method for solving least-squares regression problems with infinite dimensional output.
We derive learning bounds for our method and study in which settings statistical performance improves in comparison to the full-rank method.
arXiv Detail & Related papers (2022-11-16T15:07:00Z)
- Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods [73.35353358543507]
Stochastic Gradient Descent-Ascent (SGDA) is one of the most prominent algorithms for solving min-max optimization and variational inequality problems (VIPs).
In this paper, we propose a unified convergence analysis that covers a large variety of descent-ascent methods.
We develop several new variants of SGDA, such as a new variance-reduced method (L-SVRGDA), new distributed methods with compression (QSGDA, DIANA-SGDA, VR-DIANA-SGDA), and a new method with coordinate randomization (SEGA-SGDA).
arXiv Detail & Related papers (2022-02-15T09:17:39Z)
- Detecting Communities in Complex Networks using an Adaptive Genetic Algorithm and node similarity-based encoding [4.0714739042536845]
We propose a new node similarity-based encoding method, named MST-based, to represent a network partition as an individual.
Using the proposed method, we combine similarity-based and modularity-optimization-based approaches to find the communities of complex networks.
arXiv Detail & Related papers (2022-01-24T09:06:40Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Evolving Evolutionary Algorithms using Linear Genetic Programming [0.0]
The model is based on the Linear Genetic Programming (LGP) technique.
Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem, and the Quadratic Assignment Problem are evolved by using the considered model.
arXiv Detail & Related papers (2021-08-21T19:15:07Z)
- A Multi-objective Evolutionary Algorithm for EEG Inverse Problem [0.0]
We propose a multi-objective approach for the EEG Inverse Problem.
Due to the characteristics of the problem, this approach uses evolutionary strategies to solve it.
The result is a Multi-objective Evolutionary Algorithm based on Anatomical Restrictions (MOEAAR) to estimate distributed solutions.
arXiv Detail & Related papers (2021-07-21T19:37:27Z)
- Devolutionary genetic algorithms with application to the minimum labeling Steiner tree problem [0.0]
This paper characterizes and discusses devolutionary genetic algorithms and evaluates their performances in solving the minimum labeling Steiner tree (MLST) problem.
We define devolutionary algorithms as the process of reaching a feasible solution by devolving a population of super-optimal infeasible solutions over time.
We show how classical evolutionary concepts, such as crossover, mutation, and fitness, can be adapted to reach an optimal or close-to-optimal solution.
arXiv Detail & Related papers (2020-04-18T13:27:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.