Evolutionary Variational Optimization of Generative Models
- URL: http://arxiv.org/abs/2012.12294v2
- Date: Fri, 16 Apr 2021 19:44:49 GMT
- Title: Evolutionary Variational Optimization of Generative Models
- Authors: Jakob Drefs, Enrico Guiraud, Jörg Lücke
- Abstract summary: We combine two popular optimization approaches to derive learning algorithms for generative models: variational optimization and evolutionary algorithms.
We show that evolutionary algorithms can effectively and efficiently optimize the variational bound.
In the category of "zero-shot" learning, we observed the evolutionary variational algorithm to significantly improve the state-of-the-art in many benchmark settings.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We combine two popular optimization approaches to derive learning algorithms
for generative models: variational optimization and evolutionary algorithms.
The combination is realized for generative models with discrete latents by
using truncated posteriors as the family of variational distributions. The
variational parameters of truncated posteriors are sets of latent states. By
interpreting these states as genomes of individuals and by using the
variational lower bound to define a fitness, we can apply evolutionary
algorithms to realize the variational loop. The used variational distributions
are very flexible and we show that evolutionary algorithms can effectively and
efficiently optimize the variational bound. Furthermore, the variational loop
is generally applicable ("black box") with no analytical derivations required.
To show general applicability, we apply the approach to three generative models
(we use noisy-OR Bayes Nets, Binary Sparse Coding, and Spike-and-Slab Sparse
Coding). To demonstrate effectiveness and efficiency of the novel variational
approach, we use the standard competitive benchmarks of image denoising and
inpainting. The benchmarks allow quantitative comparisons to a wide range of
methods including probabilistic approaches, deep deterministic and generative
networks, and non-local image processing methods. In the category of
"zero-shot" learning (when only the corrupted image is used for training), we
observed the evolutionary variational algorithm to significantly improve the
state-of-the-art in many benchmark settings. For one well-known inpainting
benchmark, we also observed state-of-the-art performance across all categories
of algorithms although we only train on the corrupted image. In general, our
investigations highlight the importance of research on optimization methods for
generative models to achieve performance improvements.
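The core loop sketched in the abstract can be illustrated in a few lines: the variational parameters of a truncated posterior are a set of latent states, these states act as genomes, the log-joint of each state serves as its fitness, and selection keeps the best states, which monotonically tightens the truncated lower bound. The following is a minimal sketch under illustrative assumptions (a binary sparse-coding-style toy likelihood, bit-flip mutation, and invented function and parameter names), not the authors' implementation:

```python
import numpy as np

def log_joint(y, s, W, sigma2, pi):
    """log p(y, s | Theta) for a toy binary sparse coding model:
    y ~ N(W @ s, sigma2 * I), s_h ~ Bernoulli(pi).  (Illustrative choice.)"""
    mean = W @ s
    ll = -0.5 * np.sum((y - mean) ** 2) / sigma2 - 0.5 * len(y) * np.log(2 * np.pi * sigma2)
    lp = np.sum(s * np.log(pi) + (1 - s) * np.log(1 - pi))
    return ll + lp

def evolve_states(y, K, W, sigma2, pi, n_children=2, n_generations=5, rng=None):
    """One variational E-step realized as an evolutionary algorithm:
    the variational parameters are the set K of latent states ("genomes"),
    fitness is the log-joint, and only the best |K| states survive."""
    rng = np.random.default_rng() if rng is None else rng
    K = [tuple(s) for s in K]
    for _ in range(n_generations):
        children = []
        for s in K:                      # each parent spawns bit-flip mutants
            for _ in range(n_children):
                c = np.array(s, dtype=np.int8)
                c[rng.integers(len(c))] ^= 1
                children.append(tuple(c))
        pool = list(dict.fromkeys(K + children))   # deduplicate parents + children
        pool.sort(key=lambda s: log_joint(y, np.array(s), W, sigma2, pi), reverse=True)
        K = pool[: len(K)]                         # truncation acts as selection
    return [np.array(s) for s in K]

def lower_bound(y, K, W, sigma2, pi):
    """Per-datapoint contribution of a truncated posterior to the variational
    bound: log sum_{s in K} p(y, s | Theta), computed with log-sum-exp."""
    logs = np.array([log_joint(y, np.array(s), W, sigma2, pi) for s in K])
    m = logs.max()
    return m + np.log(np.exp(logs - m).sum())

# toy usage: H=8 latents, D=16 observed dimensions
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8)); y = rng.normal(size=16)
K0 = [rng.integers(0, 2, size=8) for _ in range(4)]
K = evolve_states(y, K0, W, sigma2=1.0, pi=0.2, rng=rng)
print(lower_bound(y, K, W, 1.0, 0.2))
```

A full run would alternate this evolutionary E-step with model-specific M-step updates of the parameters (W, sigma2, pi in this toy example), which are omitted here.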
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Model Uncertainty in Evolutionary Optimization and Bayesian Optimization: A Comparative Analysis [5.6787965501364335]
Black-box optimization problems are common in many real-world applications.
These problems require optimization through input-output interactions without access to internal workings.
Two widely used gradient-free optimization techniques, evolutionary algorithms and Bayesian optimization, are employed to address such challenges.
This paper aims to elucidate the similarities and differences in the utilization of model uncertainty between these two methods.
arXiv Detail & Related papers (2024-03-21T13:59:19Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
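For context, a minimal generic sketch of zeroth-order sign-based gradient descent on a black-box objective; the objective, step size, and sampling parameters below are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def zo_sign_gd(f, x0, steps=200, mu=1e-2, lr=1e-2, n_dirs=10, rng=None):
    """Zeroth-order sign gradient descent (generic sketch): the gradient of the
    black-box objective f is estimated from random finite differences, and only
    its sign is used for the update."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        grad_est = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.normal(size=x.shape)
            grad_est += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        # minimization: step against the sign of the averaged estimate
        x -= lr * np.sign(grad_est / n_dirs)
    return x

# toy usage on a quadratic with minimum at 3
x_opt = zo_sign_gd(lambda x: np.sum((x - 3.0) ** 2), x0=np.zeros(5))
```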
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Distributed Evolution Strategies for Black-box Stochastic Optimization [42.90600124972943]
This work concerns the evolutionary approaches to distributed black-box optimization.
Each worker can individually solve an approximation of the problem with an evolutionary algorithm.
We propose two alternative simulation schemes which significantly improve robustness.
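As a generic illustration of this distributed evolutionary, black-box setting (the distributed part is only simulated by a loop over "workers"; all names and parameters are assumptions, not the cited paper's schemes):

```python
import numpy as np

def evolution_strategies(f, x0, workers=4, pop_per_worker=8, sigma=0.1,
                         lr=0.05, iters=100, rng=None):
    """Generic (simulated) distributed evolution strategy: each "worker"
    evaluates the black-box objective f at Gaussian perturbations of the
    current point, and the coordinator aggregates the estimated update."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        grads = []
        for _ in range(workers):                 # stands in for parallel workers
            eps = rng.normal(size=(pop_per_worker, x.size))
            rewards = np.array([-f(x + sigma * e) for e in eps])   # maximize -f
            rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
            grads.append(rewards @ eps / (pop_per_worker * sigma))
        # ascent on the expected reward, i.e. descent on f
        x += lr * np.mean(grads, axis=0)
    return x
```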
arXiv Detail & Related papers (2022-04-09T11:18:41Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Polygonal Unadjusted Langevin Algorithms: Creating stable and efficient adaptive algorithms for neural networks [0.0]
We present a new class of Langevin-based algorithms which overcomes many of the known shortcomings of popular adaptive optimizers.
In particular, we provide a nonasymptotic analysis and full theoretical guarantees for the convergence properties of an algorithm of this novel class, which we named TH$\varepsilon$O POULA (or, simply, TheoPouLa).
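For orientation, a sketch of the general family of tamed unadjusted Langevin iterations that such algorithms build on; this is not the TheoPouLa update itself, and the taming function, step size, and inverse temperature below are illustrative assumptions:

```python
import numpy as np

def tamed_langevin(grad_f, x0, steps=1000, lam=1e-2, beta=1e4, rng=None):
    """Generic tamed unadjusted Langevin iteration: the gradient is "tamed"
    so that superlinearly growing gradients cannot make the iterates explode,
    and Gaussian noise scaled by the inverse temperature beta is added."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x)
        tamed = g / (1.0 + lam * np.linalg.norm(g))      # taming step
        x = x - lam * tamed + np.sqrt(2.0 * lam / beta) * rng.normal(size=x.shape)
    return x

# toy usage: minimize f(x) = sum(x**4), whose gradient is 4*x**3
x = tamed_langevin(lambda x: 4 * x ** 3, x0=np.ones(3) * 2.0)
```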
arXiv Detail & Related papers (2021-05-28T15:58:48Z)
- Optimizing the Parameters of A Physical Exercise Dose-Response Model: An Algorithmic Comparison [1.0152838128195467]
The purpose of this research was to compare the robustness and performance of a local and global optimization algorithm when given the task of fitting the parameters of a common non-linear dose-response model utilized in the field of exercise physiology.
The results of our comparison over 1000 experimental runs demonstrate the superior performance of the evolutionary computation based algorithm, which consistently achieved a stronger model fit and holdout performance than the local search algorithm.
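This kind of local-versus-evolutionary comparison can be sketched with SciPy; the dose-response curve below is a generic Hill/Emax form chosen for simplicity, not the specific exercise-physiology model of the cited paper, and all parameters are illustrative:

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def response(dose, e0, emax, ed50, h):
    """Generic sigmoidal dose-response curve (Hill/Emax form)."""
    return e0 + emax * dose ** h / (ed50 ** h + dose ** h)

def sse(params, dose, y):
    """Sum of squared errors between data and model prediction."""
    return np.sum((y - response(dose, *params)) ** 2)

# synthetic data from known parameters plus noise
rng = np.random.default_rng(0)
dose = np.linspace(0.1, 10, 30)
y = response(dose, 1.0, 5.0, 3.0, 2.0) + rng.normal(scale=0.2, size=dose.size)

bounds = [(0, 5), (0, 10), (0.1, 10), (0.5, 5)]
local = minimize(sse, x0=[0.5, 1.0, 1.0, 1.0], args=(dose, y))          # local search
global_ = differential_evolution(sse, bounds, args=(dose, y), seed=0)   # evolutionary
print(local.fun, global_.fun)
```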
arXiv Detail & Related papers (2020-12-16T22:06:35Z)
- AdaLead: A simple and robust adaptive greedy search algorithm for sequence design [55.41644538483948]
We develop an easy-to-implement, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state of the art approaches in a variety of biologically motivated sequence design challenges.
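A minimal generic sketch of greedy evolutionary search over discrete sequences in this spirit (not the exact AdaLead procedure; the alphabet, objective, and parameters are illustrative assumptions):

```python
import numpy as np

ALPHABET = "ACGT"  # illustrative DNA-like alphabet for sequence design

def greedy_evolutionary_search(fitness, seq_len=20, pop_size=32, n_rounds=20,
                               mut_rate=0.05, keep_frac=0.25, rng=None):
    """Greedy evolutionary search: keep the fittest sequences, produce
    point-mutated children from them, and repeat for a fixed budget."""
    rng = np.random.default_rng() if rng is None else rng
    pop = ["".join(rng.choice(list(ALPHABET), seq_len)) for _ in range(pop_size)]
    for _ in range(n_rounds):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: max(1, int(keep_frac * pop_size))]
        children = []
        while len(children) < pop_size:
            s = list(parents[rng.integers(len(parents))])
            for i in range(seq_len):                      # point mutations
                if rng.random() < mut_rate:
                    s[i] = ALPHABET[rng.integers(len(ALPHABET))]
            children.append("".join(s))
        pop = parents + children[: pop_size - len(parents)]
    return max(pop, key=fitness)

# toy objective: maximize the number of 'G' characters
best = greedy_evolutionary_search(lambda s: s.count("G"))
```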
arXiv Detail & Related papers (2020-10-05T16:40:38Z)
- Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
- Stochastic batch size for adaptive regularization in deep network optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization applicable to machine learning problems in deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)