Many-Objective Estimation of Distribution Optimization Algorithm Based
on WGAN-GP
- URL: http://arxiv.org/abs/2003.08295v1
- Date: Mon, 16 Mar 2020 03:14:59 GMT
- Title: Many-Objective Estimation of Distribution Optimization Algorithm Based
on WGAN-GP
- Authors: Zhenyu Liang, Yunfan Li, Zhongwei Wan
- Abstract summary: EDA can better solve multi-objective optimization problems (MOPs). We generate the new population with a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP).
- Score: 1.2461503242570644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimation of distribution algorithms (EDAs) are stochastic optimization
algorithms. An EDA uses statistical learning to build a probability model that
describes the distribution of solutions at the population level, and then samples
that model to generate a new population. EDAs can solve multi-objective
optimization problems (MOPs) well, but their performance degrades on
many-objective optimization problems (MaOPs), which have more than three
objectives. The Reference Vector Guided Evolutionary Algorithm (RVEA), built on
the EDA framework, handles MaOPs better. In this paper we adopt the RVEA
framework, but generate the new population with a Wasserstein Generative
Adversarial Network with Gradient Penalty (WGAN-GP) instead of crossover and
mutation. WGAN-GP offers fast convergence, good stability, and high sample
quality: given a data set whose samples follow a common distribution, it learns a
mapping from the standard normal distribution to that data distribution, so it
can quickly generate populations with high diversity and good convergence. To
measure performance, RM-MEDA, MOPSO, and NSGA-II are selected for comparison
experiments on the DTLZ and LSMOP test suites with 3, 5, 8, 10, and 15
objectives.
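
The paper provides no code, but the role WGAN-GP plays here, mapping standard normal noise to the distribution of the current parent population while a gradient penalty keeps the critic approximately 1-Lipschitz, can be sketched. The PyTorch snippet below is a hypothetical illustration, not the authors' implementation; the network sizes (`LATENT_DIM`, `N_VARS`, the 64-unit hidden layers) are assumptions, and `LAMBDA_GP = 10` is the weight suggested in the original WGAN-GP paper.

```python
# Hypothetical WGAN-GP sketch for generating offspring from parent solutions.
# Shapes and hyperparameters are assumptions, not taken from the paper.
import torch
import torch.nn as nn

LATENT_DIM, N_VARS, LAMBDA_GP = 32, 30, 10.0  # assumed latent size, decision variables, penalty weight

# Generator: maps standard normal noise to decision vectors in [0, 1]^N_VARS.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_VARS), nn.Sigmoid())

# Critic: scores how "parent-like" a candidate solution looks.
critic = nn.Sequential(
    nn.Linear(N_VARS, 64), nn.ReLU(),
    nn.Linear(64, 1))

def gradient_penalty(real, fake):
    """Penalize deviation of the critic's gradient norm from 1 at points
    interpolated between real (parent) and generated samples."""
    eps = torch.rand(real.size(0), 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)
    return ((grad.norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(parents):
    """One critic step: Wasserstein loss plus gradient penalty.
    `parents` holds the currently selected population (the "real" data)."""
    noise = torch.randn(parents.size(0), LATENT_DIM)
    fake = generator(noise).detach()  # freeze the generator for this step
    wasserstein = critic(fake).mean() - critic(parents).mean()
    return wasserstein + LAMBDA_GP * gradient_penalty(parents, fake)

def generator_loss(batch_size):
    """One generator step: make generated solutions score like parents."""
    fake = generator(torch.randn(batch_size, LATENT_DIM))
    return -critic(fake).mean()
```

In a generational loop along the lines the abstract describes, one would retrain the critic and generator on the currently selected solutions each generation, sample `generator(torch.randn(pop_size, LATENT_DIM))` as offspring, and let RVEA's reference-vector selection decide survival; that outer loop is assumed here, not shown.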
Related papers
- Domain Invariant Learning for Gaussian Processes and Bayesian
Exploration [39.83530605880014]
We propose a domain invariant learning algorithm for Gaussian processes (DIL-GP) with a min-max optimization on the likelihood.
Numerical experiments demonstrate the superiority of DIL-GP for predictions on several synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-18T16:13:34Z) - Bivariate Estimation-of-Distribution Algorithms Can Find an Exponential
Number of Optima [12.009357100208353]
We propose the test function EqualBlocksOneMax (EBOM) to support the study of how optimization algorithms handle large sets of optima.
We show that EBOM behaves very similarly to a theoretically ideal model for EBOM, which samples each of the exponentially many optima with the same maximal probability.
arXiv Detail & Related papers (2023-10-06T06:32:07Z) - Generalizing Gaussian Smoothing for Random Search [23.381986209234164]
Gaussian smoothing (GS) is a derivative-free optimization algorithm that estimates the gradient of an objective using perturbations of the current parameters.
We propose choosing the perturbation distribution so as to minimize the mean squared error (MSE) of the gradient estimate, yielding estimators with provably smaller MSE.
arXiv Detail & Related papers (2022-11-27T04:42:05Z) - Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic
Image Classification [61.656149405657246]
Domain adaptation is effective in image classification tasks where obtaining sufficient labeled data is challenging.
We propose a novel method, named SELDA, for stacking ensemble learning via extending three domain adaptation methods.
The experimental results using Age-Related Eye Disease Study (AREDS) benchmark ophthalmic dataset demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-09-27T14:19:00Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Towards Optimization and Model Selection for Domain Generalization: A
Mixup-guided Solution [43.292274574847234]
We propose Mixup guided optimization and selection techniques for domain generalization.
For optimization, we utilize an out-of-distribution dataset that can guide the preference direction.
For model selection, we generate a validation dataset with a closer distance to the target distribution.
arXiv Detail & Related papers (2022-09-01T02:18:00Z) - Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel Gaussian process (GP) regression that incorporates subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z) - Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z) - Large Scale Many-Objective Optimization Driven by Distributional
Adversarial Networks [1.2461503242570644]
We propose a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring.
The proposed algorithm is tested on 9 benchmark problems from the large-scale multi-objective optimization problem (LSMOP) test suite.
arXiv Detail & Related papers (2020-03-16T04:14:15Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.