Many-Objective Estimation of Distribution Optimization Algorithm Based on WGAN-GP
- URL: http://arxiv.org/abs/2003.08295v1
- Date: Mon, 16 Mar 2020 03:14:59 GMT
- Title: Many-Objective Estimation of Distribution Optimization Algorithm Based on WGAN-GP
- Authors: Zhenyu Liang, Yunfan Li, Zhongwei Wan
- Abstract summary: EDAs can solve multi-objective optimization problems (MOPs) well.
We generate the new population with a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP).
- Score: 1.2461503242570644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimation of distribution algorithms (EDAs) are stochastic optimization
algorithms. An EDA builds a probability model that describes the distribution of
solutions at the population level through statistical learning, and then samples
that model to generate a new population. EDAs can solve multi-objective
optimization problems (MOPs) well. However, their performance degrades on
many-objective optimization problems (MaOPs), which contain more than three
objectives. The Reference Vector Guided Evolutionary Algorithm (RVEA), which
follows the EDA framework, handles MaOPs better. In this paper, we adopt the RVEA
framework but generate the new population with a Wasserstein Generative
Adversarial Network with Gradient Penalty (WGAN-GP) instead of crossover and
mutation. WGAN-GP offers fast convergence, good stability, and high sample
quality. Given a data set whose samples follow a common distribution, WGAN-GP
learns a mapping from a standard normal distribution to that data distribution,
so it can quickly generate populations with high diversity and good convergence.
To measure performance, RM-MEDA, MOPSO, and NSGA-II are selected for comparison
experiments on the DTLZ and LSMOP test suites with 3, 5, 8, 10, and 15
objectives.
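
The offspring-generation step described in the abstract can be illustrated with a short sketch: train a WGAN-GP on the current (normalized) parent population, then sample the generator in place of crossover and mutation. The following is a minimal PyTorch sketch, not the authors' implementation; the network sizes, learning rates, `critic_steps`, and `gp_weight` values are illustrative assumptions, and the surrounding RVEA reference-vector selection is omitted.

```python
# Minimal sketch (assumption): WGAN-GP offspring generation for an EDA-style loop.
# The paper embeds this inside the RVEA framework; here we only show how a
# generator could be trained on the parent population and then sampled.
import torch
import torch.nn as nn

LATENT_DIM, N_VARS = 16, 30  # illustrative sizes, not taken from the paper

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_VARS), nn.Sigmoid())  # assumes decision variables scaled to [0, 1]

critic = nn.Sequential(
    nn.Linear(N_VARS, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1))  # Wasserstein critic: unbounded scalar score, no sigmoid

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-4, betas=(0.0, 0.9))


def gradient_penalty(real, fake, gp_weight=10.0):
    """Penalize the critic's gradient norm on points interpolated between real and fake."""
    eps = torch.rand(real.size(0), 1)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(mixed)
    grad, = torch.autograd.grad(score.sum(), mixed, create_graph=True)
    return gp_weight * ((grad.norm(2, dim=1) - 1) ** 2).mean()


def train_wgan_gp(parents, epochs=100, critic_steps=5):
    """parents: (pop_size, N_VARS) tensor of current solutions, normalized to [0, 1]."""
    for _ in range(epochs):
        for _ in range(critic_steps):
            z = torch.randn(parents.size(0), LATENT_DIM)
            fake = generator(z).detach()
            c_loss = (critic(fake).mean() - critic(parents).mean()
                      + gradient_penalty(parents, fake))
            c_opt.zero_grad()
            c_loss.backward()
            c_opt.step()
        z = torch.randn(parents.size(0), LATENT_DIM)
        g_loss = -critic(generator(z)).mean()
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()


def sample_offspring(n):
    """Replace crossover/mutation: draw offspring from the learned distribution."""
    with torch.no_grad():
        return generator(torch.randn(n, LATENT_DIM))
```

In a full RVEA-style loop, `sample_offspring` would supply the candidate offspring, and the reference-vector-guided selection would then choose the next parent population from the union of parents and offspring.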
Related papers
- Stability and Generalization for Distributed SGDA [70.97400503482353]
We propose the stability-based generalization analytical framework for Distributed-SGDA.
We conduct a comprehensive analysis of stability error, generalization gap, and population risk across different metrics.
Our theoretical results reveal the trade-off between the generalization gap and optimization error.
arXiv Detail & Related papers (2024-11-14T11:16:32Z)
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Domain Invariant Learning for Gaussian Processes and Bayesian Exploration [39.83530605880014]
We propose a domain invariant learning algorithm for Gaussian processes (DIL-GP) with a min-max optimization on the likelihood.
Numerical experiments demonstrate the superiority of DIL-GP for predictions on several synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-18T16:13:34Z)
- Bivariate Estimation-of-Distribution Algorithms Can Find an Exponential Number of Optima [12.009357100208353]
We propose the test function EqualBlocksOneMax (EBOM) to support the study of how optimization algorithms handle large sets of optima.
We show that EBOM behaves very similarly to a theoretically ideal model for EBOM, which samples each of the exponentially many optima with the same maximal probability.
arXiv Detail & Related papers (2023-10-06T06:32:07Z)
- Generalizing Gaussian Smoothing for Random Search [23.381986209234164]
Gaussian smoothing (GS) is a derivative-free optimization algorithm that estimates the gradient of an objective using perturbations of the current benchmarks.
We propose to choose a distribution for perturbations that minimizes the error of such distributions with provably smaller MSE.
arXiv Detail & Related papers (2022-11-27T04:42:05Z)
- Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution [43.292274574847234]
We propose Mixup guided optimization and selection techniques for domain generalization.
For optimization, we utilize an out-of-distribution dataset that can guide the preference direction.
For model selection, we generate a validation dataset with a closer distance to the target distribution.
arXiv Detail & Related papers (2022-09-01T02:18:00Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- Large Scale Many-Objective Optimization Driven by Distributional Adversarial Networks [1.2461503242570644]
We propose a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring.
The proposed algorithm is tested on 9 benchmark problems from the large-scale multi-objective problems (LSMOP) suite.
arXiv Detail & Related papers (2020-03-16T04:14:15Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.