Large Scale Many-Objective Optimization Driven by Distributional
Adversarial Networks
- URL: http://arxiv.org/abs/2003.07013v1
- Date: Mon, 16 Mar 2020 04:14:15 GMT
- Title: Large Scale Many-Objective Optimization Driven by Distributional
Adversarial Networks
- Authors: Zhenyu Liang, Yunfan Li, Zhongwei Wan
- Abstract summary: We propose a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring.
The proposed algorithm is tested on 9 benchmark problems from the large-scale multi-objective problem (LSMOP) suite.
- Score: 1.2461503242570644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimation of distribution algorithms (EDAs) are a class of evolutionary
algorithms (EAs) for stochastic optimization that build a probability model
describing the distribution of promising solutions and sample that model to
create offspring, iteratively refining both the model and the population. The
Reference Vector Guided Evolutionary Algorithm (RVEA), built on the EDA
framework, performs well on many-objective optimization problems (MaOPs).
Moreover, using generative adversarial networks to produce offspring in place
of crossover and mutation is a state-of-the-art idea in EAs. In this paper, we
propose a novel algorithm based on the RVEA [1] framework that uses
Distributional Adversarial Networks (DAN) [2] to generate new offspring. DAN
employs a distributional framework for adversarial training of neural
networks, operating on whole samples rather than single points; this leads to
more stable training and markedly better mode coverage than single-point-sample
methods. DAN can therefore quickly generate offspring that converge well toward
the distribution of the current solutions. In addition, we adopt the two-stage
position-update strategy of Large-Scale Multi-objective Optimization based on a
Competitive Swarm Optimizer (LMOCSO) [3], which significantly increases search
efficiency when seeking optimal solutions in huge decision spaces. The proposed
algorithm is tested on 9 benchmark problems from the large-scale
multi-objective problem (LSMOP) suite, and its performance is compared with
state-of-the-art EAs such as RM-MEDA [4], MO-CMA [10], and NSGA-II.
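As a rough illustration of the pipeline the abstract describes, here is a minimal Python sketch. The generator stands in for DAN (the training loss below is a simple moment-matching surrogate, not the paper's distributional adversarial objective), and `two_stage_update` mirrors the LMOCSO-style velocity/position update. All names are illustrative, not the authors' code, and decision variables are assumed normalized to [0, 1].

```python
import numpy as np
import torch
import torch.nn as nn

def train_offspring_generator(elites, latent_dim=16, epochs=200, lr=1e-3):
    """Fit a small generator to the distribution of elite solutions.
    Stand-in for DAN: the paper trains adversarially on whole samples
    (sets of points); a moment-matching loss keeps this sketch short."""
    n_var = elites.shape[1]
    gen = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                        nn.Linear(64, n_var), nn.Sigmoid())
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    target = torch.tensor(elites, dtype=torch.float32)
    for _ in range(epochs):
        fake = gen(torch.randn(len(elites), latent_dim))
        # Match first and second moments of the elite distribution.
        loss = ((fake.mean(0) - target.mean(0)) ** 2).sum() \
             + ((fake.std(0) - target.std(0)) ** 2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return gen

def sample_offspring(gen, n, latent_dim=16):
    """Draw new candidate solutions from the learned distribution,
    replacing crossover and mutation."""
    with torch.no_grad():
        return gen(torch.randn(n, latent_dim)).numpy()

def two_stage_update(x_loser, v_loser, x_winner):
    """LMOCSO-style two-stage update (sketch): the loser's velocity is
    pulled toward its paired winner, then its position moves by the new
    velocity plus an acceleration term (new minus old velocity)."""
    r0 = np.random.rand(*x_loser.shape)
    r1 = np.random.rand(*x_loser.shape)
    v_new = r0 * v_loser + r1 * (x_winner - x_loser)
    x_new = x_loser + v_new + r0 * (v_new - v_loser)
    return np.clip(x_new, 0.0, 1.0), v_new  # assumes [0, 1] variables
```

In the full algorithm, reference-vector-guided selection (RVEA) would pick the elites and the environmental survivors; that machinery is omitted here.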
Related papers
- MARS: Unleashing the Power of Variance Reduction for Training Large Models [56.47014540413659]
Adaptive gradient algorithms like Adam, AdamW, and their variants have been central to this type of training.
We propose a framework that reconciles preconditioned gradient optimization methods with variance reduction via a scaled momentum technique.
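The "scaled momentum" idea builds on stochastic recursive (STORM-style) variance reduction; the following is a minimal sketch of that corrected momentum, as my paraphrase rather than the MARS update itself:

```python
def vr_momentum(grad, x, x_prev, m_prev, a=0.1):
    """STORM-style variance-reduced momentum (sketch): the running
    momentum is corrected by the gradient difference evaluated on the
    same minibatch, shrinking the variance of the update direction.
    MARS plugs a scaled term of this kind into preconditioned
    optimizers such as Adam."""
    g = grad(x)            # gradient at the current point
    g_prev = grad(x_prev)  # gradient at the previous point, same batch
    return g + (1.0 - a) * (m_prev - g_prev)
```

The parameter step itself is then the usual one, e.g. `x_next = x - lr * m` for plain SGD, or the momentum fed into a preconditioned rule.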
arXiv Detail & Related papers (2024-11-15T18:57:39Z)
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning the underlying distribution of solutions.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
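Both diffusion entries above rest on the same mechanic: a model trained to denoise can be sampled by iterating the standard DDPM reverse step, turning pure noise into candidate solutions. A generic sketch follows; the noise-prediction network `eps_model` and the schedule arrays are placeholders, not either paper's architecture:

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_model, alphas, alpha_bars, sigmas):
    """One standard DDPM reverse (denoising) step: subtract the
    predicted noise, then re-inject a smaller amount, so repeated
    steps carry noise toward a sample from the learned distribution."""
    eps = eps_model(x_t, t)  # predicted noise at step t
    mean = (x_t - (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t]) * eps) \
           / np.sqrt(alphas[t])
    z = np.random.randn(*x_t.shape) if t > 0 else 0.0
    return mean + sigmas[t] * z
```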
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Combining Kernelized Autoencoding and Centroid Prediction for Dynamic Multi-objective Optimization [3.431120541553662]
This paper proposes a unified paradigm, which combines kernelized autoencoding evolutionary search and centroid-based prediction.
The proposed method is compared with five state-of-the-art algorithms on a number of complex benchmark problems.
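The centroid-prediction half of that paradigm is simple to state: when the environment changes, translate the population by the shift between successive centroids to seed the next search. A minimal sketch (the kernelized autoencoding component is not reproduced here):

```python
import numpy as np

def centroid_prediction(pop_prev, pop_curr):
    """Centroid-based prediction (sketch): estimate how the Pareto set
    moved from the shift between successive population centroids, and
    translate the current solutions accordingly."""
    shift = pop_curr.mean(axis=0) - pop_prev.mean(axis=0)
    return pop_curr + shift  # predicted initial population
```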
arXiv Detail & Related papers (2023-12-02T00:24:22Z)
- Learning to Solve Routing Problems via Distributionally Robust Optimization [14.506553345693536]
Recent deep models for solving routing problems assume a single distribution of nodes for training, which severely impairs their cross-distribution generalization ability.
We exploit group distributionally robust optimization (group DRO) to tackle this issue, where we jointly optimize the weights for different groups of distributions and the parameters for the deep model in an interleaved manner during training.
We also design a module based on a convolutional neural network, which allows the deep model to learn more informative latent patterns among the nodes.
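The interleaved scheme is, in spirit, the standard group-DRO update: exponentiated-gradient ascent on the group weights, then descent on the weight-averaged loss. A minimal sketch with illustrative names and step sizes:

```python
import numpy as np

def group_dro_step(theta, groups, loss_fn, grad_fn, q,
                   eta_q=0.1, eta=0.01):
    """One interleaved group-DRO step (sketch): up-weight the groups
    with high loss (exponentiated-gradient ascent on the simplex),
    then take a gradient step on the q-weighted loss."""
    losses = np.array([loss_fn(theta, g) for g in groups])
    q = q * np.exp(eta_q * losses)  # raise weights of hard groups
    q = q / q.sum()                 # project back onto the simplex
    g = sum(qi * grad_fn(theta, grp) for qi, grp in zip(q, groups))
    return theta - eta * g, q
```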
arXiv Detail & Related papers (2022-02-15T08:06:44Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching for an input that maximizes a black-box objective function, given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
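To see why adversarially optimized inputs arise, consider the naive baseline the paper improves on: plain gradient ascent on the learned proxy. A sketch with a hypothetical `proxy_grad` oracle:

```python
import numpy as np

def naive_proxy_search(x0, proxy_grad, steps=200, lr=0.05):
    """Naive offline model-based search (sketch): ascend the proxy's
    gradient. Far from the data the proxy is unreliable, so the search
    drifts toward inputs that fool the proxy rather than maximize the
    true objective -- the failure mode robust adaptation targets."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + lr * proxy_grad(x)  # ascends the proxy, not the true f
    return x
```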
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds while still enjoying theoretical convergence guarantees.
Experiments on several benchmark datasets show the effectiveness of our method and confirm our theory.
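The communication savings follow a familiar local-update pattern: each machine takes several local steps between synchronizations, and models are averaged once per round. A generic sketch, not the paper's exact algorithm:

```python
import numpy as np

def local_update_round(models, grad_fns, lr=0.1, local_steps=16):
    """One communication round (sketch): every worker runs several
    local gradient steps, then all workers average their models once,
    so communication scales with rounds, not with total steps."""
    for i in range(len(models)):
        for _ in range(local_steps):
            models[i] = models[i] - lr * grad_fns[i](models[i])
    avg = sum(models) / len(models)  # single all-reduce per round
    return [avg.copy() for _ in models]
```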
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Many-Objective Estimation of Distribution Optimization Algorithm Based on WGAN-GP [1.2461503242570644]
EDA can better solve multi-objective optimization problems (MOPs).
We generate the new population with a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP); a sketch of the gradient-penalty term appears after this entry.
arXiv Detail & Related papers (2020-03-16T03:14:59Z)
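For the WGAN-GP entry, the gradient-penalty term itself is standard: the critic is penalized whenever its gradient norm, evaluated at points interpolated between real and generated samples, deviates from 1. A PyTorch-style sketch, where `critic` is a placeholder module:

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """Standard WGAN-GP penalty: lam * E[(||grad critic(x_hat)|| - 1)^2]
    at random interpolates x_hat between real and generated batches."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(x_hat).sum(), x_hat,
                                create_graph=True)
    return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()
```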