Large Scale Many-Objective Optimization Driven by Distributional
Adversarial Networks
- URL: http://arxiv.org/abs/2003.07013v1
- Date: Mon, 16 Mar 2020 04:14:15 GMT
- Title: Large Scale Many-Objective Optimization Driven by Distributional
Adversarial Networks
- Authors: Zhenyu Liang, Yunfan Li, Zhongwei Wan
- Abstract summary: We propose a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring.
The proposed algorithm is tested on 9 benchmark problems from the large-scale multi-objective problem (LSMOP) suite.
- Score: 1.2461503242570644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimation of distribution algorithms (EDAs) are a class of stochastic
optimization methods that build a probability model describing the distribution
of promising solutions and randomly sample that model to create offspring,
updating both the model and the population. The Reference Vector Guided
Evolutionary Algorithm (RVEA), built on the EDA framework, performs well on
many-objective optimization problems (MaOPs). In addition, using generative
adversarial networks to produce offspring solutions, in place of crossover and
mutation, is a state-of-the-art idea in EAs. In this paper, we propose a novel
algorithm based on the RVEA [1] framework that uses Distributional Adversarial
Networks (DAN) [2] to generate new offspring. DAN introduces a distributional
framework for adversarial training of neural networks that operates on samples
of genuine data rather than single points; this leads to more stable training
and markedly better mode coverage than single-point-sample methods. DAN can
therefore quickly generate offspring with high convergence that follow the same
distribution as the data. In addition, we adopt the two-stage position-update
strategy of Large-Scale Multi-Objective Optimization Based on a Competitive
Swarm Optimizer (LMOCSO) [3] to significantly increase search efficiency when
looking for optimal solutions in huge decision spaces. The proposed algorithm
is tested on 9 benchmark problems from the large-scale multi-objective problem
(LSMOP) suite. To measure performance, we compare our algorithm with several
state-of-the-art EAs, e.g., RM-MEDA [4], MO-CMA [10], and NSGA-II.
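The EDA-style offspring-generation loop described in the abstract can be sketched as follows; a simple Gaussian model stands in here for the DAN generator, and the function name, shapes, and constants are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def eda_offspring(parents, n_offspring, rng):
    """Fit a simple probability model to the parent population and
    sample new offspring from it. A per-variable Gaussian stands in
    for the DAN generator used in the paper."""
    mu = parents.mean(axis=0)              # model: mean of each decision variable
    sigma = parents.std(axis=0) + 1e-12    # model: spread of each decision variable
    return rng.normal(mu, sigma, size=(n_offspring, parents.shape[1]))

rng = np.random.default_rng(0)
parents = rng.random((50, 10))             # 50 parents, 10 decision variables
children = eda_offspring(parents, 100, rng)
```

In the full algorithm, the sampled offspring would then be filtered by the reference-vector-guided selection of RVEA.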
Related papers
- Combining Kernelized Autoencoding and Centroid Prediction for Dynamic
Multi-objective Optimization [3.431120541553662]
This paper proposes a unified paradigm, which combines kernelized autoencoding evolutionary search and centroid-based prediction.
The proposed method is compared with five state-of-the-art algorithms on a number of complex benchmark problems.
arXiv Detail & Related papers (2023-12-02T00:24:22Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and AML.
This paper proposes algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Bidirectional Looking with A Novel Double Exponential Moving Average to
Adaptive and Non-adaptive Momentum Optimizers [109.52244418498974]
We propose a novel Admeta (A Double exponential Moving averagE Adaptive and non-adaptive momentum) framework.
We provide two implementations, AdmetaR and AdmetaS, the former based on RAdam and the latter based on SGDM.
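The double exponential moving average at the core of such a framework can be sketched as below; the function name and decay factor are illustrative assumptions, not the paper's exact formulation:

```python
def double_ema(values, beta=0.9):
    """Double exponential moving average: smooth the sequence once,
    then smooth the smoothed sequence again with the same decay beta."""
    def ema(seq):
        out, m = [], seq[0]
        for x in seq:
            m = beta * m + (1 - beta) * x  # standard exponential smoothing step
            out.append(m)
        return out
    return ema(ema(values))
```

Applying the smoothing twice lags a raw EMA slightly but suppresses noise more strongly, which is the "bidirectional looking" intuition the entry alludes to.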
arXiv Detail & Related papers (2023-07-02T18:16:06Z) - Exploiting Temporal Structures of Cyclostationary Signals for
Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z) - Learning to Solve Routing Problems via Distributionally Robust
Optimization [14.506553345693536]
Recent deep models for solving routing problems assume a single distribution of nodes for training, which severely impairs their cross-distribution generalization ability.
We exploit group distributionally robust optimization (group DRO) to tackle this issue, where we jointly optimize the weights for different groups of distributions and the parameters for the deep model in an interleaved manner during training.
We also design a module based on a convolutional neural network, which allows the deep model to learn more informative latent patterns among the nodes.
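One common way to realize the interleaved group-weight update in group DRO is an exponentiated-gradient step on the group weights; this minimal sketch assumes that form (the function name and step size are illustrative, not from the paper):

```python
import math

def update_group_weights(weights, group_losses, step=0.1):
    """Exponentiated-gradient step for group DRO: groups with higher
    loss receive exponentially more weight, then the weights are
    renormalized to sum to one."""
    w = [wi * math.exp(step * li) for wi, li in zip(weights, group_losses)]
    total = sum(w)
    return [wi / total for wi in w]
```

During training this update alternates with an ordinary gradient step on the model parameters, weighted by the current group weights.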
arXiv Detail & Related papers (2022-02-15T08:06:44Z) - RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z) - Multi-resource allocation for federated settings: A non-homogeneous
Markov chain model [2.552459629685159]
In a federated setting, agents coordinate with a central agent or a server to solve an optimization problem in which agents do not share their information with each other.
We describe how the basic additive-increase multiplicative-decrease (AIMD) algorithm can be modified in a straightforward manner to solve a class of optimization problems for federated settings for a single shared resource with no inter-agent communication.
We extend the single-resource algorithm to multiple heterogeneous shared resources that emerge in smart cities, sharing economy, and many other applications.
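The basic AIMD rule referenced above can be sketched in a few lines; the function name and constants are illustrative assumptions:

```python
def aimd_step(share, capacity_event, add=1.0, mult=0.5):
    """One AIMD update: grow the resource share additively until the
    server signals a capacity event, then back off multiplicatively."""
    return share * mult if capacity_event else share + add
```

Agents running this rule independently, with no inter-agent communication, converge on average to a fair division of the shared resource, which is why AIMD adapts naturally to the federated setting.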
arXiv Detail & Related papers (2021-04-26T19:10:00Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - An Online Prediction Approach Based on Incremental Support Vector
Machine for Dynamic Multiobjective Optimization [19.336520152294213]
We propose a novel prediction algorithm based on an incremental support vector machine (ISVM).
We treat the solving of dynamic multiobjective optimization problems (DMOPs) as an online learning process.
The proposed algorithm can effectively tackle dynamic multiobjective optimization problems.
arXiv Detail & Related papers (2021-02-24T08:51:23Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale with deep neural networks.
Our algorithm requires many fewer communication rounds in theory.
Experiments on several datasets demonstrate the effectiveness of our algorithm and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Many-Objective Estimation of Distribution Optimization Algorithm Based
on WGAN-GP [1.2461503242570644]
EDAs can better solve multi-objective optimization problems (MOPs).
We generate the new population with Wasserstein Generative Adversarial Networks with Gradient Penalty (WGAN-GP).
arXiv Detail & Related papers (2020-03-16T03:14:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.