Cooperative coevolutionary hybrid NSGA-II with Linkage Measurement
Minimization for Large-scale Multi-objective optimization
- URL: http://arxiv.org/abs/2208.13415v1
- Date: Mon, 29 Aug 2022 08:18:15 GMT
- Title: Cooperative coevolutionary hybrid NSGA-II with Linkage Measurement
Minimization for Large-scale Multi-objective optimization
- Authors: Rui Zhong and Masaharu Munetomo
- Abstract summary: We propose a variable grouping method based on cooperative coevolution for large-scale multi-objective problems (LSMOPs).
For the sub-problem optimization stage, a hybrid NSGA-II with a Gaussian sampling operator based on an estimated convergence point is proposed.
- Score: 3.274290296343038
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a variable grouping method based on cooperative
coevolution for large-scale multi-objective problems (LSMOPs), named Linkage
Measurement Minimization (LMM). For the sub-problem optimization stage, a
hybrid NSGA-II with a Gaussian sampling operator based on an estimated
convergence point is proposed. In the variable grouping stage, according to our
previous research, we treat the variable grouping problem as a combinatorial
optimization problem, and the linkage measurement function is designed based on
linkage identification by the nonlinearity check on real code (LINC-R). We
extend this variable grouping method to LSMOPs. In the sub-problem optimization
stage, we hypothesize that better solutions are more likely to exist around the
Pareto Front (PF). Based on this hypothesis, we estimate a convergence point at
every generation of optimization and perform Gaussian sampling around it.
Samples with good objective values then participate in the optimization as
elites. Numerical experiments show that our variable grouping method
outperforms several popular variable grouping methods, and the hybrid NSGA-II
has broad prospects for multi-objective optimization.
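The pairwise nonlinearity check that underlies the linkage measurement function can be illustrated with a minimal sketch. This is a hypothetical single-objective form of a LINC-R-style test, not the paper's actual measurement function for LSMOPs (which builds a combinatorial objective on top of such checks); the function names, perturbation size, and tolerance are all illustrative assumptions.

```python
import numpy as np

def linc_r_interaction(f, x, i, j, delta=1.0, eps=1e-6):
    """LINC-R-style nonlinearity check (hypothetical sketch).

    Variables i and j are judged to interact if perturbing them together
    changes f by more than the sum of their individual effects, i.e. if
    f is not additively separable in x_i and x_j around x."""
    di = np.zeros_like(x)
    di[i] = delta
    dj = np.zeros_like(x)
    dj[j] = delta
    base = f(x)
    d_i = f(x + di) - base        # effect of perturbing x_i alone
    d_j = f(x + dj) - base        # effect of perturbing x_j alone
    d_ij = f(x + di + dj) - base  # joint effect of perturbing both
    # Linear (separable) case: d_ij ≈ d_i + d_j, so the residual is ~0.
    return abs(d_ij - (d_i + d_j)) > eps
```

On a separable function such as the sphere function the check reports no interaction, while on a product term like `x[0] * x[1]` it does; a grouping method can aggregate such verdicts over variable pairs to assemble non-interacting subcomponents.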
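The Gaussian sampling operator described above can be sketched as follows. This is a minimal, hypothetical Python sketch: the convergence point is estimated here simply as the mean of the current non-dominated set, which may differ from the authors' estimator, and all names and parameters are illustrative.

```python
import numpy as np

def non_dominated(F):
    """Boolean mask of non-dominated rows of objective matrix F (minimization)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                mask[i] = False
                break
    return mask

def gaussian_sampling(population, objectives, n_samples=10, sigma=0.1, rng=None):
    """Sketch of the Gaussian sampling operator (hypothetical details):
    estimate a convergence point from the current non-dominated set and
    draw Gaussian samples around it. The caller would evaluate the samples
    and keep those with good objective values as elites."""
    rng = np.random.default_rng() if rng is None else rng
    # Estimate a convergence point: here, the mean of the non-dominated
    # solutions in decision space (the paper's estimator may differ).
    center = population[non_dominated(objectives)].mean(axis=0)
    # Sample around the estimated convergence point.
    return center + sigma * rng.standard_normal((n_samples, population.shape[1]))
```

In a full hybrid NSGA-II loop, the returned samples would be evaluated on all objectives and the non-dominated ones injected into the population as elites before the next generation's selection.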
Related papers
- Proximal Oracles for Optimization and Sampling [18.77973093341588]
We consider convex optimization with non-smooth objective function and log-concave sampling with non-smooth potential.
To overcome the challenges caused by non-smoothness, our algorithms employ two powerful proximal frameworks in optimization and sampling.
arXiv Detail & Related papers (2024-04-02T18:52:28Z)
- Ant Colony Sampling with GFlowNets for Combinatorial Optimization [68.84985459701007]
Generative Flow Ant Colony Sampler (GFACS) is a novel meta-heuristic method that hierarchically combines amortized inference and parallel search.
Our method first leverages Generative Flow Networks (GFlowNets) to amortize a multi-modal prior distribution over a solution space.
arXiv Detail & Related papers (2024-03-11T16:26:06Z)
- ALEXR: An Optimal Single-Loop Algorithm for Convex Finite-Sum Coupled Compositional Stochastic Optimization [53.14532968909759]
We introduce an efficient single-loop primal-dual block-coordinate algorithm, dubbed ALEXR.
We establish the convergence rates of ALEXR in both convex and strongly convex cases under smoothness and non-smoothness conditions.
We present lower complexity bounds to demonstrate that the convergence rates of ALEXR are optimal among first-order block-coordinate algorithms for the considered class of cFCCO problems.
arXiv Detail & Related papers (2023-12-04T19:00:07Z)
- Combining Kernelized Autoencoding and Centroid Prediction for Dynamic Multi-objective Optimization [3.431120541553662]
This paper proposes a unified paradigm that combines kernelized autoencoding evolutionary search with centroid-based prediction.
The proposed method is compared with five state-of-the-art algorithms on a number of complex benchmark problems.
arXiv Detail & Related papers (2023-12-02T00:24:22Z)
- Extrinsic Bayesian Optimizations on Manifolds [1.3477333339913569]
We propose an extrinsic Bayesian optimization (eBO) framework for general optimization problems on manifolds.
Our approach is to employ extrinsic Gaussian processes by first embedding the manifold into some higher-dimensional Euclidean space.
This leads to efficient and scalable algorithms for optimization over complex manifolds.
arXiv Detail & Related papers (2022-12-21T06:10:12Z) - Late Fusion Multi-view Clustering via Global and Local Alignment
Maximization [61.89218392703043]
Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance.
Most existing approaches directly fuse multiple pre-specified similarities to learn an optimal similarity matrix for clustering.
We propose late fusion MVC via alignment to address these issues.
arXiv Detail & Related papers (2022-08-02T01:49:31Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is so general that any off-the-shelf evolutionary multi-objective optimization algorithms can be applied in a plug-in manner.
Our proposed framework features faster convergence and stronger resilience to various PF shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian Modeling [68.69431580852535]
We introduce a novel GP regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z)
- Hybrid Evolutionary Optimization Approach for Oilfield Well Control Optimization [0.0]
Oilfield production optimization is challenging due to subsurface model complexity and associated non-linearity.
This paper presents the efficacy of two hybrid evolutionary optimization approaches for well control optimization of a waterflooding operation.
arXiv Detail & Related papers (2021-03-29T13:36:51Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel and efficient stochastic gradient estimator named stoc-BiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.