Incremental Recursive Ranking Grouping for Large Scale Global
Optimization
- URL: http://arxiv.org/abs/2206.04168v1
- Date: Wed, 8 Jun 2022 21:16:42 GMT
- Title: Incremental Recursive Ranking Grouping for Large Scale Global
Optimization
- Authors: Marcin Michal Komarnicki, Michal Witold Przewozniczek, Halina
Kwasnicka
- Abstract summary: In Large Scale Global Optimization (LSGO), problems are high-dimensional.
It was shown effective to decompose LSGO problems into subproblems and optimize them separately.
Many state-of-the-art decomposition strategies are derived from Differential Grouping (DG), whose accuracy can degrade on non-additively separable subproblems.
We propose Incremental Recursive Ranking Grouping (IRRG), which does not suffer from this flaw.
- Score: 2.8360662552057323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world optimization problems may have different underlying structures.
In black-box optimization, the dependencies between decision variables remain
unknown. However, some techniques can discover such interactions accurately. In
Large Scale Global Optimization (LSGO), problems are high-dimensional. It was
shown effective to decompose LSGO problems into subproblems and optimize them
separately. The effectiveness of such approaches may be highly dependent on the
accuracy of problem decomposition. Many state-of-the-art decomposition
strategies are derived from Differential Grouping (DG). However, if a given
problem consists of non-additively separable subproblems, their ability to
detect only true interactions might decrease significantly. Therefore, we
propose Incremental Recursive Ranking Grouping (IRRG) that does not suffer from
this flaw. IRRG consumes more fitness function evaluations than the recent
DG-based propositions, e.g., Recursive DG 3 (RDG3). Nevertheless, the
effectiveness of the considered Cooperative Co-evolution frameworks after
embedding IRRG or RDG3 was similar for problems with additively separable
subproblems that are suitable for RDG3. However, after replacing the additive
separability with non-additive, embedding IRRG leads to results of
significantly higher quality.
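As context for the finite-difference interaction tests that DG-based decompositions rely on, here is a minimal Python sketch; the function name `dg_interaction` and the values of `delta` and `eps` are illustrative assumptions, not the authors' implementation. The check flags two variables as interacting when the effect of perturbing one depends on the value of the other, i.e., when the function is not additively separable in them.

```python
import numpy as np

def dg_interaction(f, x, i, j, delta=1.0, eps=1e-6):
    """DG-style pairwise additivity test (a minimal sketch).

    Variables i and j are flagged as interacting when the effect of
    perturbing x_i on f changes after x_j is moved (non-additivity).
    """
    a = np.array(x, dtype=float)
    b = a.copy(); b[i] += delta          # perturb x_i
    d1 = f(b) - f(a)                     # effect of x_i at the original x_j

    a2 = a.copy(); a2[j] += delta        # move x_j
    b2 = a2.copy(); b2[i] += delta       # perturb x_i again
    d2 = f(b2) - f(a2)                   # effect of x_i at the shifted x_j

    return bool(abs(d1 - d2) > eps)      # differs => interaction detected

# Additively separable: f(x) = x0^2 + x1^2 -> no interaction detected
f_sep = lambda x: x[0] ** 2 + x[1] ** 2
# Non-separable: f(x) = x0 * x1 -> interaction detected
f_int = lambda x: x[0] * x[1]

print(dg_interaction(f_sep, [1.0, 2.0], 0, 1))  # False
print(dg_interaction(f_int, [1.0, 2.0], 0, 1))  # True
```

A test of exactly this additive form is what degrades on non-additively separable subproblems (e.g., multiplicative coupling with ill-chosen thresholds), which is the weakness IRRG is designed to avoid.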
Related papers
- DRAGON: LLM-Driven Decomposition and Reconstruction Agents for Large-Scale Combinatorial Optimization [40.88623618289683]
Large Language Models (LLMs) have recently shown promise in addressing combinatorial optimization problems (COPs) through prompt-based strategies.
We propose DRAGON, which combines the strengths of metaheuristic design and LLM reasoning.
By continuously interacting with the optimization environment and leveraging an adaptive experience memory, the agents iteratively learn from feedback.
arXiv Detail & Related papers (2026-01-10T09:31:40Z)
- Efficient Differentiable Causal Discovery via Reliable Super-Structure Learning [51.20606796019663]
We propose ALVGL, a novel and general enhancement to the differentiable causal discovery pipeline.
ALVGL employs a sparse and low-rank decomposition to learn the precision matrix of the data.
We show that ALVGL not only achieves state-of-the-art accuracy but also significantly improves optimization efficiency.
arXiv Detail & Related papers (2026-01-09T02:18:59Z)
- Risk Comparisons in Linear Regression: Implicit Regularization Dominates Explicit Regularization [96.97196425604893]
Existing theory suggests that for linear regression problems categorized by capacity and source conditions, gradient descent (GD) is always minimax optimal.
This work provides instance-wise comparisons of the finite-sample risks for these algorithms on any well-specified linear regression problem.
arXiv Detail & Related papers (2025-09-21T22:02:38Z)
- The Pitfalls and Potentials of Adding Gene-invariance to Optimal Mixing [0.0]
Optimal Mixing (OM) is a variation operator that integrates local search with genetic recombination.
We propose a solution inspired by the Gene Invariant Genetic Algorithm (GIGA), which preserves gene frequencies in the population throughout the process.
This technique is tailored to and integrated with the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA), resulting in GI-GOMEA.
arXiv Detail & Related papers (2025-06-18T08:06:44Z)
- A Novel Two-Phase Cooperative Co-evolution Framework for Large-Scale Global Optimization with Complex Overlapping [8.839347987566336]
We propose a novel two-phase cooperative co-evolution framework to address large-scale global optimization problems with complex overlapping.
An effective method for decomposing overlapping problems, grounded in their mathematical properties, is embedded within the framework.
Extensive experiments demonstrate that the algorithm instantiated within our framework significantly outperforms existing algorithms.
arXiv Detail & Related papers (2025-03-23T13:21:11Z)
- A Greedy Strategy for Graph Cut [95.2841574410968]
We propose a greedy strategy to solve the problem of Graph Cut, called GGC.
It starts from the state where each data sample is regarded as a cluster and dynamically merges the two closest clusters.
GGC has a nearly linear computational complexity with respect to the number of samples.
arXiv Detail & Related papers (2024-12-28T05:49:42Z)
- Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, leveraging the power of GNNs to efficiently solve combinatorial optimization (CO) problems.
It relies on unsupervised learning by minimizing the loss function derived from QUBO relaxation.
Results of experiments show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to state-of-the-art conventional methods.
arXiv Detail & Related papers (2024-07-23T13:34:35Z)
- An Enhanced Differential Grouping Method for Large-Scale Overlapping Problems [23.17606455044122]
We propose a two-stage enhanced grouping method for large-scale overlapping problems, called OEDG.
In the first stage, OEDG employs a grouping method based on the finite differences principle to identify all subcomponents and shared variables.
In the second stage, we propose two grouping refinement methods, called subcomponent union detection (SUD) and subcomponent detection (SD), to enhance and refine the grouping results.
arXiv Detail & Related papers (2024-04-16T12:38:03Z)
- A Composite Decomposition Method for Large-Scale Global Optimization [30.040728803996256]
The efficiency and accuracy of the grouping stage significantly impact the performance of the optimization process.
This article proposes a composite separability grouping (CSG) method, seamlessly integrating the general separability grouping (GSG) method.
CSG achieves more accurate variable grouping with lower computational complexity compared to GSG and state-of-the-art DG series designs.
arXiv Detail & Related papers (2024-03-02T12:12:04Z)
- Enhancing Hypergradients Estimation: A Study of Preconditioning and Reparameterization [49.73341101297818]
Bilevel optimization aims to optimize an outer objective function that depends on the solution to an inner optimization problem.
The conventional method to compute the so-called hypergradient of the outer problem is to use the Implicit Function Theorem (IFT).
We study the error of the IFT method and analyze two strategies to reduce this error.
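For context, the IFT-based hypergradient mentioned above takes the standard bilevel form; the notation here is the generic one (outer objective $f$, inner objective $g$, inner solution $w^*(\theta)$) and is an assumption, not taken from the paper:

```latex
\nabla F(\theta)
  = \partial_\theta f\bigl(\theta, w^*(\theta)\bigr)
  - \partial^2_{\theta w}\, g\bigl(\theta, w^*(\theta)\bigr)\,
    \bigl[\partial^2_{ww}\, g\bigl(\theta, w^*(\theta)\bigr)\bigr]^{-1}
    \partial_w f\bigl(\theta, w^*(\theta)\bigr)
```

The error studied arises because, in practice, the inner solution $w^*(\theta)$ and the linear system involving the inner Hessian are only solved approximately.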
arXiv Detail & Related papers (2024-02-26T17:09:18Z)
- AGRO: Adversarial Discovery of Error-prone groups for Robust Optimization [109.91265884632239]
Group distributionally robust optimization (G-DRO) can minimize the worst-case loss over a set of pre-defined groups over training data.
We propose AGRO -- Adversarial Group discovery for Distributionally Robust Optimization.
AGRO results in 8% higher model performance on average on known worst-groups, compared to prior group discovery approaches.
arXiv Detail & Related papers (2022-12-02T00:57:03Z)
- Non-Linear Coordination Graphs [22.29517436920317]
Coordination graphs (CGs) represent a higher-order decomposition by incorporating pairwise payoff functions.
We propose the first non-linear coordination graph by extending CG value decomposition beyond the linear case.
We find that our method can achieve superior performance on challenging multi-agent coordination tasks like MACO.
arXiv Detail & Related papers (2022-10-26T18:11:31Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It incorporates high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- Effective Mutation Rate Adaptation through Group Elite Selection [50.88204196504888]
This paper introduces the Group Elite Selection of Mutation Rates (GESMR) algorithm.
GESMR co-evolves a population of solutions and a population of MRs, such that each MR is assigned to a group of solutions.
With the same number of function evaluations and with almost no overhead, GESMR converges faster and to better solutions than previous approaches.
arXiv Detail & Related papers (2022-04-11T01:08:26Z)
- Understanding Overparameterization in Generative Adversarial Networks [56.57403335510056]
Generative Adversarial Networks (GANs) are trained by solving non-concave min-max optimization problems.
Recent theory has highlighted the importance of gradient descent (GD) dynamics for reaching globally optimal solutions.
We show that in an overparameterized GAN with a $1$-layer neural network generator and a linear discriminator, gradient descent-ascent (GDA) converges to a global saddle point of the underlying non-concave min-max problem.
arXiv Detail & Related papers (2021-04-12T16:23:37Z)
- A Surrogate-Assisted Variable Grouping Algorithm for General Large Scale Global Optimization Problems [6.458244750197639]
This study proposes a novel decomposition algorithm called surrogate-assisted variable grouping (SVG).
SVG first designs a general-separability-oriented detection criterion according to whether the optimum of a variable changes with other variables.
It seeks the optimum of a variable with the help of a surrogate model rather than the original expensive high-dimensional model.
arXiv Detail & Related papers (2021-01-19T02:57:44Z)
- Detached Error Feedback for Distributed SGD with Random Sparsification [98.98236187442258]
Communication bottleneck has been a critical problem in large-scale deep learning.
We propose a new detached error feedback (DEF) algorithm, which shows better convergence than error feedback for non-convex distributed problems.
We also propose DEFA to accelerate the generalization of DEF, which shows better bounds than DEF.
arXiv Detail & Related papers (2020-04-11T03:50:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.