A Composite Decomposition Method for Large-Scale Global Optimization
- URL: http://arxiv.org/abs/2403.01192v2
- Date: Fri, 8 Mar 2024 15:18:19 GMT
- Title: A Composite Decomposition Method for Large-Scale Global Optimization
- Authors: Maojiang Tian, Minyang Chen, Wei Du, Yang Tang, Yaochu Jin, Gary G.
Yen
- Abstract summary: The efficiency and accuracy of the grouping stage significantly impact the performance of the optimization process.
This article proposes a composite separability grouping (CSG) method, seamlessly integrating the general separability grouping (GSG) method.
CSG achieves more accurate variable grouping with lower computational complexity compared to GSG and state-of-the-art DG series designs.
- Score: 30.040728803996256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cooperative co-evolution (CC) algorithms, based on the divide-and-conquer
strategy, have emerged as the predominant approach to solving large-scale
global optimization (LSGO) problems. The efficiency and accuracy of the
grouping stage significantly impact the performance of the optimization
process. While the general separability grouping (GSG) method has overcome the
limitation of previous differential grouping (DG) methods by enabling the
decomposition of non-additively separable functions, it suffers from high
computational complexity. To address this challenge, this article proposes a
composite separability grouping (CSG) method, seamlessly integrating DG and GSG
into a problem decomposition framework to utilize the strengths of both
approaches. CSG introduces a step-by-step decomposition framework that
accurately decomposes various problem types using fewer computational
resources. By sequentially identifying additively, multiplicatively and
generally separable variables, CSG progressively groups non-separable variables
by recursively considering the interactions between each non-separable variable
and the formed non-separable groups. Furthermore, to enhance the efficiency and
accuracy of CSG, we introduce two innovative methods: a multiplicatively
separable variable detection method and a non-separable variable grouping
method. These two methods are designed to effectively detect multiplicatively
separable variables and efficiently group non-separable variables,
respectively. Extensive experimental results demonstrate that CSG achieves more
accurate variable grouping with lower computational complexity compared to GSG
and state-of-the-art DG series designs.
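The differential grouping (DG) component that CSG builds on rests on a finite-difference interaction test: two variables are additively separable if perturbing one changes the objective by the same amount regardless of the other's value. The sketch below illustrates only that idea; `interact`, `delta`, and `eps` are hypothetical names, not the paper's implementation.

```python
import numpy as np

def interact(f, x, i, j, delta=1.0, eps=1e-6):
    """Finite-difference interaction test in the spirit of differential
    grouping (DG): variables i and j are treated as additively separable
    if perturbing x_i changes f by the same amount regardless of x_j."""
    x1 = x.copy()
    x1[i] += delta                      # perturb x_i alone
    d1 = f(x1) - f(x)
    x2 = x.copy()
    x2[j] += delta                      # shift x_j, then perturb x_i again
    x3 = x2.copy()
    x3[i] += delta
    d2 = f(x3) - f(x2)
    return abs(d1 - d2) > eps           # differing deltas imply interaction

# Toy objective, additively separable into (x0) and (x1, x2).
f = lambda x: x[0] ** 2 + x[1] * x[2]
x = np.zeros(3)
print(interact(f, x, 0, 1))   # False: x0 and x1 do not interact
print(interact(f, x, 1, 2))   # True: x1 and x2 interact
```

CSG extends this kind of pairwise test with detectors for multiplicatively and generally separable variables, applying the cheaper tests first so the expensive GSG-style checks run only on what remains.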
Related papers
- NDCG-Consistent Softmax Approximation with Accelerated Convergence [67.10365329542365]
We propose novel loss formulations that align directly with ranking metrics. We integrate the proposed RG losses with the highly efficient Alternating Least Squares (ALS) optimization method. Empirical evaluations on real-world datasets demonstrate that our approach achieves comparable or superior ranking performance.
arXiv Detail & Related papers (2025-06-11T06:59:17Z) - Accelerating Spectral Clustering under Fairness Constraints [56.865810822418744]
We present a new efficient method for fair spectral clustering (Fair SC) by casting the Fair SC problem within the difference of convex functions (DC) framework. We show that each associated subproblem can be solved efficiently, resulting in higher computational efficiency compared to prior work.
arXiv Detail & Related papers (2025-06-09T18:46:27Z) - A Novel Two-Phase Cooperative Co-evolution Framework for Large-Scale Global Optimization with Complex Overlapping [8.839347987566336]
We propose a novel two-phase cooperative co-evolution framework to address large-scale global optimization problems with complex overlapping.
An effective method for decomposing overlapping problems, grounded in their mathematical properties, is embedded within the framework.
Extensive experiments demonstrate that the algorithm instantiated within our framework significantly outperforms existing algorithms.
arXiv Detail & Related papers (2025-03-23T13:21:11Z) - A Greedy Strategy for Graph Cut [95.2841574410968]
We propose a greedy strategy to solve the problem of Graph Cut, called GGC.
It starts from the state where each data sample is regarded as a cluster and dynamically merges two clusters at a time.
GGC has a nearly linear computational complexity with respect to the number of samples.
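The singleton-start, merge-two-clusters loop described above can be sketched as a plain greedy agglomeration over a similarity matrix; this is an illustrative assumption about the general scheme, not GGC's actual algorithm, and its brute-force pair search does not achieve the paper's nearly linear complexity.

```python
import numpy as np

def greedy_merge(W, k):
    """Greedy agglomerative sketch: start with singleton clusters and
    repeatedly merge the pair with the largest total inter-cluster
    similarity until k clusters remain. Hypothetical illustration of the
    merge loop, not the GGC algorithm itself."""
    clusters = [[i] for i in range(len(W))]
    while len(clusters) > k:
        best, pair = -np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                gain = W[np.ix_(clusters[a], clusters[b])].sum()
                if gain > best:
                    best, pair = gain, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)   # merge cluster b into cluster a
    return clusters

# Two obvious blocks: samples {0, 1} are similar, as are {2, 3}.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(greedy_merge(W, 2))   # [[0, 1], [2, 3]]
```

The nearly linear complexity claimed for GGC would come from a smarter merge-selection strategy than the quadratic pair scan used here.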
arXiv Detail & Related papers (2024-12-28T05:49:42Z) - $ψ$DAG: Projected Stochastic Approximation Iteration for DAG Structure Learning [6.612096312467342]
Learning the structure of Directed Acyclic Graphs (DAGs) presents a significant challenge due to the vast search space of possible graphs, which scales with the number of nodes.
Recent advancements have redefined this problem as a continuous optimization task by incorporating differentiable acyclicity constraints.
We present a novel framework for learning DAGs, employing a Stochastic Approximation approach integrated with Stochastic Gradient Descent (SGD)-based optimization techniques.
arXiv Detail & Related papers (2024-10-31T12:13:11Z) - An Enhanced Differential Grouping Method for Large-Scale Overlapping Problems [23.17606455044122]
We propose a two-stage enhanced grouping method for large-scale overlapping problems, called OEDG.
In the first stage, OEDG employs a grouping method based on the finite differences principle to identify all subcomponents and shared variables.
In the second stage, we propose two grouping refinement methods, called subcomponent union detection (SUD) and subcomponent detection (SD), to enhance and refine the grouping results.
arXiv Detail & Related papers (2024-04-16T12:38:03Z) - ALEXR: An Optimal Single-Loop Algorithm for Convex Finite-Sum Coupled Compositional Stochastic Optimization [53.14532968909759]
We introduce an efficient single-loop primal-dual block-coordinate algorithm, dubbed ALEXR.
We establish the convergence rates of ALEXR in both convex and strongly convex cases under smoothness and non-smoothness conditions.
We present lower complexity bounds to demonstrate that the convergence rates of ALEXR are optimal among first-order block-coordinate algorithms for the considered class of cFCCO problems.
arXiv Detail & Related papers (2023-12-04T19:00:07Z) - Achieving Sample and Computational Efficient Reinforcement Learning by
Action Space Reduction via Grouping [7.691755449724638]
Reinforcement learning often needs to deal with the exponential growth of states and actions in high-dimensional spaces.
We learn the inherent structure of action-wise similar MDP to appropriately balance the performance degradation versus sample/computational complexity.
arXiv Detail & Related papers (2023-06-22T15:40:10Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised
Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this is lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - Cooperative coevolutionary hybrid NSGA-II with Linkage Measurement
Minimization for Large-scale Multi-objective optimization [3.274290296343038]
We propose a variable grouping method based on cooperative coevolution for large-scale multi-objective problems (LSMOPs).
For the sub-problem optimization stage, a hybrid NSGA-II with a Gaussian sampling operator based on an estimated convergence point is proposed.
arXiv Detail & Related papers (2022-08-29T08:18:15Z) - Late Fusion Multi-view Clustering via Global and Local Alignment
Maximization [61.89218392703043]
Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance.
Most of existing approaches directly fuse multiple pre-specified similarities to learn an optimal similarity matrix for clustering.
We propose late fusion MVC via alignment to address these issues.
arXiv Detail & Related papers (2022-08-02T01:49:31Z) - The Benefits of Implicit Regularization from SGD in Least Squares
Problems [116.85246178212616]
Stochastic gradient descent (SGD) exhibits strong algorithmic regularization effects in practice.
We make comparisons of the implicit regularization afforded by (unregularized) average SGD with the explicit regularization of ridge regression.
arXiv Detail & Related papers (2021-08-10T09:56:47Z) - Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel GP regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z) - Feature Grouping and Sparse Principal Component Analysis [23.657672812296518]
Sparse Principal Component Analysis (SPCA) is widely used in data processing and dimension reduction.
FGSPCA allows loadings to belong to disjoint homogeneous groups, with sparsity as a special case.
arXiv Detail & Related papers (2021-06-25T15:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.