A New Scope and Domain Measure Comparison Method for Global Convergence Analysis in Evolutionary Computation
- URL: http://arxiv.org/abs/2505.04089v1
- Date: Wed, 07 May 2025 03:04:18 GMT
- Title: A New Scope and Domain Measure Comparison Method for Global Convergence Analysis in Evolutionary Computation
- Authors: Liu-Yue Luo, Zhi-Hui Zhan, Kay Chen Tan, Jun Zhang
- Abstract summary: We propose a new scope and domain measure comparison (SDMC) method for analyzing the global convergence of EC algorithms. Unlike traditional methods, the SDMC method is straightforward, bypasses Markov chain modeling, and minimizes errors from misapplication. We apply SDMC to two algorithm types that are unsuitable for traditional methods, confirming its effectiveness in global convergence analysis.
- Score: 23.43738935769317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convergence analysis is a fundamental research topic in evolutionary computation (EC). The commonly used analysis method models the EC algorithm as a homogeneous Markov chain, which is not always suitable for different EC variants and sometimes causes misuse and confusion due to its complex process. In this article, we categorize existing research on convergence analysis of EC algorithms into stable convergence and global convergence, and then prove that the conditions for these two convergence properties are, to some extent, mutually exclusive. Inspired by this proof, we propose a new scope and domain measure comparison (SDMC) method for analyzing the global convergence of EC algorithms and provide a rigorous proof of its necessity and sufficiency as an alternative condition. Unlike traditional methods, the SDMC method is straightforward, bypasses Markov chain modeling, and minimizes errors from misapplication, as it focuses only on the measure of the algorithm's search scope. We apply SDMC to two algorithm types that are unsuitable for traditional methods, confirming its effectiveness in global convergence analysis. Furthermore, we apply the SDMC method to explore the impact of the gene targeting mechanism on global convergence in large-scale global optimization, deriving insights into how to design EC algorithms that guarantee global convergence and exploring how theoretical analysis can guide EC algorithm design.
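To make the scope-measure idea concrete, the following is a minimal numerical sketch of comparing the measure of an algorithm's accumulated search scope against the measure of the feasible domain. It is a toy illustration under our own assumptions (hyperrectangle scopes, Lebesgue measure estimated by Monte Carlo), not the paper's formal SDMC criterion; the names `scope_at` and `coverage_ratio` are hypothetical.

```python
import numpy as np

# Toy setup: the feasible domain is the unit square, and at iteration t the
# algorithm's search scope is assumed to be a hyperrectangle around a random
# center. Both are stand-ins for the paper's abstract notions of domain and scope.
rng = np.random.default_rng(0)
DIM = 2
DOMAIN_LOW, DOMAIN_HIGH = 0.0, 1.0  # feasible domain [0, 1]^DIM

def scope_at(t: int) -> tuple[np.ndarray, np.ndarray]:
    """Hypothetical search scope at iteration t: a box of shrinking width."""
    center = rng.uniform(DOMAIN_LOW, DOMAIN_HIGH, size=DIM)
    half_width = 0.3 / np.sqrt(t + 1)
    return center - half_width, center + half_width

def coverage_ratio(num_iters: int, num_samples: int = 20_000) -> float:
    """Monte Carlo estimate of mu(union of scopes) / mu(domain)."""
    points = rng.uniform(DOMAIN_LOW, DOMAIN_HIGH, size=(num_samples, DIM))
    covered = np.zeros(num_samples, dtype=bool)
    for t in range(num_iters):
        low, high = scope_at(t)
        covered |= np.all((points >= low) & (points <= high), axis=1)
    return covered.mean()

for n in (10, 100, 1000):
    print(n, coverage_ratio(n))
```

In this toy model, the coverage ratio tending to 1 plays the role of the accumulated scope measure matching the domain measure; the paper's contribution is proving that a comparison of this kind is necessary and sufficient for global convergence, which the sketch does not attempt.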
Related papers
- A Fresh Look at Generalized Category Discovery through Non-negative Matrix Factorization [83.12938977698988]
Generalized Category Discovery (GCD) aims to classify both base and novel images using labeled base data.
Current approaches inadequately address the intrinsic optimization of the co-occurrence matrix $\bar{A}$ based on cosine similarity.
We propose a Non-Negative Generalized Category Discovery (NN-GCD) framework to address these deficiencies.
arXiv Detail & Related papers (2024-10-29T07:24:11Z)
- Revisiting Extragradient-Type Methods -- Part 1: Generalizations and Sublinear Convergence Rates [6.78476672849813]
This paper presents a comprehensive analysis of the well-known extragradient (EG) method for solving both equations and inclusions.
We analyze both sublinear "best-iterate" and "last-iterate" convergence rates for the entire class of algorithms.
We extend our EG framework to "monotone" inclusions, introducing a new class of algorithms and corresponding convergence results.
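For readers who want the analyzed method itself in front of them, here is a minimal sketch of the classical EG step on a toy bilinear saddle problem. The operator `F`, the step size, and the problem instance are our own assumptions for illustration, not the paper's setting.

```python
import numpy as np

# Classical extragradient (EG) on min_x max_y x^T A y, whose monotone operator
# F(x, y) = (A y, -A^T x) makes plain gradient descent-ascent diverge.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def F(z: np.ndarray) -> np.ndarray:
    x, y = z[:3], z[3:]
    return np.concatenate([A @ y, -A.T @ x])

def eg_step(z: np.ndarray, eta: float = 0.1) -> np.ndarray:
    z_half = z - eta * F(z)     # extrapolation (look-ahead) step
    return z - eta * F(z_half)  # update using the look-ahead operator value

z = rng.standard_normal(6)
print("initial distance to the saddle point (0, 0):", np.linalg.norm(z))
for _ in range(5000):
    z = eg_step(z)
print("final distance:", np.linalg.norm(z))
```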
arXiv Detail & Related papers (2024-09-25T12:14:05Z)
- Deep Learning for Computing Convergence Rates of Markov Chains [0.0]
The Deep Contractive Drift Calculator (DCDC) is the first general-purpose algorithm for bounding the convergence of Markov chains to stationarity in Wasserstein distance.
We show that DCDC can generate convergence bounds for realistic Markov chains arising from processing networks as well as constant step-size optimization.
arXiv Detail & Related papers (2024-05-30T19:26:51Z)
- A Hierarchical Federated Learning Approach for the Internet of Things [3.28418927821443]
We present a novel federated learning solution, QHetFed, suitable for large-scale Internet of Things deployments.
We show that QHetFed consistently achieves high learning accuracy and significantly outperforms other hierarchical algorithms.
arXiv Detail & Related papers (2024-03-03T15:40:24Z)
- A unified consensus-based parallel ADMM algorithm for high-dimensional regression with combined regularizations [3.280169909938912]
The parallel alternating direction method of multipliers (ADMM) is widely recognized for its effectiveness in handling large-scale distributed datasets.
A financial example demonstrates the reliability, stability, and scalability of the proposed algorithms.
arXiv Detail & Related papers (2023-11-21T03:30:38Z)
- Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm [80.94861441583275]
We investigate the generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm.
Our results analyze the impact of different factors on the generalization of D-SGDA.
We also balance the optimization error with the generalization to obtain the optimal population risk in the convex-concave setting.
arXiv Detail & Related papers (2023-10-31T11:27:01Z)
- Regularization and Optimization in Model-Based Clustering [4.096453902709292]
k-means algorithm variants essentially fit a mixture of identical spherical Gaussians to data that vastly deviates from such a distribution.
We develop more effective optimization algorithms for general GMMs, and we combine these algorithms with regularization strategies that avoid overfitting.
These results shed new light on the current status quo between GMM and k-means methods and suggest the more frequent use of general GMMs for data exploration.
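The identical-spherical-Gaussian claim is easy to probe empirically: a GMM constrained to spherical covariances behaves much like soft k-means, while a full-covariance GMM adapts to elongated clusters. A brief sketch using scikit-learn; the data, the stretch matrix, and all hyperparameters are our own choices for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Elongated clusters that violate the identical-spherical assumption.
X, _ = make_blobs(n_samples=500, centers=3, random_state=0)
X = X @ np.array([[2.0, 0.0], [0.0, 0.3]])  # stretch to break sphericity

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Spherical GMM ~ soft k-means; full-covariance GMM fits cluster shape.
gmm_sph = GaussianMixture(n_components=3, covariance_type="spherical",
                          random_state=0).fit(X)
gmm_full = GaussianMixture(n_components=3, covariance_type="full",
                           random_state=0).fit(X)

print("k-means inertia:           ", km.inertia_)
print("spherical GMM avg log-lik.:", gmm_sph.score(X))
print("full GMM avg log-lik.:     ", gmm_full.score(X))  # typically highest
```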
arXiv Detail & Related papers (2023-02-05T18:22:29Z)
- The Dynamics of Riemannian Robbins-Monro Algorithms [101.29301565229265]
We propose a family of Riemannian algorithms generalizing and extending the seminal approximation framework of Robbins and Monro.
Compared to their Euclidean counterparts, Riemannian algorithms are much less understood due to lack of a global linear structure on the manifold.
We provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes.
arXiv Detail & Related papers (2022-06-14T12:30:11Z)
- First-Order Algorithms for Nonlinear Generalized Nash Equilibrium Problems [88.58409977434269]
We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs).
Our contribution is to provide two simple first-order algorithmic frameworks based on the quadratic penalty method and the augmented Lagrangian method.
We provide nonasymptotic theoretical guarantees for these algorithms.
arXiv Detail & Related papers (2022-04-07T00:11:05Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the "complexity" of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Jointly Modeling and Clustering Tensors in High Dimensions [6.072664839782975]
We consider the problem of jointly modeling and clustering tensors.
We propose an efficient high-dimensional expectation-maximization algorithm that converges geometrically to a neighborhood that is within statistical precision of the true parameters.
arXiv Detail & Related papers (2021-04-15T21:06:16Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
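As a toy illustration of the Lyapunov-style viewpoint (our own simplified example, not the paper's MAP-EM setting), one can run plain EM on a two-component Gaussian mixture with known weights and variances and watch V_t = ||mu_t - mu*|| decay at a roughly constant geometric rate:

```python
import numpy as np

# Monitor V(mu) = ||mu - mu*|| along EM iterates; near-constant ratios
# V_{t+1} / V_t indicate linear (geometric) convergence.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])

def em_step(mu: np.ndarray) -> np.ndarray:
    """One EM update for a 1-D two-Gaussian mixture (equal weights, unit vars)."""
    log_p = -0.5 * (x[:, None] - mu[None, :]) ** 2       # E-step (up to constants)
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)                    # responsibilities
    return (r * x[:, None]).sum(axis=0) / r.sum(axis=0)  # M-step: weighted means

mu, history = np.array([-0.5, 0.5]), []
for _ in range(30):
    history.append(mu)
    mu = em_step(mu)

mu_star = mu  # treat the last iterate as the fixed point
V = [np.linalg.norm(m - mu_star) for m in history]
print([round(V[t + 1] / V[t], 3) for t in range(8)])
```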
arXiv Detail & Related papers (2020-06-23T01:34:18Z)