Continuation Newton methods with deflation techniques for global
optimization problems
- URL: http://arxiv.org/abs/2107.13864v6
- Date: Wed, 13 Dec 2023 08:35:17 GMT
- Title: Continuation Newton methods with deflation techniques for global
optimization problems
- Authors: Xin-long Luo, Hang Xiao and Sen Zhang
- Abstract summary: A global minimum point of an optimization problem is of interest in engineering.
In this article, we consider a new memetic algorithm for this nonlinear large-scale problem.
According to our numerical experiments, the new algorithm works well for unconstrained optimization problems.
- Score: 3.705839280172101
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The global minimum point of an optimization problem is of interest in
engineering fields, and it is difficult to find, especially for a nonconvex
large-scale optimization problem. In this article, we consider a new memetic
algorithm for this problem. That is to say, we use the continuation Newton
method with the deflation technique to find multiple stationary points of the
objective function and use those found stationary points as the initial seeds
of the evolutionary algorithm, rather than the random initial seeds of the known
evolutionary algorithms. Meanwhile, in order to retain the usability of the
derivative-free method and the fast convergence of the gradient-based method,
we use the automatic differentiation technique to compute the gradient and
replace the Hessian matrix with its finite difference approximation. According
to our numerical experiments, this new algorithm works well for unconstrained
optimization problems and finds their global minima efficiently, in comparison
to the other representative global optimization methods such as the multi-start
methods (the built-in subroutine GlobalSearch.m of MATLAB R2021b, GLODS and
VRBBO), the branch-and-bound method (Couenne, a state-of-the-art open-source
solver for mixed integer nonlinear programming problems), and the
derivative-free algorithms (CMA-ES and MCS).
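The abstract's main ingredients can be illustrated with a short, heavily simplified sketch. In the Python sketch below (a minimal illustration, not the paper's implementation), a damped Newton iteration on the deflated gradient system stands in for the continuation Newton method, the classical deflation operator (1/||x - x_j||^p + sigma) is assumed, the Hessian is replaced by a forward-difference approximation built from gradient calls as the abstract describes, and a hand-coded gradient of a toy double-well objective substitutes for automatic differentiation. All helper names, the random restarts, and the parameter choices are illustrative assumptions.
```python
import numpy as np

def fd_hessian(grad, x, h=1e-6):
    """Forward-difference Hessian built from gradient calls, as the abstract
    suggests (the exact finite-difference stencil in the paper may differ)."""
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(x + e) - g0) / h
    return 0.5 * (H + H.T)  # symmetrize

def deflation(x, found, sigma=1.0, p=2):
    """Classical deflation operator m(x) = prod_j (1/||x - x_j||^p + sigma)
    and its gradient; multiplying grad f by m repels Newton iterations from
    the stationary points already collected in `found`."""
    m, s = 1.0, np.zeros_like(x)
    for xj in found:
        d = x - xj
        r = np.linalg.norm(d)
        m *= r ** (-p) + sigma
        s += -p * r ** (-p - 2) * d / (r ** (-p) + sigma)
    return m, m * s  # m(x) and its gradient

def deflated_newton(grad, x0, found, tol=1e-8, max_iter=200, radius=1.0):
    """Damped Newton iteration on the deflated system m(x) * grad f(x) = 0;
    a plain step-length cap stands in for the continuation Newton
    globalization of the paper.  Roots away from `found` are unchanged."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return x  # stationary point of f
        m, dm = deflation(x, found)
        J = m * fd_hessian(grad, x) + np.outer(g, dm)  # Jacobian of m * grad f
        try:
            step = np.linalg.solve(J, -m * g)
        except np.linalg.LinAlgError:
            step = -g
        if np.linalg.norm(step) > radius:  # crude damping
            step *= radius / np.linalg.norm(step)
        x = x + step
    return None  # no convergence from this start

def stationary_seeds(f, grad, lo, hi, n_points=5, tries=20, rng=None):
    """Collect several distinct stationary points by deflating each one found;
    they would then seed the evolutionary phase of the memetic algorithm in
    place of purely random initial populations."""
    rng = np.random.default_rng(0) if rng is None else rng
    found = []
    for _ in range(n_points):
        for _ in range(tries):
            x = deflated_newton(grad, rng.uniform(lo, hi), found)
            if x is not None and all(np.linalg.norm(x - y) > 1e-5 for y in found):
                found.append(x)
                break
        else:
            break  # no new stationary point located
    return sorted(found, key=f)

# Toy usage: a 1-D double well f(x) = (x^2 - 1)^2 + 0.3 x; the gradient is
# hand-coded here, whereas the paper obtains it by automatic differentiation.
f = lambda x: (x[0] ** 2 - 1.0) ** 2 + 0.3 * x[0]
grad = lambda x: np.array([4.0 * x[0] * (x[0] ** 2 - 1.0) + 0.3])
seeds = stationary_seeds(f, grad, np.array([-2.0]), np.array([2.0]))
print([(float(s[0]), float(f(s))) for s in seeds])  # best seed listed first
```
The points returned by stationary_seeds would then serve as the initial seeds of the evolutionary phase of the memetic algorithm, in place of random initial populations.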
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms for Optimization under Orthogonality Constraints [9.301728976515255]
This article provides new practical and theoretical developments for the landing algorithm.
First, the method is extended to the Stiefel manifold.
We also consider variance reduction algorithms when the cost function is an average of many functions.
arXiv Detail & Related papers (2023-03-29T07:36:54Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - An Algebraically Converging Stochastic Gradient Descent Algorithm for
Global Optimization [14.336473214524663]
A key component in the algorithm is that the injected randomness depends on the value of the objective function (a minimal sketch of this idea appears after the list below).
We prove convergence of the algorithm with an algebraic rate, both in probability and in the parameter space.
We present several numerical examples to demonstrate the efficiency and robustness of the algorithm.
arXiv Detail & Related papers (2022-04-12T16:27:49Z) - A framework for bilevel optimization that enables stochastic and global
variance reduction algorithms [17.12280360174073]
Bilevel optimization is a problem of minimizing a value function which involves the arg-minimum of another function.
We introduce a novel framework, in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time.
We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm in our framework, has an $O(1/T)$ convergence rate, and that it achieves linear convergence under the Polyak-Łojasiewicz assumption.
arXiv Detail & Related papers (2022-01-31T18:17:25Z) - A Granular Sieving Algorithm for Deterministic Global Optimization [6.01919376499018]
A gradient-free deterministic method is developed to solve global optimization problems for Lipschitz continuous functions.
The method can be regarded as granular sieving with synchronous analysis in both the domain and range of the objective function.
arXiv Detail & Related papers (2021-07-14T10:03:03Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data analysis, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.