A two-phase-ACO algorithm for solving nonlinear optimization problems subjected to fuzzy relational equations
- URL: http://arxiv.org/abs/2405.14888v1
- Date: Fri, 17 May 2024 09:24:07 GMT
- Title: A two-phase-ACO algorithm for solving nonlinear optimization problems subjected to fuzzy relational equations
- Authors: Amin Ghodousian, Sara Zal
- Abstract summary: In this paper, we investigate nonlinear optimization problems whose constraints are defined as fuzzy relational equations (FRE) with max-min composition.
The ability of the discrete ant colony optimization algorithm (denoted by ACO) to tackle combinatorial problems and that of the continuous ant colony optimization algorithm (denoted by ACOR) to solve continuous optimization problems are discussed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we investigate nonlinear optimization problems whose constraints are defined as fuzzy relational equations (FRE) with max-min composition. Since the feasible solution set of an FRE is often non-convex and the resolution of FREs is an NP-hard problem, conventional nonlinear approaches may involve high computational complexity. Based on the theoretical aspects of the problem, an algorithm (called the FRE-ACO algorithm) is presented which benefits from the structural properties of FREs, the ability of the discrete ant colony optimization algorithm (denoted by ACO) to tackle combinatorial problems, and that of the continuous ant colony optimization algorithm (denoted by ACOR) to solve continuous optimization problems. In the current method, the fundamental ideas underlying ACO and ACOR are combined to form an efficient approach for solving nonlinear optimization problems constrained by such non-convex regions. Moreover, the FRE-ACO algorithm preserves the feasibility of newly generated solutions without having to initially find the minimal solutions of the feasible region or check feasibility after generating new solutions. The FRE-ACO algorithm has been compared with related works proposed for solving nonlinear optimization problems subject to max-min FREs. The obtained results demonstrate that the proposed algorithm has a higher convergence rate and requires fewer function evaluations than the other considered algorithms.
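As background for the constraint structure, a max-min FRE $A \circ x = b$ reads $(A \circ x)_i = \max_j \min(a_{ij}, x_j) = b_i$ for each $i$, and a classical result used by resolution-based methods is that a consistent system on $[0,1]^n$ has a unique greatest solution $\hat{x}_j = \min\{\, b_i : a_{ij} > b_i \,\}$ (taken as 1 when the set is empty); the system is consistent if and only if $A \circ \hat{x} = b$. The sketch below is a minimal illustration of this consistency check on a toy instance; it is not the paper's FRE-ACO algorithm, and the function names, matrix, and vector are hypothetical examples rather than data from the paper.

```python
import numpy as np

def max_min_compose(A, x):
    """Max-min composition: (A o x)_i = max_j min(a_ij, x_j)."""
    return np.max(np.minimum(A, x[None, :]), axis=1)

def maximum_solution(A, b):
    """Candidate greatest solution of A o x = b on [0, 1]^n:
    x_hat_j = min_i { b_i : a_ij > b_i }, or 1 if no such i exists."""
    m, n = A.shape
    x_hat = np.ones(n)
    for j in range(n):
        mask = A[:, j] > b
        if mask.any():
            x_hat[j] = b[mask].min()
    return x_hat

# Toy instance (hypothetical, not taken from the paper).
A = np.array([[0.7, 0.4, 0.9],
              [0.5, 0.8, 0.3]])
b = np.array([0.7, 0.5])

x_hat = maximum_solution(A, b)                        # here x_hat = (1.0, 0.5, 0.7)
consistent = np.allclose(max_min_compose(A, x_hat), b)
print("x_hat =", x_hat)
print("consistent:", consistent)                      # True iff the FRE has a solution
```

The greatest solution bounds the search region from above, which is one structural property that resolution-based approaches exploit; how the paper's hybrid ACO/ACOR scheme samples feasible points within this region is described in the full text and is not reproduced here.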
Related papers
- ALEXR: An Optimal Single-Loop Algorithm for Convex Finite-Sum Coupled Compositional Stochastic Optimization [53.14532968909759]
We introduce an efficient single-loop primal-dual block-coordinate algorithm, dubbed ALEXR.
We establish the convergence rates of ALEXR in both convex and strongly convex cases under smoothness and non-smoothness conditions.
We present lower complexity bounds to demonstrate that the convergence rates of ALEXR are optimal among first-order block-coordinate algorithms for the considered class of cFCCO problems.
arXiv Detail & Related papers (2023-12-04T19:00:07Z) - Self-supervised Equality Embedded Deep Lagrange Dual for Approximate Constrained Optimization [5.412875005737914]
We propose deep Lagrange dual with equality embedding (DeepLDE), a fast optimal approximator built on neural networks (NNs).
We prove the convergence of DeepLDE and use a primal, non-dual learning method to impose the inequality constraints.
We show that the proposed DeepLDE achieves the smallest optimality gap among all the NN-based approaches.
arXiv Detail & Related papers (2023-06-11T13:19:37Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - A socio-physics based hybrid metaheuristic for solving complex
non-convex constrained optimization problems [0.19662978733004596]
It is necessary to critically validate the proposed constrained optimization techniques.
The search is challenging as it involves a large number of linear and nonlinear inequality constraints.
The first CI-based algorithm incorporates a self-adaptive penalty approach.
The second algorithm hybridizes CI-SAPF with a physics-based metaheuristic.
arXiv Detail & Related papers (2022-09-02T07:46:46Z) - First-Order Algorithms for Nonlinear Generalized Nash Equilibrium
Problems [88.58409977434269]
We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs).
Our contribution is to provide two simple first-order algorithmic frameworks based on the quadratic penalty method and the augmented Lagrangian method.
We provide nonasymptotic theoretical guarantees for these algorithms.
arXiv Detail & Related papers (2022-04-07T00:11:05Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new dual convex approach is proposed with a novel integration of three ingredients: a regularized policy, a dual regularizer, and Nesterov's gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ established for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - A Parameter-free Algorithm for Convex-concave Min-max Problems [33.39558541452866]
Parameter-free optimization algorithms refer to algorithms whose convergence rate is optimal with respect to the initial point, without any learning rate to tune.
As a by-product, we utilize the parameter-free algorithm to design a new algorithm, which obtains fast rates for min-max problems with a growth condition.
arXiv Detail & Related papers (2021-02-27T18:10:06Z) - CSCF: a chaotic sine cosine firefly Algorithm for practical application
problems [0.0]
Several optimization algorithms, namely the firefly algorithm, the sine cosine algorithm, and the particle swarm optimization algorithm, have drawbacks such as high computational complexity and slow convergence speed.
This paper develops a novel Chaotic Sine Cosine Firefly (CSCF) algorithm with numerous variants to solve optimization problems.
arXiv Detail & Related papers (2020-11-20T08:54:28Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient method which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Quantum approximate algorithm for NP optimization problems with
constraints [12.294570891467604]
In this paper, we formalize different constraint types as equalities, linear inequalities, and arbitrary-form constraints.
Based on this, we propose constraint-encoding schemes well-fitting into the QAOA framework for solving NP optimization problems.
The implemented algorithms demonstrate the effectiveness and efficiency of the proposed scheme.
arXiv Detail & Related papers (2020-02-01T04:45:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.