Parabolic Relaxation for Quadratically-constrained Quadratic Programming
-- Part II: Theoretical & Computational Results
- URL: http://arxiv.org/abs/2208.03625v1
- Date: Sun, 7 Aug 2022 02:58:04 GMT
- Title: Parabolic Relaxation for Quadratically-constrained Quadratic Programming
-- Part II: Theoretical & Computational Results
- Authors: Ramtin Madani, Mersedeh Ashraphijuo, Mohsen Kheirandishfard, Alper
Atamturk
- Abstract summary: We introduce a convex parabolic relaxation for quadratically-constrained quadratic programs, along with a sequential penalized parabolic relaxation algorithm to recover near-optimal feasible solutions.
We show that, starting from a feasible or near-feasible point satisfying certain regularity conditions, the penalized parabolic relaxation algorithm converges to a point satisfying the Karush-Kuhn-Tucker optimality conditions.
- Score: 6.355764634492975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the first part of this work [32], we introduce a convex parabolic
relaxation for quadratically-constrained quadratic programs, along with a
sequential penalized parabolic relaxation algorithm to recover near-optimal
feasible solutions. In this second part, we show that starting from a feasible
solution or a near-feasible solution satisfying certain regularity conditions,
the sequential penalized parabolic relaxation algorithm converges to a point
which satisfies Karush-Kuhn-Tucker optimality conditions. Next, we present
numerical experiments on benchmark non-convex QCQP problems as well as
large-scale instances of the system identification problem, demonstrating the
efficiency of the proposed approach.
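For intuition, here is a minimal sketch of one round of the penalized parabolic relaxation, written with cvxpy. The parabolic constraints and the trace-based penalty follow the description in Part I [32] as we read it; the function name, the penalty weight eta, and the omission of the problem's own lifted constraints are illustrative simplifications, not the authors' reference implementation.

```python
# Illustrative sketch (not the authors' code): one convex round of the
# penalized parabolic relaxation for min x'Qx + c'x, lifted to (x, X).
import cvxpy as cp
import numpy as np

def penalized_parabolic_round(Q, c, x_hat, eta=10.0):
    n = len(c)
    x = cp.Variable(n)
    X = cp.Variable((n, n), symmetric=True)

    # Parabolic relaxation of X = xx': convex quadratic surrogates for
    # each product X_ij = x_i * x_j (the original problem's lifted
    # constraints are omitted here for brevity).
    constraints = [cp.diag(X) >= cp.square(x)]        # X_ii >= x_i^2
    for i in range(n):
        for j in range(i + 1, n):
            constraints += [
                X[i, i] + X[j, j] + 2 * X[i, j] >= cp.square(x[i] + x[j]),
                X[i, i] + X[j, j] - 2 * X[i, j] >= cp.square(x[i] - x[j]),
            ]

    # Under the constraints tr(X) >= x'x, so this penalty dominates
    # ||x - x_hat||^2: it pulls X toward xx' and x toward x_hat.
    penalty = cp.trace(X) - 2 * x_hat @ x + x_hat @ x_hat
    objective = cp.Minimize(cp.trace(Q @ X) + c @ x + eta * penalty)
    cp.Problem(objective, constraints).solve()
    return x.value, X.value
```

The sequential algorithm would repeat this round, setting x_hat to the latest solution, until the lifted matrix is (numerically) rank one.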
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
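The eigen step can be pictured as a move along the most negative curvature direction of the reduced Hessian. Below is a hedged numpy/scipy sketch for the equality-constrained case with constraint Jacobian J; the function name, the null-space construction, and the trust-region scaling are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of an "eigen step": follow the most negative curvature
# direction of the reduced Hessian Z' H Z, where Z spans ker(J).
import numpy as np
from scipy.linalg import null_space

def eigen_step(H, J, radius):
    Z = null_space(J)                       # basis for the null space of J
    if Z.size == 0:                         # no feasible directions
        return np.zeros(H.shape[0])
    w, V = np.linalg.eigh(Z.T @ H @ Z)      # spectrum of reduced Hessian
    if w[0] >= 0:                           # no negative curvature found
        return np.zeros(H.shape[0])
    d = Z @ V[:, 0]                         # lift eigenvector to full space
    return radius * d / np.linalg.norm(d)   # scale to trust-region radius
```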
arXiv Detail & Related papers (2024-09-24T04:39:47Z) - Convex Q Learning in a Stochastic Environment: Extended Version [1.680268810119084]
The paper introduces the first formulation of convex Q-learning for Markov decision processes with function approximation.
The proposed algorithms are convergent and new techniques are introduced to obtain the rate of convergence in a mean-square sense.
The theory is illustrated with an application to a classical inventory control problem.
arXiv Detail & Related papers (2023-09-10T18:24:43Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
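As a rough illustration of this fixed-point view, a Deep Equilibrium forward pass simply iterates the solver's update operator until it stops moving; the operator f and observation y below are placeholders, not the paper's learned model.

```python
# Minimal sketch of a Deep Equilibrium forward pass: iterate the update
# operator f to an approximate fixed point z* = f(z*, y).
import numpy as np

def deq_forward(f, y, z0, tol=1e-6, max_iter=500):
    z = z0
    for _ in range(max_iter):
        z_next = f(z, y)                   # one step of the derived solver
        if np.linalg.norm(z_next - z) <= tol * (1 + np.linalg.norm(z)):
            return z_next                  # converged to a fixed point
        z = z_next
    return z                               # budget exhausted; best iterate
```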
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Fully Stochastic Trust-Region Sequential Quadratic Programming for
Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z) - Parabolic Relaxation for Quadratically-constrained Quadratic Programming
-- Part I: Definitions & Basic Properties [6.355764634492975]
For general quadratically-constrained quadratic programming (QCQP), we propose a parabolic relaxation described with convex quadratic constraints.
The relaxation can be made as strong as a semidefinite programming (SDP) relaxation.
It can be effective in accelerating algorithms that require solving a sequence of convex surrogates.
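For reference, a sketch of the pairwise parabolic constraints as reconstructed from Part I (notation may differ from the paper): each nonconvex lifting equation $X_{ij} = x_i x_j$ in the lifted variable $X = xx^T$ is replaced by the convex quadratic inequalities

```latex
% Parabolic relaxation of the lifting X = xx^T (sketch; notation may
% differ from the paper).
\begin{align}
  X_{ii} + X_{jj} + 2X_{ij} &\ge (x_i + x_j)^2, \\
  X_{ii} + X_{jj} - 2X_{ij} &\ge (x_i - x_j)^2, \\
  X_{ii} &\ge x_i^2 .
\end{align}
```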
arXiv Detail & Related papers (2022-08-07T02:44:23Z) - A conditional gradient homotopy method with applications to Semidefinite
Programming [1.6369790794838281]
We present a homotopy-based conditional gradient method for solving convex optimization problems with a large number of simple conic constraints.
Our theoretical complexity is competitive with state-of-the-art SDP solvers, with the decisive advantage of cheap projection-free iterations.
arXiv Detail & Related papers (2022-07-07T05:48:27Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual convex approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual variable regularizer, and Nesterov's accelerated gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the $\mathcal{O}(1/\epsilon)$ complexity lower bound for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained
Optimization [25.73397307080647]
We present a new model of general convex or nonconvex machine learning objectives with sparse constraints.
We propose a proximal point algorithm that solves a sequence of subproblems with gradually relaxed constraint levels.
We demonstrate the effectiveness of our method on large-scale numerical problems.
arXiv Detail & Related papers (2020-10-23T05:24:05Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
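A minimal sketch of one such step, processing only a sampled subset of linear constraints through a smoothed quadratic penalty before the conditional-gradient (Frank-Wolfe) update; the constraint format, the penalty, the oracle name lmo, and the step size are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch: conditional-gradient step touching only a random
# subset of constraints (a'x <= b), folded in via a smoothed penalty.
import numpy as np

rng = np.random.default_rng(0)

def cg_step(x, grad_f, constraints, lmo, t, beta=0.1, batch=4):
    idx = rng.choice(len(constraints), size=min(batch, len(constraints)),
                     replace=False)
    g = grad_f(x).copy()                    # gradient of the objective
    for k in idx:
        a, b = constraints[k]
        viol = max(a @ x - b, 0.0)          # constraint violation
        g = g + (viol / beta) * a           # gradient of smoothed penalty
    s = lmo(g)                              # linear minimization oracle
    gamma = 2.0 / (t + 2.0)                 # standard Frank-Wolfe step size
    return x + gamma * (s - x)
```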
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)