Parabolic Relaxation for Quadratically-constrained Quadratic Programming
-- Part I: Definitions & Basic Properties
- URL: http://arxiv.org/abs/2208.03622v1
- Date: Sun, 7 Aug 2022 02:44:23 GMT
- Title: Parabolic Relaxation for Quadratically-constrained Quadratic Programming
-- Part I: Definitions & Basic Properties
- Authors: Ramtin Madani, Mersedeh Ashraphijuo, Mohsen Kheirandishfard, Alper
Atamturk
- Abstract summary: For general quadratically-constrained quadratic programming (QCQP), we propose a parabolic relaxation described with convex quadratic constraints.
The relaxation can be made as strong as a semidefinite programming (SDP) relaxation.
It can be effective in accelerating algorithms that require solving a sequence of convex surrogates.
- Score: 6.355764634492975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For general quadratically-constrained quadratic programming (QCQP), we
propose a parabolic relaxation described with convex quadratic constraints. An
interesting property of the parabolic relaxation is that the original
non-convex feasible set is contained in the boundary of the parabolic
relaxation's feasible set. Under certain assumptions, this property enables one to recover
near-optimal feasible points via objective penalization. Moreover, through an
appropriate change of coordinates that requires a one-time computation of an
optimal basis, the easier-to-solve parabolic relaxation can be made as strong
as a semidefinite programming (SDP) relaxation, which can be effective in
accelerating algorithms that require solving a sequence of convex surrogates.
The majority of theoretical and computational results are given in the next
part of this work [57].
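For intuition, here is a minimal, hypothetical sketch of such a relaxation in cvxpy, assuming pairwise convex quadratic constraints of the form \(X_{ii} + X_{jj} \pm 2X_{ij} \ge (x_i \pm x_j)^2\) (the paper's exact definition may differ); the problem data n, Q, c and the unit bounds are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a parabolic relaxation for a toy QCQP in cvxpy.
# The pairwise "parabolic" constraints below are one plausible reading of
# the abstract, not necessarily the paper's exact formulation.
import cvxpy as cp
import numpy as np

n = 3
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                        # symmetric, possibly indefinite objective matrix
c = rng.standard_normal(n)

x = cp.Variable(n)                       # original decision variables
X = cp.Variable((n, n), symmetric=True)  # lifted matrix standing in for x x^T

cons = []
for i in range(n):
    cons.append(X[i, i] >= cp.square(x[i]))
    cons.append(X[i, i] <= 1.0)          # illustrative bound keeping the toy problem bounded
    for j in range(i + 1, n):
        # Convex quadratic ("parabolic") constraints relaxing X = x x^T.
        # When X = x x^T they all hold with equality, so the original
        # feasible set sits on the boundary of the relaxation.
        cons.append(X[i, i] + X[j, j] + 2 * X[i, j] >= cp.square(x[i] + x[j]))
        cons.append(X[i, i] + X[j, j] - 2 * X[i, j] >= cp.square(x[i] - x[j]))

# Linear-in-(x, X) surrogate of the quadratic objective x^T Q x + c^T x.
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + c @ x), cons)
prob.solve()                             # convex program: any conic solver applies
print(prob.value, x.value)
```

On the set where \(X = x x^\top\) every parabolic constraint is tight, which is the boundary property the abstract highlights; penalizing the objective then steers solutions toward that boundary.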
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
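For orientation, the gradient step described above has the generic trust-region form (the notation here is generic, not the paper's):
\[
d_k \in \arg\min_{d \in \mathbb{R}^n} \; g_k^\top d + \tfrac{1}{2}\, d^\top B_k d \quad \text{subject to} \quad \|d\|_2 \le \Delta_k,
\]
where \(g_k\) is a stochastic gradient estimate, \(B_k\) a model Hessian, and \(\Delta_k\) the trust-region radius; the eigen step then moves along a direction of negative curvature of the reduced Hessian.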
arXiv Detail & Related papers (2024-09-24T04:39:47Z) - Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning [132.7040981721302]
We study the constrained convex Markov decision process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z) - Convex Q Learning in a Stochastic Environment: Extended Version [1.680268810119084]
The paper introduces the first formulation of convex Q-learning for Markov decision processes with function approximation.
The proposed algorithms are convergent and new techniques are introduced to obtain the rate of convergence in a mean-square sense.
The theory is illustrated with an application to a classical inventory control problem.
arXiv Detail & Related papers (2023-09-10T18:24:43Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and
Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Parabolic Relaxation for Quadratically-constrained Quadratic Programming
-- Part II: Theoretical & Computational Results [6.355764634492975]
We introduce a convex relaxation for quadratically-constrained quadratic programs, along with a penalized parabolic relaxation for recovering near-feasible solutions.
We show that starting from a feasible or near-feasible point satisfying certain regularity conditions, the penalized parabolic relaxation converges to a point satisfying the Karush-Kuhn-Tucker optimality conditions of the original problem.
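As a hedged sketch of the penalization (the notation \(\hat{x}\), \(\eta\) is ours, not necessarily the paper's), one natural penalty around an initial guess \(\hat{x}\) is
\[
\min_{x,\, X} \; f(x, X) + \eta \left( \operatorname{tr}(X) - 2\hat{x}^\top x + \hat{x}^\top \hat{x} \right),
\]
where \(\eta > 0\) weights the penalty. Over the relaxation the penalty term is at least \(\|x - \hat{x}\|_2^2 \ge 0\) (since \(\operatorname{tr}(X) \ge \|x\|_2^2\)), and driving it to zero forces \(X = x x^\top\), i.e., exactness of the lifted representation.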
arXiv Detail & Related papers (2022-08-07T02:58:04Z) - A conditional gradient homotopy method with applications to Semidefinite
Programming [1.6369790794838281]
We propose a homotopy-based conditional gradient method for solving convex optimization problems with a large number of simple conic constraints.
Our theoretical complexity is competitive with state-of-the-art SDP approaches, with the decisive advantage of cheap projection-free iterations.
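For context, the basic conditional-gradient (Frank-Wolfe) iteration, which replaces projections with a linear minimization oracle over the feasible set \(\mathcal{C}\), is
\[
s_k \in \arg\min_{s \in \mathcal{C}} \; \langle \nabla f(x_k), s \rangle, \qquad x_{k+1} = x_k + \gamma_k (s_k - x_k), \qquad \gamma_k \in [0, 1];
\]
the homotopy ingredient, per the summary, progressively tightens the treatment of the conic constraints, with the exact schedule given in the paper.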
arXiv Detail & Related papers (2022-07-07T05:48:27Z) - A Stochastic Composite Augmented Lagrangian Method For Reinforcement
Learning [9.204659134755795]
We consider the linear programming (LP) formulation for deep reinforcement learning.
The augmented Lagrangian method suffers from the double-sampling obstacle in solving the LP.
A deep parameterized augmented Lagrangian method is proposed.
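As a generic reference point (not the paper's exact formulation), the augmented Lagrangian for \(\min_x f(x)\) subject to \(c(x) = 0\) is
\[
\mathcal{L}_\rho(x, \lambda) = f(x) + \lambda^\top c(x) + \tfrac{\rho}{2} \|c(x)\|_2^2;
\]
the double-sampling obstacle arises because the squared term involves the square of an expectation, whose unbiased estimation requires two independent samples per constraint.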
arXiv Detail & Related papers (2021-05-20T13:08:06Z) - Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron
Relaxations for Neural Network Verification [11.10637926254491]
We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons.
We design two polynomial-time algorithms for neural network verification: a linear-programming-based algorithm that leverages the full power of our relaxation, and a fast propagation algorithm that generalizes existing approaches.
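For reference, the classical single-neuron relaxation that this work tightens replaces \(y = \max(0, x)\) for a pre-activation \(x \in [\ell, u]\) with \(\ell < 0 < u\) by the "triangle" constraints
\[
y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - \ell)}{u - \ell};
\]
the tightened relaxation, per the title and summary, goes beyond this single-neuron barrier by treating the neuron jointly with its affine pre-activation over multiple inputs.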
arXiv Detail & Related papers (2020-06-24T22:16:58Z)