Polynomial Optimization: Enhancing RLT relaxations with Conic
Constraints
- URL: http://arxiv.org/abs/2208.05608v1
- Date: Thu, 11 Aug 2022 02:13:04 GMT
- Title: Polynomial Optimization: Enhancing RLT relaxations with Conic
Constraints
- Authors: Brais González-Rodríguez, Raúl Alvite-Pazó, Samuel Alvite-Pazó,
Bissan Ghaddar, Julio González-Díaz
- Abstract summary: Conic optimization has emerged as a powerful tool for designing tractable and guaranteed algorithms for non-convex polynomial optimization problems.
We investigate the strengthening of the RLT relaxations of polynomial optimization problems through the addition of nine different types of constraints.
We describe how to design these conic constraints and analyze their performance with respect to each other and with respect to the standard RLT relaxations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conic optimization has recently emerged as a powerful tool for designing
tractable and guaranteed algorithms for non-convex polynomial optimization
problems. On the one hand, tractability is crucial for efficiently solving
large-scale problems and, on the other hand, strong bounds are needed to ensure
high quality solutions. In this research, we investigate the strengthening of
RLT relaxations of polynomial optimization problems through the addition of
nine different types of constraints that are based on linear, second-order
cone, and semidefinite programming to solve to optimality the instances of
well-established test sets of polynomial optimization problems. We describe how
to design these conic constraints and analyze their performance with respect to
each other and with respect to the standard RLT relaxations. Our first finding is that the
different variants of nonlinear constraints (second-order cone and
semidefinite) are the best performing ones in around $50\%$ of the instances.
Additionally, we present a machine learning approach to decide on the most
suitable constraints to add for a given instance. The computational results
show that the machine learning approach significantly outperforms each and
every one of the nine individual approaches.
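To make the starting point concrete, the following is a small illustrative sketch (our own, not code from the paper) of the basic RLT linearization that the paper's conic constraints then strengthen: a bilinear term $x_1 x_2$ is replaced by a new variable $w$, and products of the bound factors $(x_1-l_1)$, $(u_1-x_1)$, $(x_2-l_2)$, $(u_2-x_2)$ yield four linear inequalities that every feasible point satisfies.

```python
# Sketch of RLT bound-factor (McCormick) cuts for one bilinear term x1*x2,
# replaced by the linearization variable w. Not the paper's code; an
# illustration of the standard RLT relaxation it builds on.

def bound_factor_cuts(l1, u1, l2, u2):
    """Return cuts as tuples (a1, a2, aw, c) encoding a1*x1 + a2*x2 + aw*w + c >= 0."""
    return [
        (-l2, -l1,  1.0,  l1 * l2),   # (x1 - l1)(x2 - l2) >= 0
        (-u2, -u1,  1.0,  u1 * u2),   # (u1 - x1)(u2 - x2) >= 0
        ( u2,  l1, -1.0, -l1 * u2),   # (x1 - l1)(u2 - x2) >= 0
        ( l2,  u1, -1.0, -u1 * l2),   # (u1 - x1)(x2 - l2) >= 0
    ]

def satisfies(cuts, x1, x2, w, tol=1e-9):
    """Check whether (x1, x2, w) satisfies every cut."""
    return all(a1 * x1 + a2 * x2 + aw * w + c >= -tol
               for (a1, a2, aw, c) in cuts)

cuts = bound_factor_cuts(0.0, 1.0, 0.0, 1.0)
# The exact product always satisfies the cuts:
print(satisfies(cuts, 0.3, 0.7, 0.3 * 0.7))  # True
# Some infeasible linearizations are cut off:
print(satisfies(cuts, 0.3, 0.7, 0.9))        # False
# But others survive (w = 0.1 is not the true product 0.21), which is
# the slack that the paper's conic constraints aim to tighten:
print(satisfies(cuts, 0.3, 0.7, 0.1))        # True
```

The relaxation is linear and cheap but not tight; the paper's second-order cone and semidefinite constraints cut deeper into this remaining slack.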
Related papers
- Optimized QUBO formulation methods for quantum computing [0.4999814847776097]
We show how to apply our techniques in the case of an NP-hard optimization problem inspired by a real-world financial scenario.
We then submit instances of this problem to two D-Wave quantum annealers, comparing the performance of our novel approach with the standard methods used in these scenarios.
arXiv Detail & Related papers (2024-06-11T19:59:05Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
This enables an end-to-end training scheme in which the dual objective serves as the loss function, driving solution estimates toward primal feasibility and emulating a Dual Ascent method.
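The Dual Ascent scheme that this blurb refers to can be sketched on a toy problem (our own example, not from the paper): alternate an exact primal minimization of the Lagrangian with a gradient step on the dual variable, here for min x^2 subject to x >= 1.

```python
# Minimal Dual Ascent sketch on: minimize x^2 subject to x >= 1,
# written as the constraint 1 - x <= 0. The problem and step size are
# illustrative choices, not taken from the paper.

def dual_ascent(step=0.5, iters=200):
    lam = 0.0                                   # dual variable for 1 - x <= 0
    x = 0.0
    for _ in range(iters):
        x = lam / 2.0                           # argmin_x of x^2 + lam * (1 - x)
        lam = max(0.0, lam + step * (1.0 - x))  # ascend on the constraint violation
    return x, lam

x, lam = dual_ascent()
print(round(x, 4), round(lam, 4))  # converges to x = 1.0, lam = 2.0
```

The iteration drives the constraint violation 1 - x to zero, recovering the optimum x = 1 with multiplier lam = 2; the paper's ML model plays the role of predicting such dual estimates directly instead of iterating.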
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Dynamic Incremental Optimization for Best Subset Selection [15.8362578568708]
Best subset selection is considered the gold standard for many learning problems.
An efficient subset-dual algorithm is developed based on the primal and dual problem structures.
arXiv Detail & Related papers (2024-02-04T02:26:40Z) - Global Optimization: A Machine Learning Approach [7.052596485478637]
Bertsimas and Ozturk (2023) proposed OCTHaGOn as a way of solving black-box global optimization problems.
We provide extensions to this approach by approximating the original problem using other MIO-representable ML models.
We show improvements in solution feasibility and optimality in the majority of instances.
arXiv Detail & Related papers (2023-11-03T06:33:38Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Symmetric Tensor Networks for Generative Modeling and Constrained
Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual variable regularizer, and Nesterov's accelerated gradient descent dual optimizer.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Constraint Programming to Discover One-Flip Local Optima of Quadratic
Unconstrained Binary Optimization Problems [0.5439020425819]
QUBO annealers, as well as other solution approaches, benefit from starting with a diverse set of locally optimal solutions.
This paper presents a new method for generating a set of one-flip local optima leveraging constraint programming.
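The one-flip local optimality notion in this blurb is easy to state in code (a small sketch of our own, not the paper's constraint programming model): a binary vector x is a one-flip local optimum of the QUBO objective x^T Q x (minimization) if flipping any single bit does not decrease the objective.

```python
# Illustrative check for one-flip local optimality of a QUBO solution.
# The matrix Q below is a made-up 2x2 example, not from the paper.

def qubo_value(Q, x):
    """Evaluate x^T Q x for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def is_one_flip_local_opt(Q, x):
    """True if no single-bit flip strictly decreases the objective."""
    base = qubo_value(Q, x)
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        if qubo_value(Q, y) < base:
            return False
    return True

Q = [[-1, 2],
     [ 0, -1]]
print(is_one_flip_local_opt(Q, [1, 0]))  # True: both flips give value >= -1
print(is_one_flip_local_opt(Q, [1, 1]))  # False: flipping bit 0 improves to -1
```

The paper's contribution is generating a *diverse set* of such one-flip local optima via constraint programming, rather than checking a single candidate as above.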
arXiv Detail & Related papers (2021-04-04T22:55:25Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Unsupervised Deep Learning for Optimizing Wireless Systems with
Instantaneous and Statistic Constraints [29.823814915538463]
We establish a unified framework of using unsupervised deep learning to solve both kinds of problems with both instantaneous and statistic constraints.
We show that unsupervised learning outperforms supervised learning in terms of violation probability and approximation accuracy of the optimal policy.
arXiv Detail & Related papers (2020-05-30T13:37:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.