Sparse Polynomial Optimization: Theory and Practice
- URL: http://arxiv.org/abs/2208.11158v2
- Date: Thu, 25 Aug 2022 08:33:03 GMT
- Title: Sparse Polynomial Optimization: Theory and Practice
- Authors: Victor Magron and Jie Wang
- Abstract summary: The book presents several research efforts to tackle this scientific challenge, which has important computational implications.
It provides alternative optimization schemes that scale well in terms of computational complexity.
We present sparsity-exploiting hierarchies of relaxations, for either unconstrained or constrained problems.
- Score: 5.27013884159732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of minimizing a polynomial over a set of polynomial inequalities
is an NP-hard non-convex problem. Thanks to powerful results from real
algebraic geometry, one can convert this problem into a nested sequence of
finite-dimensional convex problems. At each step of the associated hierarchy,
one needs to solve a fixed-size semidefinite program, which can in turn be
solved with efficient numerical tools. On the practical side, however, there is
no free lunch, and such optimization methods usually suffer from severe
scalability issues. Fortunately, for many applications, we can look the
problem in the eye and exploit the inherent data structure arising from the
cost and constraints describing the problem, for instance sparsity or
symmetries.
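For concreteness, here is a minimal sketch of the dense moment-SOS (Lasserre) hierarchy the paragraph above alludes to; the notation is generic and not taken from the book:

```latex
% Polynomial optimization problem and its order-r sum-of-squares (SOS) relaxation.
% For a fixed relaxation order r, the second problem is a semidefinite program;
% under standard (Putinar-type) assumptions, the bounds f_r converge to f* as r grows.
\begin{align*}
  f^{\star} \;=\; & \min_{x \in \mathbb{R}^n} \; f(x)
      \quad \text{s.t.} \quad g_1(x) \ge 0,\ \dots,\ g_m(x) \ge 0, \\
  f_r \;=\; & \max_{\lambda \in \mathbb{R},\, \sigma_j} \; \lambda
      \quad \text{s.t.} \quad f - \lambda = \sigma_0 + \sum_{j=1}^{m} \sigma_j\, g_j, \\
  & \sigma_0, \dots, \sigma_m \ \text{sums of squares of polynomials}, \quad
    \deg(\sigma_0) \le 2r, \ \deg(\sigma_j g_j) \le 2r.
\end{align*}
```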
This book presents several research efforts to tackle this scientific
challenge with important computational implications, and provides the
development of alternative optimization schemes that scale well in terms of
computational complexity, at least in some identified class of problems. The
presented algorithmic framework in this book mainly exploits the sparsity
structure of the input data to solve large-scale polynomial optimization
problems. We present sparsity-exploiting hierarchies of relaxations, for either
unconstrained or constrained problems. By contrast with the dense hierarchies,
they provide faster approximations of the solution in practice while retaining
the same theoretical convergence guarantees. Our framework is not
restricted to static polynomial optimization, and we expose hierarchies of
approximations for values of interest arising from the analysis of dynamical
systems. We also present various extensions to problems involving noncommuting
variables, e.g., matrices of arbitrary size or quantum physics operators.
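To make the role of the semidefinite programming solver concrete, here is a small, hypothetical sketch (not code from the book) of the lowest-order unconstrained SOS relaxation, written as an explicit SDP in Python with cvxpy; the example polynomial p(x) = x^4 - 3x^2 + x + 2 and all variable names are illustrative:

```python
# Illustrative sketch: lowest-order SOS relaxation of min_x p(x) for the
# (arbitrary) univariate polynomial p(x) = x^4 - 3x^2 + x + 2, modeled as an SDP.
# We maximize gamma such that p(x) - gamma = v(x)^T Q v(x) with Q PSD,
# where v(x) = [1, x, x^2] is the monomial basis.
import cvxpy as cp

Q = cp.Variable((3, 3), symmetric=True)  # Gram matrix of the SOS certificate
gamma = cp.Variable()                    # lower bound on the global minimum

constraints = [
    Q >> 0,                        # positive semidefiniteness of the Gram matrix
    Q[0, 0] == 2 - gamma,          # coefficient of x^0
    2 * Q[0, 1] == 1,              # coefficient of x^1
    Q[1, 1] + 2 * Q[0, 2] == -3,   # coefficient of x^2
    2 * Q[1, 2] == 0,              # coefficient of x^3
    Q[2, 2] == 1,                  # coefficient of x^4
]

prob = cp.Problem(cp.Maximize(gamma), constraints)
prob.solve()
print("SOS lower bound on min p(x):", gamma.value)
```

For multivariate and constrained problems, and especially for the sparsity-exploiting hierarchies discussed in the book, the corresponding SDPs are far larger and are typically generated automatically, e.g. by the authors' TSSOS library.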
Related papers
- Structured Regularization for Constrained Optimization on the SPD Manifold [1.1126342180866644]
We introduce a class of structured regularizers, based on symmetric gauge functions, which allow for solving constrained optimization on the SPD manifold with faster unconstrained methods.
We show that our structured regularizers can be chosen to preserve or induce desirable structure, in particular convexity and "difference of convex" structure.
arXiv Detail & Related papers (2024-10-12T22:11:22Z) - Primal Methods for Variational Inequality Problems with Functional Constraints [25.261426717550293]
We propose a primal method, termed Constrained Gradient Method (CGM), for addressing functional constrained variational inequality problems.
We establish a non-asymptotic convergence analysis of the algorithm for variational inequality problems with monotone operators under smooth constraints.
Our algorithms match the complexity of projection-based methods in terms of operator queries for both monotone and strongly monotone settings.
arXiv Detail & Related papers (2024-03-19T16:03:03Z) - The Complexity of Optimizing Atomic Congestion [14.845310803203724]
Atomic congestion games are a classic topic in network design, routing, and algorithmic game theory.
We show that the problem remains highly intractable even on extremely simple networks.
We conclude by extending our analysis towards the (even more challenging) min-max variant of the problem.
arXiv Detail & Related papers (2023-12-15T21:31:30Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems, including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Numerical Methods for Convex Multistage Stochastic Optimization [86.45244607927732]
We focus on Stochastic Programming (SP), Stochastic Optimal Control (SOC), and Markov Decision Processes (MDP).
Recent progress in solving convex multistage stochastic problems is based on cutting plane approximations of the cost-to-go functions of the dynamic programming equations.
Cutting plane type methods can handle multistage problems with a large number of stages, but a relatively smaller number of state (decision) variables.
arXiv Detail & Related papers (2023-03-28T01:30:40Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Sparse resultant based minimal solvers in computer vision and their
connection with the action matrix [17.31412310131552]
We show that for some camera geometry problems our sparse resultant-based method leads to smaller and more stable solvers than the state-of-the-art Gröbner basis-based solvers.
It provides a competitive alternative to popular Gröbner basis-based methods for minimal problems in computer vision.
arXiv Detail & Related papers (2023-01-16T14:25:19Z) - Symmetric Tensor Networks for Generative Modeling and Constrained
Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - On a class of geodesically convex optimization problems solved via
Euclidean MM methods [50.428784381385164]
We show how a difference of Euclidean convex functions structure can be exploited for different types of problems in statistics and machine learning.
Ultimately, we hope our work helps broaden the use of such methods.
arXiv Detail & Related papers (2022-06-22T23:57:40Z) - Complexity continuum within Ising formulation of NP problems [0.0]
Minimisation of the Ising Hamiltonian is known to be an NP-hard problem for certain interaction matrix classes.
We propose to identify computationally simple instances with an 'optimisation simplicity criterion'.
Such simplicity can be found for a wide range of models, from spin glasses to k-regular maximum cut problems.
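For reference, the entry above concerns the standard classical Ising Hamiltonian, written here in generic notation:

```latex
% Classical Ising Hamiltonian over spins s_i in {-1,+1}; minimising it over all
% spin configurations is NP-hard for general coupling matrices J.
H(s) \;=\; -\sum_{i<j} J_{ij}\, s_i s_j \;-\; \sum_{i} h_i\, s_i,
\qquad s_i \in \{-1, +1\}.
```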
arXiv Detail & Related papers (2020-08-02T11:36:38Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)