Tractable hierarchies of convex relaxations for polynomial optimization
on the nonnegative orthant
- URL: http://arxiv.org/abs/2209.06175v1
- Date: Tue, 13 Sep 2022 17:23:50 GMT
- Title: Tractable hierarchies of convex relaxations for polynomial optimization
on the nonnegative orthant
- Authors: Ngoc Hoang Anh Mai and Victor Magron and Jean-Bernard Lasserre and
Kim-Chuan Toh
- Abstract summary: We consider optimization problems (POP) on a semialgebraic set contained in the nonnegative orthant.
We propose a hierarchy of semidefinite relaxations based on the extension of P'olya's Positivsatz by Dickinson-Povh.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider polynomial optimization problems (POP) on a semialgebraic set
contained in the nonnegative orthant (every POP on a compact set can be put in
this format by a simple translation of the origin). Such a POP can be converted
to an equivalent POP by squaring each variable. Using even symmetry and the
concept of factor width, we propose a hierarchy of semidefinite relaxations
based on the extension of P\'olya's Positivstellensatz by Dickinson-Povh. As
its distinguishing and crucial feature, the maximal matrix size of each
resulting semidefinite relaxation can be chosen arbitrarily and in addition, we
prove that the sequence of values returned by the new hierarchy converges to
the optimal value of the original POP at the rate $O(\varepsilon^{-c})$ if the
semialgebraic set has nonempty interior. When applied to (i) robustness
certification of multi-layer neural networks and (ii) computation of positive
maximal singular values, our method based on P\'olya's Positivstellensatz
provides better bounds and runs several hundred times faster than the standard
Moment-SOS hierarchy.
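The conversion step described in the abstract can be illustrated with a minimal sketch (not the authors' code; the polynomial `f` below is invented for illustration). Substituting $x_i = y_i^2$ maps a POP on the nonnegative orthant to an equivalent POP whose polynomials are even in $y$, which is the symmetry the proposed hierarchy exploits:

```python
# Sketch of the variable-squaring conversion: a POP on {x >= 0}
# becomes an equivalent, even-symmetric POP via x_i = y_i**2.

def f(x1, x2):
    """Toy objective on the nonnegative orthant (illustrative only)."""
    return x1**2 - x1 * x2 + 3 * x2

def f_even(y1, y2):
    """Objective after squaring each variable: f(y1**2, y2**2)."""
    return f(y1**2, y2**2)

# Equivalence: any feasible x >= 0 corresponds to y = sqrt(x),
# and both parametrizations give the same objective value.
x = (0.5, 1.5)
y = (x[0] ** 0.5, x[1] ** 0.5)
assert abs(f(*x) - f_even(*y)) < 1e-12

# Even symmetry: the converted polynomial is invariant under y -> -y.
assert f_even(-y[0], -y[1]) == f_even(y[0], y[1])
print("equivalence and even symmetry hold")
```

The point of the conversion is that nonnegativity certificates for even polynomials can then be obtained via the Dickinson-Povh extension of P\'olya's Positivstellensatz, with the factor-width restriction capping the semidefinite block sizes.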
Related papers
- Discrete Aware Matrix Completion via Convexized $\ell_0$-Norm Approximation [7.447205347712796]
We propose a novel algorithm for the completion of partially observed low-rank matrices in a structured setting.
The proposed low-rank matrix completion (MC) method is an improved variant of a state-of-the-art (SotA) discrete-aware matrix completion method.
arXiv Detail & Related papers (2024-05-03T13:54:59Z) - Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits [12.674929126684528]
We address the problem of combinatorial semi-bandits, where a player selects among $P$ actions from the power set of a set of $d$ base items.
We show that our approach efficiently leverages the semi-bandit feedback and outperforms bandit feedback approaches.
arXiv Detail & Related papers (2024-02-23T08:07:54Z) - Upper bound hierarchies for noncommutative polynomial optimization [1.6385815610837167]
This work focuses on minimizing the eigenvalue of a noncommutative polynomial subject to a finite number of noncommutative constraints.
We derive converging hierarchies of upper bounds, analogous to Lasserre's upper-bound hierarchies for minimization over compact sets.
arXiv Detail & Related papers (2024-02-03T11:53:57Z) - A New Random Reshuffling Method for Nonsmooth Nonconvex Finite-sum Optimization [6.314057999212246]
Random reshuffling techniques are used in large-scale applications, such as neural networks.
In this paper, we show that the random reshuffling-type iterates generated by norm-PRR converge to stationary points.
Finally, we derive last-iterate convergence rates for the proposed approach.
arXiv Detail & Related papers (2023-12-02T07:12:00Z) - Polynomial-Time Solutions for ReLU Network Training: A Complexity
Classification via Max-Cut and Zonotopes [70.52097560486683]
We prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
In particular, when $\epsilon \leq \sqrt{84/83} - 1 \approx 0.006$, we show that it is NP-hard to find an approximate global optimum of the ReLU network objective with relative error $\epsilon$ with respect to the objective value.
arXiv Detail & Related papers (2023-11-18T04:41:07Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of a twice-differentiable general nonconvex conic optimization problem.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Log-based Sparse Nonnegative Matrix Factorization for Data
Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wisely sparse norm, named the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the proposed method.
arXiv Detail & Related papers (2022-04-22T11:38:10Z) - DASHA: Distributed Nonconvex Optimization with Communication
Compression, Optimal Oracle Complexity, and No Client Synchronization [77.34726150561087]
We develop and analyze DASHA: a new family of methods for nonconvex distributed optimization problems.
Unlike MARINA, the new methods DASHA, DASHA-MVR send compressed vectors only and never synchronize the nodes, which makes them more practical for learning.
arXiv Detail & Related papers (2022-02-02T20:10:40Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application
to Permutation Synchronisation [71.27989298860481]
We address the nonconvex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - Simple and Near-Optimal MAP Inference for Nonsymmetric DPPs [3.3504365823045044]
We study the problem of maximum a posteriori (MAP) inference for determinantal point processes defined by a nonsymmetric kernel matrix.
We obtain the first multiplicative approximation guarantee for this problem using local search.
Our approximation factor of $k^{O(k)}$ is nearly tight, and we show theoretically and experimentally that it compares favorably to the state-of-the-art methods for this problem.
arXiv Detail & Related papers (2021-02-10T09:34:44Z)
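The MAP inference task in the last entry (selecting a size-$k$ subset $S$ maximizing $\det(L_{S,S})$) is often approached with simple subset-building heuristics. Below is a hedged greedy baseline, not the authors' local-search algorithm, with an invented diagonal kernel for the usage check:

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Greedy baseline for size-k DPP MAP inference: repeatedly add
    the item that yields the largest det(L[S, S]).
    This is a standard heuristic, not the local-search method with the
    k^O(k) guarantee described in the paper."""
    n = L.shape[0]
    S = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(n):
            if i in S:
                continue
            idx = S + [i]
            d = np.linalg.det(L[np.ix_(idx, idx)])
            if d > best_det:
                best, best_det = i, d
        S.append(best)
    return S

# Usage on a toy diagonal kernel: greedy picks the largest diagonal
# entries first, since det is then the product of selected entries.
L = np.diag([1.0, 2.0, 3.0, 4.0])
print(greedy_dpp_map(L, 2))  # [3, 2]
```

For a diagonal kernel the greedy choice is exact; for general nonsymmetric kernels it carries no guarantee, which is what motivates the local-search analysis in the paper.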
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.