Complete Upper Bound Hierarchies for Spectral Minimum in Noncommutative Polynomial Optimization
- URL: http://arxiv.org/abs/2402.02126v2
- Date: Mon, 10 Feb 2025 20:16:36 GMT
- Title: Complete Upper Bound Hierarchies for Spectral Minimum in Noncommutative Polynomial Optimization
- Authors: Igor Klep, Victor Magron, Jurij Volčič
- Abstract summary: This work focuses on finding the spectral minimum of a noncommutative polynomial subject to a finite number of noncommutative polynomial constraints.
The Navascués-Pironio-Acín (NPA) hierarchy is the noncommutative analog of Lasserre's moment-sum of squares hierarchy.
This paper derives complementary complete hierarchies of upper bounds for the spectral minimum.
- Abstract: This work focuses on finding the spectral minimum (ground state energy) of a noncommutative polynomial subject to a finite number of noncommutative polynomial constraints. Based on the Helton-McCullough Positivstellensatz, the Navascués-Pironio-Acín (NPA) hierarchy is the noncommutative analog of Lasserre's moment-sum of squares hierarchy and provides a sequence of lower bounds converging to the spectral minimum, under mild assumptions on the constraint set. Each lower bound can be obtained by solving a semidefinite program. This paper derives complementary complete hierarchies of upper bounds for the spectral minimum. They are noncommutative analogues of the upper bound hierarchies due to Lasserre for minimizing commutative polynomials over compact sets. Each upper bound is obtained by solving a generalized eigenvalue problem. The derived hierarchies apply to optimization problems in bounded and unbounded operator algebras, as demonstrated on a variety of examples.
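The "upper bound via a generalized eigenvalue problem" idea can be illustrated in the much simpler commutative, univariate setting of Lasserre's upper bound hierarchy (this is a hedged toy sketch, not the paper's noncommutative construction): to bound min p(x) over [-1, 1] from above, minimize the Rayleigh quotient ∫ p s² dx / ∫ s² dx over polynomials s of degree at most d, which is the smallest generalized eigenvalue of the localizing matrix A of p against the Hankel moment matrix M.

```python
import numpy as np

def moment(k):
    # k-th moment of the Lebesgue measure on [-1, 1]
    return 2.0 / (k + 1) if k % 2 == 0 else 0.0

def upper_bound(p_coeffs, d):
    # p_coeffs[k] is the coefficient of x^k in p
    n = d + 1
    M = np.array([[moment(i + j) for j in range(n)] for i in range(n)])
    A = np.array([[sum(c * moment(i + j + k) for k, c in enumerate(p_coeffs))
                   for j in range(n)] for i in range(n)])
    # smallest generalized eigenvalue of A v = lam M v, via Cholesky whitening
    Li = np.linalg.inv(np.linalg.cholesky(M))
    return float(np.linalg.eigvalsh(Li @ A @ Li.T)[0])

# p(x) = x^2 has minimum 0 on [-1, 1]; the bounds decrease toward it with d
bounds = [upper_bound([0.0, 0.0, 1.0], d) for d in (1, 2, 4, 8)]
```

Each level is a single dense eigenvalue computation rather than a semidefinite program, mirroring the computational contrast the abstract draws between the NPA lower bounds and these upper bounds.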
Related papers
- The polarization hierarchy for polynomial optimization over convex bodies, with applications to nonnegative matrix rank [0.6963971634605796]
We construct a convergent family of outer approximations for the problem of optimizing functions over convex bodies subject to constraints.
A numerical implementation of the third level of the hierarchy is shown to give rise to a very tight approximation for this problem.
arXiv Detail & Related papers (2024-06-13T18:00:09Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
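A minimal sketch of the object being analyzed (not the paper's bounds): form a Gaussian kernel matrix, truncate its eigen-decomposition to rank r, and measure the entrywise error, which is always dominated by the spectral norm of the discarded tail.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
# Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / 2)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2)

vals, vecs = np.linalg.eigh(K)                     # ascending eigenvalues
r = 10
K_r = (vecs[:, -r:] * vals[-r:]) @ vecs[:, -r:].T  # keep the top-r pairs
entrywise_err = np.abs(K - K_r).max()
# the entrywise error never exceeds the largest discarded |eigenvalue|
tail = np.abs(vals[:-r]).max()
```

The point of entrywise (rather than spectral or Frobenius) bounds is that individual entries can be much smaller than this worst-case tail, which is what the delocalisation result quantifies.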
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Minimisation of Polyak-Łojasiewicz Functions Using Random Zeroth-Order Oracles [1.5101132008238316]
The framework is based on exploiting a random oracle to estimate the gradient function.
The convergence of the algorithm to a global minimum in the unconstrained case and to a neighbourhood of the global minimum in the constrained case is presented.
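One common random zeroth-order oracle of this kind is the two-point directional estimator sketched below (illustrative, not necessarily the paper's exact scheme), driving descent on f(x) = ||x||², which satisfies a Polyak-Łojasiewicz inequality.

```python
import numpy as np

def zo_grad(f, x, rng, mu=1e-4):
    # two-point estimate of the directional derivative along a random u;
    # its expectation is the true gradient for standard normal u
    u = rng.normal(size=x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

f = lambda x: float(x @ x)        # PL function with global minimum 0
rng = np.random.default_rng(1)
x = np.ones(5)
for _ in range(1000):
    x = x - 0.02 * zo_grad(f, x, rng)
```

Only function evaluations are used, never gradients, which is the defining feature of a zeroth-order oracle.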
arXiv Detail & Related papers (2024-05-15T05:43:16Z)
- A Unified Analysis on the Subgradient Upper Bounds for the Subgradient Methods Minimizing Composite Nonconvex, Nonsmooth and Non-Lipschitz Functions [7.972544890243396]
This paper presents a unified analysis for the proximal subgradient method (Prox-SubGrad) type approach.
We are able to relate error-bound conditions, the growth conditions of the subgradients of the objective, and the behavior of the proximal subgradient iterates on some remarkably broad classes of objective functions.
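A generic proximal subgradient (Prox-SubGrad) iteration looks as follows; this is a hedged template shown on a convex l1-regularized least-squares instance so the answer is checkable, not the paper's nonconvex analysis.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_subgrad(subgrad_f, x0, lam, step, iters):
    # minimize f(x) + lam * ||x||_1: subgradient step on f, then l1 prox
    x = x0
    for _ in range(iters):
        x = soft_threshold(x - step * subgrad_f(x), step * lam)
    return x

# f(x) = ||x - b||^2; the minimizer is soft_threshold(b, lam / 2)
b = np.array([2.0, -0.1, 0.0, 1.0])
x = prox_subgrad(lambda x: 2 * (x - b), np.zeros(4), lam=0.5, step=0.1, iters=200)
```

The composite structure (explicit subgradient on the smooth-ish part, exact prox on the simple part) is what the unified analysis keys on.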
arXiv Detail & Related papers (2023-08-30T23:34:11Z)
- High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
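Gradient clipping is one standard device behind high-probability guarantees under heavy-tailed noise; the sketch below (illustrative only, not the paper's algorithm) runs clipped SGD on ||x||² with Cauchy noise, whose variance is infinite.

```python
import numpy as np

def clip(g, lam):
    # rescale g to norm at most lam
    n = np.linalg.norm(g)
    return g if n <= lam else g * (lam / n)

rng = np.random.default_rng(0)
x = np.full(3, 5.0)
for _ in range(2000):
    noise = rng.standard_cauchy(size=3)   # infinite-variance noise
    g = 2 * x + noise                     # stochastic gradient of ||x||^2
    x = x - 0.05 * clip(g, lam=10.0)
```

Clipping bounds each update, so tail events in the noise cannot blow up a single iterate, which is what makes high-probability (rather than in-expectation) statements possible in this regime.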
arXiv Detail & Related papers (2023-02-02T10:37:23Z)
- State polynomials: positivity, optimization and nonlinear Bell inequalities [3.9692590090301683]
This paper introduces state polynomials, i.e., polynomials in noncommuting variables and formal states of their products.
It shows that state polynomials positive over all states and matricial states are sums of squares with denominators.
It is also established that a Krivine-Stengle Positivstellensatz fails to hold in the state setting.
arXiv Detail & Related papers (2023-01-29T18:52:21Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient method which converges to a stationary point.
The algorithm applies not only to general $p$-normed spaces, but also to general $p$-dimensional vector spaces.
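For reference, the classical extragradient method that such schemes generalize can be sketched on the bilinear saddle problem min_x max_y xy, where plain gradient descent-ascent spirals outward but the extrapolated update contracts (a textbook illustration, not the paper's variant).

```python
import numpy as np

def extragradient(x, y, step, iters):
    # saddle problem min_x max_y x*y; gradients are (y, x)
    for _ in range(iters):
        # extrapolation (look-ahead) step
        xh, yh = x - step * y, y + step * x
        # actual update uses gradients at the extrapolated point
        x, y = x - step * yh, y + step * xh
    return x, y

x, y = extragradient(1.0, 1.0, step=0.1, iters=2000)
```

One can check that each iteration shrinks x² + y² by the exact factor 1 - step² + step⁴, whereas the plain simultaneous gradient update multiplies it by 1 + step² and diverges.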
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
- Asymptotic Convergence Rate of Alternating Minimization for Rank One Matrix Completion [15.321579527891457]
We study alternating minimization for rank-one matrix completion in the simplest possible setting.
We bound the convergence rate by the variational characterization of eigenvalues of a reversible consensus problem.
This leads to an upper bound on the rate in terms of number of nodes as well as the largest degree of the graph of revealed entries.
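The alternating minimization being analyzed admits a very short sketch (generic scheme; the paper's graph-based rate analysis is not reproduced): observe entries of M = uvᵀ on a random mask and alternately solve the two least-squares subproblems, each of which has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
u_true, v_true = rng.normal(size=n), rng.normal(size=n)
M = np.outer(u_true, v_true)
mask = rng.random((n, n)) < 0.5           # graph of revealed entries

u, v = rng.normal(size=n), rng.normal(size=n)
for _ in range(50):
    # closed-form coordinate updates using observed entries only
    u = (mask * M * v).sum(1) / (mask * v**2).sum(1)
    v = (mask * M * u[:, None]).sum(0) / (mask * u[:, None] ** 2).sum(0)

err = np.abs(mask * (np.outer(u, v) - M)).max()
```

The mask plays the role of the "graph of revealed entries" whose largest degree enters the paper's rate bound.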
arXiv Detail & Related papers (2020-08-11T19:56:35Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
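The conditional gradient (Frank-Wolfe) step underlying such methods is sketched below on the probability simplex, where the linear minimization oracle just selects a vertex and iterates stay feasible without projection (a deterministic textbook instance, not the paper's stochastically constrained variant).

```python
import numpy as np

def frank_wolfe(grad, x0, iters):
    x = x0
    for t in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0              # LMO over the simplex: best vertex
        x = x + (2.0 / (t + 2)) * (s - x)  # classic 2/(t+2) step size
    return x

# minimize ||x - c||^2 over the simplex, with c itself in the simplex
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe(lambda x: 2 * (x - c), np.ones(3) / 3, iters=2000)
```

Because every update is a convex combination of the current point and a vertex, feasibility is maintained for free, which is what makes conditional gradient methods attractive when projections (or full constraint sets) are expensive.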
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration [115.1954841020189]
We study the asymptotic and non-asymptotic properties of linear stochastic approximation procedures with Polyak-Ruppert averaging.
We prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
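The setting can be sketched as follows (illustrative; the paper's CLT and concentration results are not reproduced): solve Ax = b from noisy observations with a fixed step size, and keep the running Polyak-Ruppert average of the iterates, which concentrates around the solution even though the raw iterates keep fluctuating.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)

x = np.zeros(2)
avg = np.zeros(2)
T = 20000
for t in range(1, T + 1):
    noise = rng.normal(size=2)
    x = x - 0.05 * (A @ x - b + noise)   # fixed step size
    avg += (x - avg) / t                 # running Polyak-Ruppert average
```

Averaging is what turns the O(step)-sized stationary fluctuations of x into a much tighter estimate, which is precisely the regime (fixed step, iteration count to infinity) of the CLT mentioned above.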
arXiv Detail & Related papers (2020-04-09T17:54:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.