Hardness of Random Optimization Problems for Boolean Circuits,
Low-Degree Polynomials, and Langevin Dynamics
- URL: http://arxiv.org/abs/2004.12063v2
- Date: Wed, 26 Jan 2022 15:34:19 GMT
- Title: Hardness of Random Optimization Problems for Boolean Circuits,
Low-Degree Polynomials, and Langevin Dynamics
- Authors: David Gamarnik, Aukosh Jagannath, Alexander S. Wein
- Abstract summary: We show that families of algorithms fail to produce nearly optimal solutions with high probability.
For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory.
- Score: 78.46689176407936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of finding nearly optimal solutions of optimization
problems with random objective functions. Two concrete problems we consider are
(a) optimizing the Hamiltonian of a spherical or Ising $p$-spin glass model,
and (b) finding a large independent set in a sparse Erd\H{o}s-R\'{e}nyi graph.
The following families of algorithms are considered: (a) low-degree polynomials
of the input; (b) low-depth Boolean circuits; (c) the Langevin dynamics
algorithm. We show that these families of algorithms fail to produce nearly
optimal solutions with high probability. For the case of Boolean circuits, our
results improve the state-of-the-art bounds known in circuit complexity theory
(although we consider the search problem as opposed to the decision problem).
Our proof uses the fact that these models are known to exhibit a variant of
the overlap gap property (OGP) of near-optimal solutions. Specifically, for
both models, every two solutions whose objectives are above a certain threshold
are either close or far from each other. The crux of our proof is that the
classes of algorithms we consider exhibit a form of stability. We show by an
interpolation argument that stable algorithms cannot overcome the OGP barrier.
The stability of Langevin dynamics is an immediate consequence of the
well-posedness of stochastic differential equations. The stability of
low-degree polynomials and Boolean circuits is established using tools from
Gaussian and Boolean analysis -- namely hypercontractivity and total influence,
as well as a novel lower bound for random walks avoiding certain subsets. In
the case of Boolean circuits, the result also makes use of
Linial-Mansour-Nisan's classical theorem. Our techniques apply more broadly to
low-influence functions and may extend to other models.
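To make the overlap gap property concrete, consider the spherical $p$-spin model. The Hamiltonian is

$$H_n(\sigma) = \frac{1}{n^{(p-1)/2}} \sum_{i_1,\dots,i_p=1}^{n} J_{i_1\dots i_p}\,\sigma_{i_1}\cdots\sigma_{i_p}, \qquad J_{i_1\dots i_p} \sim \mathcal{N}(0,1) \text{ i.i.d.},$$

with $\sigma$ ranging over the sphere $\|\sigma\|_2 = \sqrt{n}$. Schematically (the thresholds and constants are model-dependent), the OGP asserts that there exist an energy level $\mu$ and overlap values $0 \le \nu_1 < \nu_2 \le 1$ such that, with high probability, every pair $\sigma^1, \sigma^2$ with $H_n(\sigma^i)/n \ge \mu$ satisfies $|\langle \sigma^1, \sigma^2\rangle|/n \in [0,\nu_1] \cup [\nu_2, 1]$: near-optimal solutions are either close or far, never at intermediate overlap.

In discretized form, the Langevin dynamics algorithm studied here is noisy gradient ascent on $H_n$ constrained to the sphere. The following is a minimal simulation sketch, not the paper's analysis; the dimension, step size, temperature, and runtime are arbitrary illustrative choices:

```python
import numpy as np

# Illustrative sketch only (not the paper's method): discretized Langevin
# dynamics ascending a spherical 3-spin Hamiltonian
#   H_n(sigma) = n^{-(p-1)/2} * sum_{ijk} J_ijk sigma_i sigma_j sigma_k,
# with sigma kept on the sphere of radius sqrt(n) by renormalization.
n, p = 64, 3
rng = np.random.default_rng(0)
J = rng.standard_normal((n, n, n)) / n ** ((p - 1) / 2)

def hamiltonian(s):
    return np.einsum('ijk,i,j,k->', J, s, s, s)

def grad(s):
    # gradient of the cubic form, symmetrized over the three tensor slots
    return (np.einsum('ijk,j,k->i', J, s, s)
            + np.einsum('ijk,i,k->j', J, s, s)
            + np.einsum('ijk,i,j->k', J, s, s))

s = rng.standard_normal(n)
s *= np.sqrt(n) / np.linalg.norm(s)

lr, beta = 1e-3, 5.0                       # illustrative step size / temperature
for _ in range(2000):
    noise = np.sqrt(2 * lr / beta) * rng.standard_normal(n)
    s = s + lr * grad(s) + noise           # noisy ascent step on H_n
    s *= np.sqrt(n) / np.linalg.norm(s)    # project back onto the sphere

print(hamiltonian(s) / n)                  # normalized energy reached
```

The paper's point is precisely that dynamics of this kind, being stable, cannot cross the OGP barrier and so fail to reach the near-optimal energy threshold with high probability.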
Related papers
- Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
A recently proposed quantum algorithm (arXiv:2206.14999) is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that the algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z)
- Accelerated Stochastic Min-Max Optimization Based on Bias-corrected Momentum [30.01198677588252]
First-order algorithms require at least $\mathcal{O}(\varepsilon^{-4})$ complexity to find an $\varepsilon$-stationary point.
We introduce novel momentum algorithms with improved complexity guarantees.
The effectiveness of the method is validated through robust logistic regression using real-world datasets.
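As a rough illustration of the bias-corrected momentum idea, a widely used estimator of this type (STORM-style) is sketched below. This is a generic sketch under that assumption, not the paper's algorithm; `grad_f` is a hypothetical stochastic gradient oracle.

```python
import numpy as np

# Illustrative sketch; the cited paper's min-max update rule may differ.
# STORM-style bias-corrected momentum: plain momentum plus a correction
# term that cancels the bias of the running gradient estimate.
def bias_corrected_momentum(grad_f, x0, steps=1000, lr=0.01, a=0.1):
    x = x0.copy()
    d = grad_f(x)                           # initial gradient estimate
    for _ in range(steps):
        x_new = x - lr * d                  # descend along the current estimate
        g_new = grad_f(x_new)
        g_old = grad_f(x)                   # in practice, the SAME sample as g_new
        d = g_new + (1 - a) * (d - g_old)   # bias-corrected momentum update
        x = x_new
    return x
```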
arXiv Detail & Related papers (2024-06-18T20:14:52Z)
- A quantum central path algorithm for linear optimization [5.450016817940232]
We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path.
This approach yields an algorithm for solving linear optimization problems involving $m$ constraints and $n$ variables to $\varepsilon$-optimality.
In the standard gate model (i.e., without access to quantum RAM), our algorithm can obtain highly precise solutions to LO problems using at most $\mathcal{O}\left(\sqrt{m+n}\,\textsf{nnz}(A)\,\frac{R_1}{\cdots}\right)$ ...
arXiv Detail & Related papers (2023-11-07T13:26:20Z)
- Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
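For intuition, here is a minimal sketch of zeroth-order projected gradient descent with the classical two-point gradient estimator. The paper's estimators exploit higher-order smoothness and are more refined; `f` and `project` are hypothetical problem-specific callables.

```python
import numpy as np

# Illustrative sketch of zeroth-order projected gradient descent:
# estimate the gradient from two function evaluations along a random
# direction, take a step, then project back onto the feasible set.
def zo_projected_gd(f, project, x0, steps=500, lr=0.1, h=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = x.size
    for _ in range(steps):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                         # random unit direction
        g = d * (f(x + h * u) - f(x - h * u)) / (2 * h) * u
        x = project(x - lr * g)                        # step + projection
    return x
```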
arXiv Detail & Related papers (2023-06-03T17:05:13Z)
- Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic Gradient Langevin Dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we study two variants of this kind, namely Variance Reduced Langevin Dynamics and Recursive Gradient Langevin Dynamics.
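For orientation, a hedged sketch of how variance reduction is typically combined with Langevin dynamics on a finite-sum objective $f(x) = \frac{1}{n}\sum_i f_i(x)$: an SVRG-style gradient estimate replaces the plain stochastic gradient in the Langevin update. The exact variants analyzed in the paper differ in details not reproduced here; `grad_i(x, i)` is a hypothetical per-component gradient oracle.

```python
import numpy as np

# Illustrative SVRG-style variance-reduced Langevin dynamics sketch;
# the cited paper's variants (including the recursive estimator) differ.
def vr_langevin(grad_i, n, x0, epochs=20, inner=100, lr=1e-3, beta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        snap = x.copy()                                 # snapshot point
        full = np.mean([grad_i(snap, i) for i in range(n)], axis=0)
        for _ in range(inner):
            i = int(rng.integers(n))
            g = grad_i(x, i) - grad_i(snap, i) + full   # variance-reduced estimate
            noise = np.sqrt(2 * lr / beta) * rng.standard_normal(x.shape)
            x = x - lr * g + noise                      # Langevin step
    return x
```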
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
- Optimal Gradient-based Algorithms for Non-concave Bandit Optimization [76.57464214864756]
This work considers a large family of bandit problems where the unknown underlying reward function is non-concave.
Our algorithms are based on a unified zeroth-order optimization paradigm that applies in great generality.
We show that the standard optimistic algorithms are sub-optimal by dimension factors.
arXiv Detail & Related papers (2021-07-09T16:04:24Z)
- Mean Field Approximation for solving QUBO problems [0.0]
We show that the statistical-physics approach and the quantum-mechanical approach to mean-field annealing give the same result.
Our methods consist of a set of simple gradient-based minimizations with continuous variables, thus easy to simulate.
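A hedged sketch of what such a mean-field scheme can look like for a QUBO instance $\min_{x \in \{0,1\}^n} x^\top Q x$: relax the bits to means in $[0,1]$, update them self-consistently while raising the inverse temperature, then round. The cited method may differ in its details.

```python
import numpy as np

# Illustrative mean-field annealing sketch for min x^T Q x over binary x;
# the cited paper's update schedule and equations may differ.
def mean_field_qubo(Q, sweeps=200, beta0=0.1, beta1=10.0):
    n = Q.shape[0]
    m = np.full(n, 0.5)                      # relaxed means of the bits
    for beta in np.linspace(beta0, beta1, sweeps):
        h = (Q + Q.T) @ m                    # local field: gradient of m^T Q m
        m = 1.0 / (1.0 + np.exp(beta * h))   # self-consistent sigmoid update
    return (m > 0.5).astype(int)             # round to a binary assignment
```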
arXiv Detail & Related papers (2021-06-06T20:35:28Z)
- Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformation; here this is achieved by minimizing the objective of the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
- Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning [0.0]
Langevin algorithms are gradient descent methods with additive noise.
They have been used for decades in Markov chain Monte Carlo (MCMC) sampling.
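The basic projected update is easy to state; a minimal sketch follows (a noisy gradient step, then Euclidean projection onto the constraint set; `grad_f` and `project` are hypothetical callables, and the paper's analysis covers more than this plain scheme).

```python
import numpy as np

# Illustrative projected stochastic gradient Langevin sketch:
# noisy gradient descent step followed by projection onto the set.
def projected_sgld(grad_f, project, x0, steps=1000, lr=1e-3, beta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = project(x0)
    for _ in range(steps):
        noise = np.sqrt(2 * lr / beta) * rng.standard_normal(x.shape)
        x = project(x - lr * grad_f(x) + noise)   # Langevin step, then project
    return x
```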
arXiv Detail & Related papers (2020-12-22T16:19:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated summaries and is not responsible for any consequences of their use.