Approximate optimization of MAXCUT with a local spin algorithm
- URL: http://arxiv.org/abs/2008.06054v1
- Date: Thu, 13 Aug 2020 18:00:00 GMT
- Title: Approximate optimization of MAXCUT with a local spin algorithm
- Authors: Aniruddha Bapat and Stephen P. Jordan
- Abstract summary: Local tensor methods are a class of optimization algorithms introduced in [Hastings, arXiv:1905.07047v2].
We investigate performance in practice through benchmarking experiments on instances of the MAXCUT problem.
We find that the time to solution achieved by the local tensor method is highly uncorrelated with that achieved by a widely used commercial optimization package.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Local tensor methods are a class of optimization algorithms that was
introduced in [Hastings, arXiv:1905.07047v2] [1] as a classical analogue of the
quantum approximate optimization algorithm (QAOA). These algorithms treat the
cost function as a Hamiltonian on spin degrees of freedom and simulate the
relaxation of the system to a low energy configuration using local update rules
on the spins. Whereas the emphasis in [1] was on theoretical worst-case
analysis, we here investigate performance in practice through benchmarking
experiments on instances of the MAXCUT problem. Through heuristic arguments we
propose formulas for choosing the hyperparameters of the algorithm, which are
found to be in good agreement with the optimal choices determined from
experiment. We observe that the local tensor method is closely related to
gradient descent on a relaxation of MAXCUT to continuous variables, but
consistently outperforms gradient descent in all instances tested. We find that
the time to solution achieved by the local tensor method is highly uncorrelated with
that achieved by a widely used commercial optimization package; on some MAXCUT
instances the local tensor method beats the commercial solver in time to
solution by up to two orders of magnitude and vice-versa. Finally, we argue
that the local tensor method closely follows discretized, imaginary-time
dynamics of the system under the problem Hamiltonian.
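As a concrete illustration of the relaxation dynamics described above, the following Python sketch implements the continuous-relaxation baseline that the abstract compares against: each vertex carries a continuous spin in [-1, 1], the spins descend the relaxed Ising energy under their local fields, and the final configuration is rounded to a cut. The function name, update rule, step size, and rounding below are illustrative assumptions, not the authors' exact local tensor update (see [1] for that).

```python
# A minimal sketch (not the authors' exact algorithm) of relaxing continuous
# spins toward a low-energy Ising configuration for MAXCUT, in the spirit of
# the gradient-descent-on-a-relaxation baseline discussed in the abstract.
import numpy as np

def maxcut_spin_relaxation(adj, steps=200, eta=0.1, seed=0):
    """Minimize the relaxed Ising energy H(s) = s^T A s over s in [-1, 1]^n
    by projected gradient descent, then round the spins to a +/-1 cut."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    s = rng.uniform(-0.1, 0.1, size=n)            # small random initial spins
    for _ in range(steps):
        field = adj @ s                           # local field on each spin
        s = np.clip(s - eta * field, -1.0, 1.0)   # descend, stay in the box
    s_round = np.where(s >= 0, 1.0, -1.0)         # round to a +/-1 assignment
    # Cut value = sum over edges (i, j) of w_ij * (1 - s_i * s_j) / 2;
    # for a symmetric adjacency matrix this equals the expression below.
    cut = 0.25 * (adj.sum() - s_round @ adj @ s_round)
    return s_round, cut

# Toy instance: a 4-cycle, whose maximum cut value is 4.
adj = np.array([[0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [1., 0., 1., 0.]])
partition, cut_value = maxcut_spin_relaxation(adj)
print(partition, cut_value)   # e.g. [ 1. -1.  1. -1.] 4.0
```

On this toy instance the relaxation recovers the optimal cut. Per the abstract, the local tensor method replaces the plain gradient step with local tensor update rules and consistently outperformed this gradient-descent baseline on the instances tested.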
Related papers
- Dynamic Anisotropic Smoothing for Noisy Derivative-Free Optimization [0.0]
We propose a novel algorithm that extends the methods of ball smoothing and Gaussian smoothing for noisy derivative-free optimization.
The algorithm dynamically adapts the shape of the smoothing kernel to approximate the Hessian of the objective function around a local optimum.
arXiv Detail & Related papers (2024-05-02T21:04:20Z)
- PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates [17.777466668123886]
We introduce PROMISE (Preconditioned Optimization Methods by Incorporating Scalable Curvature Estimates), a suite of sketching-based preconditioned gradient algorithms.
PROMISE includes preconditioned versions of SVRG, SAGA, and Katyusha.
arXiv Detail & Related papers (2023-09-05T07:49:10Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Convergence Rates of Two-Time-Scale Gradient Descent-Ascent Dynamics for Solving Nonconvex Min-Max Problems [2.0305676256390934]
We characterize the finite-time performance of the continuous-time variant of the simultaneous gradient descent-ascent algorithm.
Our results on the behavior of continuous-time algorithm may be used to enhance the convergence properties of its discrete-time counterpart.
arXiv Detail & Related papers (2021-12-17T15:51:04Z)
- Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient stochastic distributed optimization, in which machines can independently compute gradients and cooperate to jointly optimize over their local objective functions.
Our algorithm is based on a new concentration technique, and its sample complexity matches the best known theoretical bounds for this setting.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
arXiv Detail & Related papers (2020-12-28T17:19:32Z)
- IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)