Mean Field Approximation for solving QUBO problems
- URL: http://arxiv.org/abs/2106.03238v1
- Date: Sun, 6 Jun 2021 20:35:28 GMT
- Title: Mean Field Approximation for solving QUBO problems
- Authors: Máté Tibor Veszeli and Gábor Vattay
- Abstract summary: We show that the statistical physics approach and the quantum mechanical approach in the mean field annealing give the same result.
Our methods consist of a set of simple gradient-based minimizations with continuous variables and are thus easy to simulate.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Quadratic Unconstrained Binary Optimization (QUBO) problems are NP-hard;
thus, no efficient algorithms for them are known. There are
exact methods like the Branch-and-Bound algorithm for smaller problems, and for
larger ones, many good approximations like stochastic simulated annealing for
discrete variables or the mean field annealing for continuous variables. This
paper will show that the statistical physics approach and the quantum
mechanical approach in the mean field annealing give the same result. We
examined the Ising problem, which is an alternative formulation of the QUBO
problem. Our methods consist of a set of simple gradient-based minimizations
with continuous variables and are thus easy to simulate. We benchmarked our
methods by solving the Maximum Cut problem on the G-set graphs. On many graphs,
we achieved the best-known cut value.
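As a rough illustration of the approach described in the abstract (not the authors' exact implementation), the statistical-physics mean-field annealing for Max-Cut can be sketched as follows; the temperature schedule, damping factor, and rounding rule below are illustrative choices:

```python
import numpy as np

def mean_field_maxcut(adj, t_start=2.0, t_end=0.01, steps=200, seed=0):
    """Anneal continuous magnetizations m_i in (-1, 1) with the damped
    mean-field update m_i <- (m_i + tanh(h_i / T)) / 2 while lowering T
    on a geometric schedule, then round to spins s_i = +/-1."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    J = -adj.astype(float)                  # antiferromagnetic couplings favour s_i != s_j
    m = rng.uniform(-0.1, 0.1, size=n)      # small random initial magnetizations
    for T in np.geomspace(t_start, t_end, steps):
        h = J @ m                           # local mean fields
        m = 0.5 * (m + np.tanh(h / T))      # damped self-consistency update
    s = np.where(m >= 0, 1.0, -1.0)         # round magnetizations to discrete spins
    cut = 0.25 * (adj.sum() - s @ adj @ s)  # cut value of the induced partition
    return s, cut

# Toy example: a 4-cycle, which is bipartite, so its maximum cut is 4.
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    adj[i, j] = adj[j, i] = 1.0
s, cut = mean_field_maxcut(adj)
```

Because the update only needs matrix-vector products and a `tanh`, each annealing step is cheap, which is what makes the continuous-variable formulation easy to simulate.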
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed method avoids the need for computationally expensive spectral initialization, using a simple gradient step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Quantum optimization algorithm based on multistep quantum computation [0.0]
We present a quantum algorithm for finding the minimum of a function based on multistep quantum computation.
In this algorithm, the dimension of the search space of the problem can be reduced exponentially step by step.
We have tested the algorithm for some continuous test functions.
arXiv Detail & Related papers (2023-06-30T01:58:23Z)
- Numerical Approximation in CFD Problems Using Physics Informed Machine Learning [0.0]
The thesis focuses on various techniques to find an alternate approximation method that could be universally used for a wide range of CFD problems.
The focus is on physics-informed machine learning techniques, where differential equations can be solved without any training on precomputed data.
The extreme learning machine (ELM) is a very fast neural network algorithm that achieves its speed at the cost of randomly assigned, untrained hidden parameters.
arXiv Detail & Related papers (2021-11-01T22:54:51Z)
- Converting ADMM to a Proximal Gradient for Convex Optimization Problems [4.56877715768796]
In sparse estimation, such as fused lasso and convex clustering, we apply either the proximal gradient method or the alternating direction method of multipliers (ADMM) to solve the problem.
This paper proposes a general method for converting the ADMM solution to the proximal gradient method, assuming that the constraints and objectives are strongly convex.
We show by numerical experiments that we can obtain a significant improvement in terms of efficiency.
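For context on the proximal gradient method named in this entry, here is a minimal ISTA sketch for the lasso; it illustrates only the generic proximal-gradient iteration, not the paper's ADMM-to-proximal-gradient conversion:

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_lasso(A, b, lam, steps=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)               # gradient step on the least-squares term
        x = soft_threshold(x - grad / L, lam / L)  # proximal step on the l1 term
    return x

# With A = I the lasso solution is just soft-thresholding of b.
x = ista_lasso(np.eye(3), np.array([3.0, 0.5, -2.0]), lam=1.0)
```

Each iteration is a gradient step on the smooth part followed by the closed-form proximal map of the l1 penalty, which is what makes the method attractive for fused lasso and convex clustering.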
arXiv Detail & Related papers (2021-04-22T07:41:12Z)
- Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations, as in the RPM algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
- Solving Inequality-Constrained Binary Optimization Problems on Quantum Annealer [0.966840768820136]
We propose a new method for solving binary optimization problems under inequality constraints using a quantum annealer.
In this study, we employ the alternating direction method of multipliers.
We found that our method runs faster than the exact solver on various quadratic knapsack problems (QKPs) defined on dense graphs.
arXiv Detail & Related papers (2020-12-11T04:30:16Z)
- Boosting Data Reduction for the Maximum Weight Independent Set Problem Using Increasing Transformations [59.84561168501493]
We introduce new generalized data reduction and transformation rules for the maximum weight independent set problem.
Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later in the algorithm.
Our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than solvers DynWVC and HILS.
arXiv Detail & Related papers (2020-08-12T08:52:50Z)
- A Hölderian backtracking method for min-max and min-min problems [0.0]
We present a new algorithm to solve min-max or min-min problems outside the convex setting.
We use rigidity assumptions, ubiquitous in learning, making our method applicable to many optimization problems.
arXiv Detail & Related papers (2020-07-17T08:12:31Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
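As background for the conditional gradient steps this entry builds on, here is a minimal deterministic Frank-Wolfe sketch over the probability simplex; the paper's stochastic constraint sampling, variance reduction, and smoothing are not reproduced here:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Minimize a smooth convex f over the probability simplex with
    conditional gradient (Frank-Wolfe) steps: each iteration solves the
    linear subproblem min_s <grad(x), s> over the simplex, whose solution
    is a one-hot vertex, then moves by a convex combination, so iterates
    stay feasible without any projection."""
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # linear minimization oracle: best vertex
        gamma = 2.0 / (k + 2.0)        # classical step-size schedule
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: f(x) = ||x - c||^2 with c in the simplex; the minimizer is c.
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
```

The appeal of the conditional gradient template is that only a cheap linear subproblem is solved per iteration, which is what makes processing a subset of the constraints at each step attractive.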
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics [78.46689176407936]
We show that families of algorithms fail to produce nearly optimal solutions with high probability.
For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory.
arXiv Detail & Related papers (2020-04-25T05:45:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information it contains and is not responsible for any consequences of its use.