Quantum-Inspired Approximations to Constraint Satisfaction Problems
- URL: http://arxiv.org/abs/2212.04016v1
- Date: Thu, 8 Dec 2022 00:40:56 GMT
- Title: Quantum-Inspired Approximations to Constraint Satisfaction Problems
- Authors: S. Andrew Lanham
- Abstract summary: This paper presents new algorithms that produce estimates for satisfying configurations using methods from Boolean Fourier analysis.
The approach is broadly inspired by the quantum amplitude amplification algorithm.
We demonstrate that satisfying solutions may be retrieved in a process analogous to quantum measurement made efficient by sparsity in the Fourier domain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two contrasting algorithmic paradigms for constraint satisfaction problems
are successive local explorations of neighboring configurations versus
producing new configurations using global information about the problem (e.g.
approximating the marginals of the probability distribution which is uniform
over satisfying configurations). This paper presents new algorithms for the
latter framework, ultimately producing estimates for satisfying configurations
using methods from Boolean Fourier analysis. The approach is broadly inspired
by the quantum amplitude amplification algorithm in that it maximally increases
the amplitude of the approximation function over satisfying configurations
given sequential refinements. We demonstrate that satisfying solutions may be
retrieved in a process analogous to quantum measurement made efficient by
sparsity in the Fourier domain, and present a complete solver construction
using this novel approximation. Freedom in the refinement strategy invites
further opportunities to design solvers in an evolutionary computing framework.
Results demonstrate competitive performance against local solvers for the
Boolean satisfiability (SAT) problem, encouraging future work in understanding
the connections between Boolean Fourier analysis and constraint satisfaction.
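To make the Fourier-analytic ideas concrete, the following is a minimal Python sketch, not the paper's solver: it brute-forces the Fourier coefficients of the satisfying-assignment indicator for a hand-picked two-clause CNF, verifies that a single clause's indicator is supported only on subsets of that clause's variables (the kind of sparsity the abstract exploits), and then reads out a solution by greedily following marginals as a crude stand-in for the measurement-like retrieval step. The formula, the +1/-1 encoding, and the greedy readout are illustrative assumptions.

```python
# Minimal sketch (not the paper's solver). The two-clause CNF, the +1/-1
# encoding (x_i = +1 means True), and the greedy readout are illustrative
# assumptions used only to show Fourier sparsity and marginal-following.
import itertools
from math import prod

n = 3
# Clauses as signed literals: +i requires x_i = +1 (True), -i requires x_i = -1.
# Example CNF: (x1 OR x2) AND (NOT x2 OR x3).
clauses = [[+1, +2], [-2, +3]]

def clause_indicator(x, clause):
    """1.0 if assignment x (dict: variable -> +/-1) satisfies the clause."""
    return float(any(x[abs(lit)] == (1 if lit > 0 else -1) for lit in clause))

def f(x):
    """Indicator that x satisfies every clause."""
    return float(all(clause_indicator(x, c) for c in clauses))

def fourier_coefficient(func, S):
    """f_hat(S) = E_x[ func(x) * prod_{i in S} x_i ] over the uniform cube."""
    total = 0.0
    for bits in itertools.product([+1, -1], repeat=n):
        x = {i + 1: bits[i] for i in range(n)}
        total += func(x) * prod(x[i] for i in S)
    return total / 2 ** n

subsets = [S for r in range(n + 1) for S in itertools.combinations(range(1, n + 1), r)]

# A clause on k variables has at most 2^k nonzero coefficients, all on subsets
# of its own variables -- the Fourier sparsity the abstract refers to.
g = lambda x: clause_indicator(x, clauses[0])
print("support of clause 1:", [S for S in subsets if abs(fourier_coefficient(g, S)) > 1e-12])

# Measurement-like readout: fix variables one at a time to whichever sign keeps
# more mass of the satisfying-assignment indicator, i.e. follow its marginals.
assignment = {}
for i in range(1, n + 1):
    masses = {}
    for s in (+1, -1):
        trial = {**assignment, i: s}
        free = [j for j in range(1, n + 1) if j not in trial]
        masses[s] = sum(
            f({**trial, **dict(zip(free, bits))})
            for bits in itertools.product([+1, -1], repeat=len(free))
        )
    assignment[i] = max(masses, key=masses.get)

print("recovered assignment:", assignment, "-> satisfies formula:", bool(f(assignment)))
```

The brute-force expectations above cost 2^n evaluations per coefficient; the point of working with a sparse Fourier representation is precisely to avoid that blow-up for larger instances.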
Related papers
- A neural network approach for solving the Monge-Ampère equation with transport boundary condition [0.0]
This paper introduces a novel neural network-based approach to solving the Monge-Ampère equation with the transport boundary condition.
We leverage multilayer perceptron networks to learn approximate solutions by minimizing a loss function that encompasses the equation's residual, boundary conditions, and convexity constraints.
arXiv Detail & Related papers (2024-10-25T11:54:00Z)
- Analysis of the Non-variational Quantum Walk-based Optimisation Algorithm [0.0]
This paper introduces in detail a non-variational quantum algorithm designed to solve a wide range of optimisation problems.
The algorithm returns optimal and near-optimal solutions from repeated preparation and measurement of an amplified state.
arXiv Detail & Related papers (2024-07-29T13:54:28Z)
- Hybrid discrete-continuous compilation of trapped-ion quantum circuits with deep reinforcement learning [1.7087507417780985]
We show that we can significantly reduce the size of relevant quantum circuits for trapped-ion computing.
Our framework can also be applied to an experimental setup whose goal is to reproduce an unknown unitary process.
arXiv Detail & Related papers (2023-07-12T14:55:28Z)
- Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Stochastic Compositional Gradient Descent under Compositional constraints [13.170519806372075]
We study constrained optimization problems where the objective and constraint functions are convex and expressed as compositions of functions.
The problem arises in fair classification/regression and in the design of queuing systems.
We prove that the proposed algorithm is guaranteed to find the optimal and feasible solution almost surely.
arXiv Detail & Related papers (2020-12-17T05:38:37Z)
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient algorithm which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
- Constrained Combinatorial Optimization with Reinforcement Learning [0.30938904602244344]
This paper presents a framework to tackle constrained combinatorial optimization problems using deep Reinforcement Learning (RL).
We extend the Neural Combinatorial Optimization (NCO) theory in order to deal with constraints in its formulation.
In that context, the solution is iteratively constructed based on interactions with the environment (a generic version of this loop is sketched below).
arXiv Detail & Related papers (2020-06-22T03:13:07Z)
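For the last entry above (Constrained Combinatorial Optimization with Reinforcement Learning), one generic way to let an iteratively constructed solution respect constraints is to fold violations into the reward as a penalty. The sketch below is a toy under stated assumptions, not the paper's NCO architecture: a hypothetical knapsack instance, an independent Bernoulli selection policy, and a fixed penalty coefficient, trained with plain REINFORCE.

```python
# Generic penalty-based constraint handling in a REINFORCE loop -- a toy sketch,
# not the paper's Neural Combinatorial Optimization architecture. The knapsack
# instance, Bernoulli policy, and penalty coefficient are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
values = np.array([6.0, 5.0, 4.0, 3.0, 2.0])    # hypothetical item values
weights = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # hypothetical item weights
capacity, penalty = 7.0, 10.0                   # capacity constraint, penalty coefficient

logits = np.zeros(len(values))  # one logit per item: independent Bernoulli selection policy

def reward(x):
    """Objective value minus a penalty proportional to the capacity violation."""
    violation = max(0.0, float(weights @ x) - capacity)
    return float(values @ x) - penalty * violation

lr, batch = 0.1, 64
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-logits))                          # selection probabilities
    xs = (rng.random((batch, len(values))) < p).astype(float)  # sampled candidate solutions
    rs = np.array([reward(x) for x in xs])
    adv = rs - rs.mean()                                       # baseline-subtracted returns
    # REINFORCE: d/dlogit of log Bernoulli(x; sigmoid(logit)) is (x - p).
    logits += lr * (adv[:, None] * (xs - p)).mean(axis=0)

chosen = (1.0 / (1.0 + np.exp(-logits)) > 0.5).astype(float)
print("chosen items:", chosen, "| weight:", float(weights @ chosen), "| value:", float(values @ chosen))
```

When a fixed penalty either over-constrains the search or still permits violations, the usual refinement is to adapt the coefficient during training, e.g. with Lagrangian-style updates.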