Testing Unsatisfiability of Constraint Satisfaction Problems via Tensor
Products
- URL: http://arxiv.org/abs/2002.03766v1
- Date: Fri, 31 Jan 2020 18:06:52 GMT
- Title: Testing Unsatisfiability of Constraint Satisfaction Problems via Tensor
Products
- Authors: Daya Gaur and Muhammad Khan
- Abstract summary: We show how to use the tensor decomposition to compute a proof of unsatisfiability efficiently and in parallel.
One decomposition yields proofs of unsatisfiability in half the time without sacrificing the quality.
Our method is applicable to arbitrary CSPs using the well known dual and hidden variable transformations from an arbitrary CSP to a binary CSP.
- Score: 0.8528384027684192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the design of stochastic local search methods to prove
unsatisfiability of a constraint satisfaction problem (CSP). For a binary CSP,
such methods have been designed using the microstructure of the CSP. Here, we
develop a method to decompose the microstructure into graph tensors. We show
how to use the tensor decomposition to compute a proof of unsatisfiability
efficiently and in parallel. We also offer substantial empirical evidence that
our approach improves the praxis. For instance, one decomposition yields proofs
of unsatisfiability in half the time without sacrificing the quality. Another
decomposition is twenty times faster and is effective three-tenths of the time
compared to the prior method. Our method is applicable to arbitrary CSPs using
the well known dual and hidden variable transformations from an arbitrary CSP
to a binary CSP.
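To make the microstructure-and-clique view in the abstract concrete, here is a minimal sketch (not the authors' implementation): it builds the microstructure graph of a small binary CSP and checks for an $n$-clique, where $n$ is the number of variables. The helper name `microstructure`, the toy domains and constraints, and the use of NetworkX are illustrative assumptions.

```python
# Minimal sketch, assuming a binary CSP given as domains plus tables of
# allowed value pairs; not the authors' code.
import itertools
import networkx as nx

def microstructure(domains, constraints):
    """Vertices are (variable, value) pairs; vertices of distinct variables
    are adjacent iff the joint assignment is allowed by the constraint
    between the two variables (an unconstrained pair is always compatible)."""
    g = nx.Graph()
    for x, dom in domains.items():
        g.add_nodes_from((x, a) for a in dom)
    for x, y in itertools.combinations(domains, 2):
        for a in domains[x]:
            for b in domains[y]:
                if (x, y) in constraints:
                    ok = (a, b) in constraints[(x, y)]
                elif (y, x) in constraints:
                    ok = (b, a) in constraints[(y, x)]
                else:
                    ok = True  # no constraint between x and y
                if ok:
                    g.add_edge((x, a), (y, b))
    return g

# Toy CSP: three pairwise-different variables over a 2-value domain (unsatisfiable).
domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}
neq = {(0, 1), (1, 0)}
constraints = {("x", "y"): neq, ("y", "z"): neq, ("x", "z"): neq}

g = microstructure(domains, constraints)
n = len(domains)                                 # number of variables
omega = max(len(c) for c in nx.find_cliques(g))  # clique number of the microstructure
print(omega < n)  # True: no n-clique exists, so the CSP is unsatisfiable
```

For context (a standard graph-product fact, not a claim about the paper's exact algorithm): the categorical/tensor product of loopless graphs, available as `nx.tensor_product`, satisfies $\omega(G \times H) \le \min(\omega(G), \omega(H))$, so a factor of the microstructure with no $n$-clique already certifies unsatisfiability, and separate factors can be examined in parallel.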
Related papers
- Efficiently Escaping Saddle Points for Non-Convex Policy Optimization [40.0986936439803]
Policy gradient (PG) is widely used in reinforcement learning due to its scalability and good performance.
We propose a variance-reduced second-order method that uses second-order information in the form of Hessian vector products (HVP) and converges to an approximate second-order stationary point (SOSP) with sample complexity of $\tilde{O}(\epsilon^{-3})$.
arXiv Detail & Related papers (2023-11-15T12:36:45Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Cyclic Block Coordinate Descent With Variance Reduction for Composite
Nonconvex Optimization [26.218670461973705]
We propose cyclic block coordinate descent methods with non-asymptotic gradient norm guarantees for composite nonconvex problems.
Our results demonstrate the efficacy of the proposed cyclic variance-reduced scheme in training deep neural nets.
arXiv Detail & Related papers (2022-12-09T19:17:39Z) - Incremental Updates of Generalized Hypertree Decompositions [7.082768797784365]
We make the first steps toward solving the problem of updating the decomposition of a CSP $P$ so that it becomes a valid decomposition of a new CSP $P'$ produced by some modification of $P$.
Even though the problem is hard in theory, we propose and implement a framework for effectively updating GHDs.
arXiv Detail & Related papers (2022-09-21T14:12:16Z) - Finite-rate sparse quantum codes aplenty [1.1279808969568252]
We introduce a methodology for generating random multi-qubit stabilizer codes based on solving a constraint satisfaction problem (CSP) on random bipartite graphs.
Using a state-of-the-art CSP solver, we obtain convincing evidence for the existence of a satisfiability threshold.
We observe that the sparse codes found in the satisfiable phase practically achieve the channel capacity for erasure noise.
arXiv Detail & Related papers (2022-07-07T20:39:39Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Recursive Causal Structure Learning in the Presence of Latent Variables
and Selection Bias [27.06618125828978]
We consider the problem of learning the causal MAG of a system from observational data in the presence of latent variables and selection bias.
We propose a novel computationally efficient constraint-based method that is sound and complete.
We provide experimental results to compare the proposed approach with the state of the art on both synthetic and real-world structures.
arXiv Detail & Related papers (2021-10-22T19:49:59Z) - A Bregman Method for Structure Learning on Sparse Directed Acyclic
Graphs [84.7328507118758]
We develop a Bregman proximal gradient method for structure learning.
We measure the impact of curvature against a highly nonlinear iteration.
We test our method on various synthetic and real data sets.
arXiv Detail & Related papers (2020-11-05T11:37:44Z) - Large-Scale Methods for Distributionally Robust Optimization [53.98643772533416]
We prove that our algorithms require a number of gradient evaluations independent of the training set size and the number of parameters.
Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
arXiv Detail & Related papers (2020-10-12T17:41:44Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient
Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without the light-tails assumption on the noise.
arXiv Detail & Related papers (2020-05-21T17:05:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.