Sparse Hashing for Scalable Approximate Model Counting: Theory and
Practice
- URL: http://arxiv.org/abs/2004.14692v1
- Date: Thu, 30 Apr 2020 11:17:26 GMT
- Title: Sparse Hashing for Scalable Approximate Model Counting: Theory and
Practice
- Authors: Kuldeep S. Meel and S. Akshay
- Abstract summary: Given a CNF formula F on n variables, the problem of model counting or #SAT is to compute the number of satisfying assignments of F.
Recent years have witnessed a surge of effort towards developing efficient algorithmic techniques.
- Score: 36.8421113576893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a CNF formula F on n variables, the problem of model counting or #SAT
is to compute the number of satisfying assignments of F. Model counting is a
fundamental but hard problem in computer science with varied applications.
Recent years have witnessed a surge of effort towards developing efficient
algorithmic techniques that combine the classical 2-universal hashing with the
remarkable progress in SAT solving over the past decade. These techniques
augment the CNF formula F with random XOR constraints and invoke an NP oracle
repeatedly on the resultant CNF-XOR formulas. In practice, the calls to the NP
oracle are replaced by calls to a SAT solver, whose runtime performance is
adversely affected by the size of the XOR constraints. The standard
construction of 2-universal hash functions chooses every variable with
probability p = 1/2, leading to XOR constraints of size n/2 in expectation.
Consequently, the challenge is to design sparse hash functions in which
variables are chosen with smaller probability, yielding smaller XOR
constraints.
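To make the construction concrete, here is a minimal Python sketch of how the standard p = 1/2 hash is realized as random XOR constraints; the helper name and the (variables, parity) representation are illustrative assumptions, not the paper's code:

```python
import random

def random_xor_constraints(n, m, p=0.5, seed=None):
    """Sample m random XOR constraints over Boolean variables 1..n.

    Each constraint is a pair (vars, parity) asserting that the XOR of the
    chosen variables equals the parity bit.  With p = 1/2 this realizes the
    standard 2-universal hash h : {0,1}^n -> {0,1}^m, and every XOR contains
    about n/2 variables in expectation.
    """
    rng = random.Random(seed)
    constraints = []
    for _ in range(m):
        chosen = [v for v in range(1, n + 1) if rng.random() < p]
        parity = rng.randint(0, 1)
        constraints.append((chosen, parity))
    return constraints

# Conjoining these constraints with the CNF formula F yields the CNF-XOR
# formulas on which the SAT solver is invoked; e.g. for n = 100, m = 10,
# each of the 10 XORs touches roughly 50 variables in expectation.
dense_xors = random_xor_constraints(n=100, m=10)
```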
In this paper, we address this challenge from theoretical and practical
perspectives. First, we formalize a relaxation of universal hashing, called
concentrated hashing, and establish a novel and beautiful connection between
concentration measures of these hash functions and isoperimetric inequalities
on Boolean hypercubes. This allows us to obtain tight bounds on the variance
and dispersion index, and to show that p = O(log m / m) suffices for the
design of sparse hash functions from {0, 1}^n to {0, 1}^m. We then use sparse
hash functions belonging to this concentrated hash family to develop new
approximate counting algorithms. A comprehensive experimental evaluation of our
algorithm on 1893 benchmarks demonstrates that usage of sparse hash functions
can lead to significant speedups.
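The sparse regime admits an equally small sketch. The snippet below only changes the inclusion probability to p = c·log(m)/m; the constant c and the cap at 1/2 are illustrative assumptions, since the paper derives the admissible constants from its concentration bounds:

```python
import math
import random

def sparse_xor_constraints(n, m, c=2.0, seed=None):
    """Sample m XOR constraints with inclusion probability p = c*log(m)/m.

    This mirrors the sparse, concentrated-hashing regime: each XOR now
    contains about n*c*log(m)/m variables instead of n/2, which is what
    makes the resulting CNF-XOR formulas easier for the underlying SAT
    solver.
    """
    rng = random.Random(seed)
    p = min(0.5, c * math.log(max(m, 2)) / m)  # never exceed the dense p = 1/2
    constraints = []
    for _ in range(m):
        chosen = [v for v in range(1, n + 1) if rng.random() < p]
        parity = rng.randint(0, 1)
        constraints.append((chosen, parity))
    return constraints

# For n = 100 and m = 20, a dense XOR has about 50 variables in expectation,
# while a sparse one has about 100 * 2 * ln(20) / 20, i.e. roughly 30.
sparse_xors = sparse_xor_constraints(n=100, m=20)
```

As in the dense case, a hashing-based counter would conjoin these XORs with F and invoke the SAT solver repeatedly on the resulting CNF-XOR formulas; the sketch only isolates the part that changes, namely how the hash is sampled.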
Related papers
- A compact QUBO encoding of computational logic formulae demonstrated on cryptography constructions [0.0]
We aim to advance the state-of-the-art in Quadratic Unconstrained Binary Optimization formulation with a focus on cryptography algorithms.
We consider the most widespread cryptography algorithms including AES-128/192/256, MD5, SHA1 and SHA256.
For each of these, we achieved QUBO instances reduced by thousands of logical variables compared to previously published results, while keeping the QUBO matrix sparse and the magnitude of the coefficients low.
arXiv Detail & Related papers (2024-09-10T18:46:26Z)
- Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
The recently proposed quantum algorithm arXiv:2206.14999 is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that our algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z)
- Scalable network reconstruction in subquadratic time [0.0]
We present a general algorithm applicable to a broad range of reconstruction problems that significantly outperforms this quadratic baseline.
Our algorithm relies on a second neighbor search that produces the best edge candidates with high probability.
We show that our algorithm achieves a performance that is many orders of magnitude faster than the quadratic baseline.
arXiv Detail & Related papers (2024-01-02T19:00:13Z)
- When can you trust feature selection? -- I: A condition-based analysis of LASSO and generalised hardness of approximation [49.1574468325115]
We show that no (randomised) algorithm can determine the correct support sets (with probability $> 1/2$) of minimisers of LASSO when reading approximate input.
For ill-posed inputs, the algorithm runs forever; hence, it never produces a wrong answer.
For any algorithm defined on an open set containing a point with infinite condition number, there is an input for which the algorithm will either run forever or produce a wrong answer.
arXiv Detail & Related papers (2023-12-18T18:29:01Z)
- Quantum Sparse Coding [5.130440339897477]
We develop a quantum-inspired algorithm for sparse coding.
The emergence of quantum computers and Ising machines can potentially lead to more accurate estimations.
We conduct numerical experiments with simulated data on LightSolver's quantum-inspired digital platform.
arXiv Detail & Related papers (2022-09-08T13:00:30Z)
- Quantum Goemans-Williamson Algorithm with the Hadamard Test and Approximate Amplitude Constraints [62.72309460291971]
We introduce a variational quantum algorithm for the Goemans-Williamson algorithm that uses only $n+1$ qubits.
Efficient optimization is achieved by encoding the objective matrix as a properly parameterized unitary conditioned on an auxiliary qubit.
We demonstrate the effectiveness of our protocol by devising an efficient quantum implementation of the Goemans-Williamson algorithm for various NP-hard problems.
arXiv Detail & Related papers (2022-06-30T03:15:23Z)
- Quantum Topological Data Analysis with Linear Depth and Exponential Speedup [9.820545418277305]
We completely overhaul the QTDA algorithm to achieve an improved exponential speedup and a depth complexity of $O(n^4/(\epsilon^2 \delta))$.
We present a theoretical error analysis, and the circuit and computational time and depth complexities for Betti number estimation.
arXiv Detail & Related papers (2021-08-05T18:56:17Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Berrut Approximated Coded Computing: Straggler Resistance Beyond Polynomial Computing [34.69732430310801]
We propose Berrut Approximated Coded Computing (BACC) as an alternative approach to deal with the straggler effect.
BACC is proven to be numerically stable with low computational complexity.
In particular, BACC is used to train a deep neural network on a cluster of servers.
arXiv Detail & Related papers (2020-09-17T14:23:38Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design, and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings, as they require either intolerable running time or excessive computational resources.
We propose NeuSearcher, which leverages knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.