A Lower Bound on DNNF Encodings of Pseudo-Boolean Constraints
- URL: http://arxiv.org/abs/2101.01953v1
- Date: Wed, 6 Jan 2021 10:25:22 GMT
- Title: A Lower Bound on DNNF Encodings of Pseudo-Boolean Constraints
- Authors: Alexis de Colnet
- Abstract summary: Two major considerations when encoding pseudo-Boolean constraints into SAT are the size of the encoding and its propagation strength.
It has been shown that there exist PB-constraints whose ordered BDD (OBDD) representations, and thus the inferred CNF encodings, all have exponential size.
- Score: 3.42658286826597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Two major considerations when encoding pseudo-Boolean (PB) constraints into
SAT are the size of the encoding and its propagation strength, that is, the
guarantee that it has a good behaviour under unit propagation. Several
encodings with propagation strength guarantees rely upon prior compilation of
the constraints into DNNF (decomposable negation normal form), BDD (binary
decision diagram), or some other sub-variants. However it has been shown that
there exist PB-constraints whose ordered BDD (OBDD) representations, and thus
the inferred CNF encodings, all have exponential size. Since DNNFs are more
succinct than OBDDs, preferring encodings via DNNF to avoid size explosion
seems a legitimate choice. Yet in this paper, we prove the existence of
PB-constraints whose DNNFs all require exponential size.
Related papers
- Clip-and-Verify: Linear Constraint-Driven Domain Clipping for Accelerating Neural Network Verification [9.733843486234878]
State-of-the-art neural network (NN) verifiers demonstrate that applying the branch-and-bound (BaB) procedure with fast bounding techniques plays a key role in tackling many challenging verification properties. We introduce the linear constraint-driven clipping framework, a class of scalable and efficient methods designed to enhance the efficacy of NN verifiers.
arXiv Detail & Related papers (2025-12-11T19:59:37Z) - Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation [50.53301323864253]
Control barrier functions (CBFs) are a popular tool for safety certification of nonlinear dynamical control systems. We present a novel framework for verifying neural CBFs based on piecewise linear upper and lower bounds on the conditions required for a neural network to be a CBF. Our approach scales to larger neural networks than state-of-the-art verification procedures for CBFs.
arXiv Detail & Related papers (2025-11-09T11:51:15Z) - Toward Uncertainty-Aware and Generalizable Neural Decoding for Quantum LDPC Codes [0.9453554184019106]
Quantum error correction (QEC) is essential for scalable quantum computing. We propose QuBA, a Bayesian graph neural decoder that integrates both dot-product and multi-head attention. We further develop SAGU (Sequential Aggregate Generalization under Uncertainty), a multi-code training framework with enhanced cross-domain robustness.
arXiv Detail & Related papers (2025-10-05T01:08:39Z) - Compilation and Fast Model Counting beyond CNF [31.620928650659586]
This paper strengthens our theoretical knowledge of what classes of functions can be efficiently transformed, or compiled, into d-DNNF.
The constraints in question are all functions representable by constant-width ordered binary decision diagrams (OBDDs) for all variable orderings.
We give a more efficient FPT algorithm for model counting that applies to a sub-family of the constraints and does not require compilation.
arXiv Detail & Related papers (2025-02-01T14:00:04Z) - A shallow dive into the depths of non-termination checking for C programs [3.4144415576897624]
Unintended Non-Termination (NT) is common in real-world software development.
We propose a sound and efficient technique for NT checking that is also effective on real-world software.
Our experiments on wide-ranging software benchmarks show that the technique outperforms state-of-the-art NT checkers.
arXiv Detail & Related papers (2024-09-04T22:47:46Z) - Approximate quantum error correcting codes from conformal field theory [0.0]
We consider generic 1+1D CFT codes under extensive local dephasing channels.
We show that a CFT code with continuous symmetry saturates a bound on the recovery fidelity for covariant codes.
arXiv Detail & Related papers (2024-06-13T19:40:36Z) - Structured d-DNNF Is Not Closed Under Negation [0.0]
Both structured d-DNNF and SDD can be exponentially more succinct than OBDD.
We show that structured d-DNNF does not support polytime negation, disjunction, or existential operations.
We also show that there are functions with a structured d-DNNF of equivalent size but no SDD representation of comparable size.
arXiv Detail & Related papers (2024-02-07T13:31:59Z) - Belief Propagation Decoding of Quantum LDPC Codes with Guided Decimation [55.8930142490617]
We propose a decoder for QLDPC codes based on BP guided decimation (BPGD).
BPGD significantly reduces the BP failure rate due to non-convergence.
arXiv Detail & Related papers (2023-12-18T05:58:07Z) - Tokenization and the Noiseless Channel [71.25796813073399]
Good tokenizers lead to efficient channel usage, where the channel is the means by which some input is conveyed to the model.
In machine translation, we find that across multiple tokenizers, the Rényi entropy with $\alpha = 2.5$ has a very strong correlation with BLEU: $0.78$ in comparison to just $-0.32$ for compressed length.
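The Rényi entropy of a token distribution is straightforward to compute. A minimal sketch with a made-up token stream follows (the tokenizers, corpora, and correlation with BLEU from the paper are of course not reproduced here):

```python
import math
from collections import Counter

def renyi_entropy(probs, alpha):
    """Rényi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha)."""
    if alpha == 1.0:
        # Shannon entropy, the alpha -> 1 limit
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

# Hypothetical token stream; in practice this would be a tokenizer's
# output over a corpus.
tokens = "the cat sat on the mat the end".split()
counts = Counter(tokens)
total = sum(counts.values())
probs = [c / total for c in counts.values()]

h25 = renyi_entropy(probs, 2.5)  # the paper's alpha = 2.5 setting
```

For a uniform distribution over $k$ tokens, $H_\alpha = \log k$ for every $\alpha$, which is a quick sanity check on the implementation.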
arXiv Detail & Related papers (2023-06-29T10:32:09Z) - Compacting Binary Neural Networks by Sparse Kernel Selection [58.84313343190488]
This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
arXiv Detail & Related papers (2023-03-25T13:53:02Z) - Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z) - Understanding Deep Learning via Decision Boundary [81.49114762506287]
We show that the neural network with lower decision boundary (DB) variability has better generalizability.
Two new notions, algorithm DB variability and $(\epsilon, \eta)$-data DB variability, are proposed to measure the decision boundary variability.
arXiv Detail & Related papers (2022-06-03T11:34:12Z) - DPO: Dynamic-Programming Optimization on Hybrid Constraints [26.704502486686128]
In Bayesian inference, the most probable explanation (MPE) problem requests a variable instantiation with the highest probability given some evidence.
It is known that Boolean MPE can be solved via reduction to (weighted partial) MaxSAT.
We build on DPMC and introduce DPO, a dynamic-programming optimization framework that exactly solves MPE.
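The MPE problem described above can be stated in a few lines. Here is a toy sketch (not the paper's algorithm, which avoids enumeration): given hypothetical per-literal weights and evidence fixing one variable, find the highest-weight satisfying assignment of a small Boolean formula by brute force.

```python
from itertools import product

# Hypothetical weights P(x_i = b) for three Boolean variables.
weight = {("x1", 1): 0.7, ("x1", 0): 0.3,
          ("x2", 1): 0.6, ("x2", 0): 0.4,
          ("x3", 1): 0.2, ("x3", 0): 0.8}

def formula(a):
    # Made-up constraint: (x1 OR x3) AND (NOT x1 OR x2)
    return (a["x1"] or a["x3"]) and ((not a["x1"]) or a["x2"])

best, best_w = None, -1.0
for bits in product([0, 1], repeat=2):           # evidence fixes x2 = 1
    a = {"x1": bits[0], "x2": 1, "x3": bits[1]}
    if formula(a):
        w = 1.0
        for v, b in a.items():
            w *= weight[(v, b)]                  # product of literal weights
        if w > best_w:
            best, best_w = a, w
```

Taking negative log weights turns this maximization into the (weighted partial) MaxSAT instance mentioned in the abstract; dynamic programming over a structured decomposition replaces the exponential enumeration.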
arXiv Detail & Related papers (2022-05-17T21:18:54Z) - SAT Encodings for Pseudo-Boolean Constraints Together With At-Most-One Constraints [5.62245362437303]
We study encodings of Pseudo-Boolean (PB) constraints, a common type of arithmetic constraint.
PB(AMO) encodings allow many more instances to be solved within a time limit, and solving time is improved by more than one order of magnitude in some cases.
arXiv Detail & Related papers (2021-10-15T12:53:01Z) - Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.