Structured d-DNNF Is Not Closed Under Negation
- URL: http://arxiv.org/abs/2402.04832v1
- Date: Wed, 7 Feb 2024 13:31:59 GMT
- Title: Structured d-DNNF Is Not Closed Under Negation
- Authors: Harry Vinall-Smeeth
- Abstract summary: Both structured d-DNNF and SDD can be exponentially more succinct than OBDD.
We show that structured d-DNNF does not support polytime negation, disjunction, or existential operations.
We also show that there are functions with an equivalent polynomial-sized structured d-DNNF but no such representation as an SDD.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Both structured d-DNNF and SDD can be exponentially more succinct than OBDD.
Moreover, SDD is essentially as tractable as OBDD. But this has left two
important open questions. Firstly, does OBDD support more tractable
transformations than structured d-DNNF? And secondly, is structured d-DNNF more
succinct than SDD? In this paper, we answer both questions in the affirmative.
For the first question we show that, unlike OBDD, structured d-DNNF does not
support polytime negation, disjunction, or existential quantification
operations. As a corollary, we deduce that there are functions with an
equivalent polynomial-sized structured d-DNNF but with no such representation
as an SDD, thus answering the second question. We also lift this second result
to arithmetic circuits (AC) to show a succinctness gap between PSDD and the
monotone AC analogue to structured d-DNNF.
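The contrast with OBDD rests on the fact that OBDD negation is a linear-time operation: complementing the function only requires swapping the two terminal nodes. The following is a minimal sketch (the `Node` class and helper functions are illustrative, not from the paper) of why this transformation is polytime.

```python
class Node:
    """An OBDD node: either a variable test with low/high children, or a leaf."""
    def __init__(self, var=None, low=None, high=None, value=None):
        self.var, self.low, self.high, self.value = var, low, high, value

    def is_leaf(self):
        return self.value is not None

def negate(node, cache=None):
    """Build the OBDD for the complement function.

    Each shared node is visited once (memoised by identity), so the
    result is produced in O(|nodes|) time and is never larger than
    the input -- the polytime negation OBDD supports but structured
    d-DNNF, by the paper's result, does not.
    """
    if cache is None:
        cache = {}
    if id(node) in cache:
        return cache[id(node)]
    if node.is_leaf():
        result = Node(value=not node.value)  # swap the terminal
    else:
        result = Node(var=node.var,
                      low=negate(node.low, cache),
                      high=negate(node.high, cache))
    cache[id(node)] = result
    return result

def evaluate(node, assignment):
    """Evaluate an OBDD on a dict mapping variable name -> bool."""
    while not node.is_leaf():
        node = node.high if assignment[node.var] else node.low
    return node.value

# x AND y as an OBDD with variable order x < y
leaf0, leaf1 = Node(value=False), Node(value=True)
y_node = Node(var="y", low=leaf0, high=leaf1)
and_bdd = Node(var="x", low=leaf0, high=y_node)
not_and = negate(and_bdd)  # represents NOT(x AND y)
```

For structured d-DNNF no analogous cheap traversal exists; the paper shows any negation procedure must incur an exponential blow-up in the worst case.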
Related papers
- Redundancy-Driven Top-$k$ Functional Dependency Discovery [5.126404609788543]
Functional dependencies (FDs) are basic constraints in relational databases and are used for many data management tasks. Most FD discovery algorithms find all valid dependencies, but this causes two problems. We propose SDP (Selective-Discovery-and-Prune), which discovers the top-$k$ FDs ranked by redundancy count.
arXiv Detail & Related papers (2026-01-15T07:16:01Z) - Nes2Net: A Lightweight Nested Architecture for Foundation Model Driven Speech Anti-spoofing [56.53218228501566]
Nested Res2Net (Nes2Net) is a lightweight back-end architecture designed to directly process high-dimensional features without DR layers.
We report a 22% performance improvement and an 87% back-end computational cost reduction over the state-of-the-art baseline.
arXiv Detail & Related papers (2025-04-08T04:11:28Z) - Unsupervised Chunking with Hierarchical RNN [62.15060807493364]
This paper introduces an unsupervised approach to chunking, a syntactic task that involves grouping words in a non-hierarchical manner.
We present a two-layer Hierarchical Recurrent Neural Network (HRNN) designed to model word-to-chunk and chunk-to-sentence compositions.
Experiments on the CoNLL-2000 dataset reveal a notable improvement over existing unsupervised methods, enhancing phrase F1 score by up to 6 percentage points.
arXiv Detail & Related papers (2023-09-10T02:55:12Z) - Weighted Context-Free-Language Ordered Binary Decision Diagrams [9.483290784772011]
This paper presents a new data structure called Weighted Context-Free-Language Ordered BDDs (WCFLOBDDs).
For some functions, WCFLOBDDs are exponentially more succinct than WBDDs.
We apply WCFLOBDDs in quantum-circuit simulation, and find that they perform better than WBDDs on certain benchmarks.
arXiv Detail & Related papers (2023-05-23T02:16:52Z) - Efficacy of noisy dynamical decoupling [0.0]
Dynamical decoupling (DD) refers to a well-established family of methods for error mitigation in quantum systems.
In the presence of noisy pulses, DD does not always mitigate errors.
It does so only when the noise added by the DD pulses does not outweigh the improved averaging away of the original background noise.
arXiv Detail & Related papers (2022-09-19T14:19:11Z) - Differentiable and Transportable Structure Learning [73.84540901950616]
We introduce D-Struct, which recovers transportability in the discovered structures through a novel architecture and loss function.
Because D-Struct remains differentiable, our method can be easily adopted in existing differentiable architectures.
arXiv Detail & Related papers (2022-06-13T17:50:53Z) - Belief Revision in Sentential Decision Diagrams [126.94029917018733]
We develop a general revision algorithm for SDDs based on a syntactic characterisation of Dalal revision.
Preliminary experiments on randomly generated knowledge bases show the advantages of performing revision directly within the SDD formalism.
arXiv Detail & Related papers (2022-01-20T11:01:41Z) - Recursive Bayesian Networks: Generalising and Unifying Probabilistic Context-Free Grammars and Dynamic Bayesian Networks [0.0]
Probabilistic context-free grammars (PCFGs) and dynamic Bayesian networks (DBNs) are widely used sequence models with complementary strengths and limitations.
We present Recursive Bayesian Networks (RBNs), which generalise and unify PCFGs and DBNs, combining their strengths and containing both as special cases.
arXiv Detail & Related papers (2021-11-02T19:21:15Z) - A Lower Bound on DNNF Encodings of Pseudo-Boolean Constraints [3.42658286826597]
Two major considerations when encoding pseudo-Boolean constraints into SAT are the size of the encoding and its propagation strength.
It has been shown that there exist PB-constraints whose ordered BDD (OBDD) representations, and thus the inferred CNF encodings, all have exponential size.
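The size concern is concrete even for the simplest pseudo-Boolean constraints. As a hypothetical illustration (not the encoding studied in the paper), the constraint $x_1 + \dots + x_n \le 1$ admits a naive pairwise CNF encoding of quadratic size:

```python
from itertools import combinations

def at_most_one(variables):
    """Pairwise CNF encoding of x_1 + ... + x_n <= 1.

    Clauses are lists of integer literals (negative = negated variable).
    Each clause forbids one pair of variables from both being true,
    giving n*(n-1)/2 clauses -- quadratic size, which is exactly the
    kind of trade-off (size vs. propagation strength) at stake when
    choosing an encoding.
    """
    return [[-a, -b] for a, b in combinations(variables, 2)]

clauses = at_most_one([1, 2, 3])
# -> [[-1, -2], [-1, -3], [-2, -3]]
```

More sophisticated encodings trade such clause counts against propagation strength; the paper's lower bound shows that for some PB-constraints no OBDD-derived CNF encoding can avoid exponential size.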
arXiv Detail & Related papers (2021-01-06T10:25:22Z) - Substructure Substitution: Structured Data Augmentation for NLP [55.69800855705232]
SUB2 generates new examples by substituting substructures with others that have the same label.
For more general tasks, we present variations of SUB2 based on constituency parse trees.
For most cases, training with the augmented dataset by SUB2 achieves better performance than training with the original training set.
arXiv Detail & Related papers (2021-01-02T09:54:24Z) - Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique: differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z) - Variable Shift SDD: A More Succinct Sentential Decision Diagram [10.91026094237478]
Sentential Decision Diagram (SDD) is a tractable representation of Boolean functions.
We propose a more succinct variant of SDD named Variable Shift SDD (VS-SDD).
We show that VS-SDDs are never larger than SDDs and there are cases in which the size of a VS-SDD is exponentially smaller than that of an SDD.
arXiv Detail & Related papers (2020-04-06T09:18:19Z) - Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.