Variable Shift SDD: A More Succinct Sentential Decision Diagram
- URL: http://arxiv.org/abs/2004.02502v1
- Date: Mon, 6 Apr 2020 09:18:19 GMT
- Title: Variable Shift SDD: A More Succinct Sentential Decision Diagram
- Authors: Kengo Nakamura, Shuhei Denzumi, Masaaki Nishino
- Abstract summary: The Sentential Decision Diagram (SDD) is a tractable representation of Boolean functions.
We propose a more succinct variant of SDD named Variable Shift SDD (VS-SDD).
We show that VS-SDDs are never larger than SDDs, and there are cases in which the size of a VS-SDD is exponentially smaller than that of an SDD.
- Score: 10.91026094237478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Sentential Decision Diagram (SDD) is a tractable representation of
Boolean functions that subsumes the famous Ordered Binary Decision Diagram
(OBDD) as a strict subset. SDDs are attracting much attention because they are
more succinct than OBDDs, as well as having canonical forms and supporting many
useful queries and transformations, such as model counting and the Apply operation.
In this paper, we propose a more succinct variant of SDD named Variable Shift
SDD (VS-SDD). The key idea is to create a unique representation for Boolean
functions that are equivalent under a specific variable substitution. We show
that VS-SDDs are never larger than SDDs and there are cases in which the size
of a VS-SDD is exponentially smaller than that of an SDD. Moreover, despite
such succinctness, we show that numerous basic operations that are supported in
polytime with SDD are also supported in polytime with VS-SDD. Experiments
confirm that VS-SDDs are significantly more succinct than SDDs when applied to
classical planning instances, where inherent symmetry exists.
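To make the key idea concrete, here is a minimal sketch, under a crude truth-table encoding, of the sharing principle behind VS-SDDs: functions that coincide up to a uniform shift of variable indices are stored only once, and the shift is kept on the reference. This illustrates the principle only, not the authors' data structure; `intern`, `support`, and the encoding are hypothetical names for this sketch.

```python
# Toy illustration of the VS-SDD sharing principle: functions that are equal
# after uniformly shifting their variable indices share one stored node, and
# the shift is remembered on the reference instead. The truth-table encoding
# is purely illustrative and far cruder than an actual SDD.

_unique = {}  # shift-normalized function -> shared node id

def intern(support, table):
    """Return (node_id, shift); functions equal up to a shift share node_id.

    support: sorted tuple of variable indices the function depends on
    table:   truth table over those variables, rows in binary order
    """
    shift = support[0]
    key = (tuple(v - shift for v in support), table)  # shift smallest index to 0
    node_id = _unique.setdefault(key, len(_unique))
    return node_id, shift

# f(x1, x2) = x1 AND x2 and g(x5, x6) = x5 AND x6 are the same function up to
# shifting indices by 4, so they share one node and differ only in the shift.
AND = (False, False, False, True)  # rows 00, 01, 10, 11
print(intern((1, 2), AND))         # (0, 1)
print(intern((5, 6), AND))         # (0, 5): same node id, different shift
print(len(_unique))                # 1 stored node instead of 2
```

Roughly speaking, VS-SDDs apply this kind of sharing to decision-diagram nodes rather than truth tables, which is how they can be no larger than SDDs and sometimes exponentially smaller.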
Related papers
- The Last Mile to Supervised Performance: Semi-Supervised Domain Adaptation for Semantic Segmentation [51.77968964691317]
We study the promising setting of Semi-Supervised Domain Adaptation (SSDA).
We propose a simple SSDA framework that combines consistency regularization, pixel contrastive learning, and self-training to effectively utilize a few target-domain labels.
Our method outperforms prior art on the popular GTA-to-Cityscapes benchmark and shows that as few as 50 target labels can suffice to achieve near-supervised performance.
arXiv Detail & Related papers (2024-11-27T20:07:42Z)
- Add-SD: Rational Generation without Manual Reference [83.01349699374524]
We introduce an instruction-based object addition pipeline, named Add-SD, which automatically inserts objects into realistic scenes with rational sizes and positions.
Our work contributes in three aspects: proposing a dataset containing numerous instructed image pairs; fine-tuning a diffusion model for rational generation; and generating synthetic data to boost downstream tasks.
arXiv Detail & Related papers (2024-07-30T17:58:13Z)
- Canonical Decision Diagrams Modulo Theories [0.19285000127136376]
Decision diagrams are powerful tools to represent propositional formulas.
Some forms of DDs (e.g., OBDDs, SDDs) are canonical, that is, (under given conditions on the atom list) they univocally represent equivalence classes of formulas.
We present a novel technique to leverage DDs to the SMT level, which has several advantages.
arXiv Detail & Related papers (2024-04-25T09:34:49Z)
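As an aside, the canonicity property described in the entry above (each equivalence class of formulas having one unique diagram) can be demonstrated with a generic toy reduced OBDD. This is a hedged sketch of standard hash-consing, not the technique of that paper; all names are illustrative.

```python
# Toy reduced OBDD (ROBDD) with a unique table (hash-consing). Because the
# construction is canonical for a fixed variable order, two syntactically
# different but logically equivalent formulas end up as the very same object.

TRUE, FALSE = ("T",), ("F",)   # terminal nodes
_table = {}                    # unique table of interned (var, low, high) nodes

def mk(var, low, high):
    """Unique-node constructor applying the standard reduction rule."""
    if low == high:                       # redundant test: both branches agree
        return low
    node = (var, low, high)
    return _table.setdefault(node, node)  # return the interned copy

def build(formula, order, env=None, i=0):
    """Shannon-expand a Python predicate over dicts into a reduced OBDD."""
    env = env or {}
    if i == len(order):
        return TRUE if formula(env) else FALSE
    var = order[i]
    low = build(formula, order, {**env, var: False}, i + 1)
    high = build(formula, order, {**env, var: True}, i + 1)
    return mk(var, low, high)

order = ["a", "b"]
f = build(lambda e: e["a"] and e["b"], order)               # a AND b
g = build(lambda e: not (not e["a"] or not e["b"]), order)  # NOT(NOT a OR NOT b)
print(f is g)   # True: one canonical diagram per equivalence class
```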
- VSCode: General Visual Salient and Camouflaged Object Detection with 2D Prompt Learning [104.74705190239119]
We introduce VSCode, a model with novel 2D prompt learning to jointly address four SOD tasks and three COD tasks.
We utilize VST as the foundation model and introduce 2D prompts within the encoder-decoder architecture to learn domain and task-specific knowledge.
VSCode outperforms state-of-the-art methods across six tasks on 26 datasets.
arXiv Detail & Related papers (2023-11-25T12:34:02Z)
- Variants of Tagged Sentential Decision Diagrams [6.891570850234007]
A recently proposed canonical form of Boolean functions, namely tagged sentential decision diagrams (TSDDs), exploits both the standard and zero-suppressed trimming rules.
In this paper, we present a variant of TSDDs, called standard TSDDs (STSDDs), obtained by reversing the order in which the trimming rules are applied.
We then prove the canonicity of STSDDs and present the algorithms for binary operations on TSDDs.
arXiv Detail & Related papers (2023-11-16T12:29:25Z)
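For context on the entry above, here is a minimal sketch of the two trimming rules it refers to, in their generic BDD/ZDD form; this is illustrative only and is not the paper's TSDD data structure.

```python
# The two classic trimming (reduction) rules that TSDD-style diagrams combine.
# Nodes are (var, low, high) triples, where low is the var=0 branch and high
# is the var=1 branch; "0"/"1" stand for the false/true terminals.

FALSE, TRUE = "0", "1"

def trim_standard(var, low, high):
    """BDD-style rule: a test whose two branches agree is redundant."""
    return low if low == high else (var, low, high)

def trim_zero_suppressed(var, low, high):
    """ZDD-style rule: drop a node whose var=1 branch is the false terminal."""
    return low if high == FALSE else (var, low, high)

print(trim_standard("x", TRUE, TRUE))          # '1': node removed
print(trim_zero_suppressed("x", TRUE, FALSE))  # '1': node removed
print(trim_zero_suppressed("x", TRUE, TRUE))   # ('x', '1', '1'): node kept
```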
- Exploiting Asymmetry in Logic Puzzles: Using ZDDs for Symbolic Model Checking Dynamic Epistemic Logic [0.0]
We use ZDDs to symbolically encode Kripke models used in Dynamic Epistemic Logic.
We show that replacing BDDs with the right variant of ZDDs can significantly reduce memory usage.
This suggests that ZDDs are a useful tool for model checking multi-agent systems.
arXiv Detail & Related papers (2023-07-11T07:13:09Z)
- Weighted Context-Free-Language Ordered Binary Decision Diagrams [9.483290784772011]
This paper presents a new data structure, called Weighted Context-Free-Language Ordered BDDs (WCFLOBDDs).
For some functions, WCFLOBDDs are exponentially more succinct than WBDDs.
We apply WCFLOBDDs in quantum-circuit simulation, and find that they perform better than WBDDs on certain benchmarks.
arXiv Detail & Related papers (2023-05-23T02:16:52Z)
- Belief Revision in Sentential Decision Diagrams [126.94029917018733]
We develop a general revision algorithm for SDDs based on a syntactic characterisation of Dalal revision.
Preliminary experiments performed with randomly generated knowledge bases show the advantages of directly performing revision within the SDD formalism.
arXiv Detail & Related papers (2022-01-20T11:01:41Z)
- Meta-Learning with Variational Semantic Memory for Word Sense Disambiguation [56.830395467247016]
We propose a model of semantic memory for WSD in a meta-learning setting.
Our model is based on hierarchical variational inference and incorporates an adaptive memory update rule via a hypernetwork.
We show that our model advances the state of the art in few-shot WSD and supports effective learning in extremely data-scarce scenarios.
arXiv Detail & Related papers (2021-06-05T20:40:01Z)
- Simple and Effective Prevention of Mode Collapse in Deep One-Class Classification [93.2334223970488]
We propose two regularizers to prevent hypersphere collapse in deep SVDD.
The first regularizer is based on injecting random noise via the standard cross-entropy loss.
The second regularizer penalizes the minibatch variance when it becomes too small.
arXiv Detail & Related papers (2020-01-24T03:44:47Z)
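The second regularizer in the entry above can be sketched compactly. The exact form below (a hinge on the average per-dimension minibatch variance) is an assumption for illustration, not the paper's precise loss; `variance_penalty` and `min_var` are hypothetical names.

```python
# Sketch of a minibatch-variance regularizer for deep one-class models: it adds
# a penalty only when the variance of the learned representations becomes too
# small, discouraging the trivial "collapsed" solution where every input is
# mapped to the same point. The hinge-at-a-threshold form is an assumption.

def variance_penalty(batch_outputs, min_var=1e-2):
    """batch_outputs: embeddings for one minibatch, as a list of float vectors."""
    n, dim = len(batch_outputs), len(batch_outputs[0])
    penalty = 0.0
    for d in range(dim):
        col = [vec[d] for vec in batch_outputs]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        penalty += max(0.0, min_var - var)   # active only below the threshold
    return penalty / dim

# A collapsed batch is penalized; a spread-out batch is not.
print(variance_penalty([[0.5, 0.5], [0.5, 0.5]]))   # 0.01 (collapse detected)
print(variance_penalty([[0.0, 1.0], [1.0, 0.0]]))   # 0.0
```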
- From Open Set to Closed Set: Supervised Spatial Divide-and-Conquer for Object Counting [84.23313278891568]
We introduce the idea of the Spatial Divide-and-Conquer Network (SS-DCNet), which transforms open-set counting into a closed-set problem.
SS-DCNet can only learn from a closed set but generalizes well to open-set scenarios via S-DC.
We provide theoretical analyses as well as a controlled experiment on toy data, demonstrating why closed-set modeling makes sense.
arXiv Detail & Related papers (2020-01-07T04:36:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.