Ordered Functional Decision Diagrams: A Functional Semantics For Binary
Decision Diagrams
- URL: http://arxiv.org/abs/2003.09340v4
- Date: Tue, 21 Jul 2020 22:09:07 GMT
- Title: Ordered Functional Decision Diagrams: A Functional Semantics For Binary
Decision Diagrams
- Authors: Joan Thibault and Khalil Ghorbal
- Abstract summary: We introduce a novel framework, termed $\lambda$DD, that revisits Binary Decision Diagrams from a purely functional point of view.
We enumerate, in a principled way, all the models of this class and isolate its most expressive model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel framework, termed $\lambda$DD, that revisits Binary
Decision Diagrams from a purely functional point of view. The framework makes it
possible to classify the already existing variants, including the most recent ones
like Chain-DD and ESRBDD, as implementations of a special class of ordered models.
We enumerate, in a principled way, all the models of this class and isolate its
most expressive model. This new model, termed $\lambda$DD-O-NUCX, is suitable
for both dense and sparse Boolean functions, and is moreover invariant under
negation. The canonicity of $\lambda$DD-O-NUCX is formally verified using the
Coq proof assistant. We furthermore give bounds on the sizes of the different
diagrams: the potential gain achieved by more expressive models is at most
linear in the number of variables $n$.
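
For context, the ordered models that $\lambda$DD classifies generalize the classical reduced ordered BDD (ROBDD). Below is a minimal Python sketch, written for this summary rather than taken from the paper, of an ROBDD built by Shannon expansion with hash-consing. It illustrates how the two classical reduction rules (skip redundant tests, share equal subgraphs) yield a canonical representation, the property the paper verifies in Coq for the richer $\lambda$DD-O-NUCX model.

```python
# Minimal ROBDD sketch (illustrative; not the paper's lambda-DD formalization).
# Canonicity comes from two reduction rules enforced at construction time:
#   1. never build a node whose two children are equal (redundant test);
#   2. hash-cons nodes so structurally equal subgraphs are shared.

class BDD:
    def __init__(self):
        self.unique = {}   # (var, lo, hi) -> node id, the hash-consing table
        self.next_id = 2   # ids 0 and 1 are reserved for the two terminals

    def mk(self, var, lo, hi):
        """Return the unique node testing variable `var` with children lo/hi."""
        if lo == hi:                   # rule 1: the test is redundant
            return lo
        key = (var, lo, hi)
        if key not in self.unique:     # rule 2: maximal sharing
            self.unique[key] = self.next_id
            self.next_id += 1
        return self.unique[key]

    def build(self, f, n, prefix=()):
        """Shannon-expand a Boolean function f of n arguments along the fixed
        variable order 0..n-1; hash-consing canonicalizes the result."""
        if len(prefix) == n:
            return 1 if f(*prefix) else 0
        lo = self.build(f, n, prefix + (False,))
        hi = self.build(f, n, prefix + (True,))
        return self.mk(len(prefix), lo, hi)

bdd = BDD()
r1 = bdd.build(lambda x, y, z: (x and y) or z, 3)
r2 = bdd.build(lambda x, y, z: not ((not x or not y) and not z), 3)
assert r1 == r2   # extensionally equal functions share one canonical root
```

Enumerating all $2^n$ assignments is deliberate here for brevity; practical packages instead combine sub-diagrams with memoized apply operations.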
Related papers
- COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement [80.18490952057125]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks, but performing it through sequential autoregressive generation is costly at inference time.
We propose Context-Wise Order-Agnostic Language Modeling (COrAL) to overcome these challenges.
Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally.
arXiv Detail & Related papers (2024-10-12T23:56:19Z)
- Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models [63.36637269634553]
We present a novel method of further improving performance by requiring models to compare multiple reasoning chains.
We find that instruction tuning on DCoT datasets boosts the performance of even smaller, and therefore more accessible, language models.
arXiv Detail & Related papers (2024-07-03T15:01:18Z)
- AlloyASG: Alloy Predicate Code Representation as a Compact Structurally Balanced Graph [0.6445605125467574]
We introduce a novel code representation schema, the Complex Structurally Balanced Abstract Semantic Graph (CSBASG).
CSBASG represents code as a complex-weighted directed graph in which each semantic element is a node.
We evaluate the efficiency of our CSBASG representation for Alloy models in terms of its compactness compared to ASTs.
arXiv Detail & Related papers (2024-02-29T22:41:09Z)
- Discrete Graph Auto-Encoder [52.50288418639075]
We introduce a new framework named the Discrete Graph Auto-Encoder (DGAE).
We first use a permutation-equivariant auto-encoder to convert graphs into sets of discrete latent node representations.
In the second step, we sort the sets of discrete latent representations and learn their distribution with a specifically designed auto-regressive model.
arXiv Detail & Related papers (2023-06-13T12:40:39Z)
- DORE: Document Ordered Relation Extraction based on Generative Framework [56.537386636819626]
This paper investigates the root cause of the underwhelming performance of the existing generative DocRE models.
We propose generating a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn.
Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models.
arXiv Detail & Related papers (2022-10-28T11:18:10Z)
- Model Selection in Reinforcement Learning with General Function Approximations [10.97775622611135]
We consider model selection for Reinforcement Learning environments -- Multi-Armed Bandits (MABs) and Markov Decision Processes (MDPs).
In the model selection framework, we do not know the function classes, denoted by $\mathcal{F}$ and $\mathcal{M}$, in which the true models lie.
We show that the cumulative regret of our adaptive algorithms matches that of an oracle that knows the correct function classes.
arXiv Detail & Related papers (2022-07-06T21:52:07Z)
- Optimizing Binary Decision Diagrams with MaxSAT for classification [3.2894524838755608]
A growing interest in explainable artificial intelligence motivates the need for interpretable machine learning (ML) models.
Recently, several exact methods for computing such models have been proposed to overcome weaknesses of traditional methods.
In this paper, we first propose SAT-based models for learning optimal Binary Decision Diagrams (BDDs).
Then, we lift the encoding to a MaxSAT model to learn optimal BDDs of limited depth.
Finally, we tackle the fragmentation problem by introducing a method to merge compatible subtrees for the BDDs found via the MaxSAT model; a sketch of how such a BDD classifies an input appears after this list.
arXiv Detail & Related papers (2022-03-21T23:17:37Z)
- Exploring Dynamic Selection of Branch Expansion Orders for Code Generation [43.027059412639]
We equip the Seq2Tree model with a context-based Branch Selector.
The selector is able to dynamically determine optimal expansion orders of branches for multi-branch nodes.
Experimental results and in-depth analysis on several commonly-used datasets demonstrate the effectiveness and generality of our approach.
arXiv Detail & Related papers (2021-06-01T06:52:41Z)
- Improving Robustness and Generality of NLP Models Using Disentangled Representations [62.08794500431367]
Supervised neural networks first map an input $x$ to a single representation $z$, and then map $z$ to the output label $y$.
We present methods to improve robustness and generality of NLP models from the standpoint of disentangled representation learning.
We show that models trained with the proposed criteria provide better robustness and domain adaptation ability in a wide range of supervised learning tasks.
arXiv Detail & Related papers (2020-09-21T02:48:46Z)
- Particle-Gibbs Sampling For Bayesian Feature Allocation Models [77.57285768500225]
Most widely used MCMC strategies rely on an element-wise Gibbs update of the feature allocation matrix.
We have developed a Gibbs sampler that can update an entire row of the feature allocation matrix in a single move.
However, this sampler is impractical for models with a large number of features, as its computational complexity scales exponentially in the number of features.
We therefore develop a Particle Gibbs sampler that targets the same distribution as the row-wise Gibbs updates, but whose computational complexity grows only linearly in the number of features.
arXiv Detail & Related papers (2020-01-25T22:11:51Z)
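
As referenced in the MaxSAT entry above, here is a minimal, self-contained Python sketch of how a learned BDD acts as an interpretable classifier: an input follows a single root-to-terminal path, testing one feature per node. The node encoding and the example diagram are hypothetical illustrations; the cited paper's SAT/MaxSAT learning encodings are not reproduced here.

```python
# Hedged sketch: classification with a Binary Decision Diagram (BDD).
# Internal nodes test one Boolean feature; integer leaves are class labels.
# The diagram below is a hypothetical example, not one learned by the
# MaxSAT method of the cited paper.

def classify(node, x):
    """Follow the unique root-to-leaf path selected by input x."""
    while not isinstance(node, int):   # descend until an integer leaf
        feature, low, high = node
        node = high if x[feature] else low
    return node

# A depth-2 diagram over 3 Boolean features:
# test x0 first; then test x1 (if x0 is false) or x2 (if x0 is true).
bdd = (0,
       (1, 0, 1),   # x0 = 0: the prediction is x1
       (2, 1, 0))   # x0 = 1: the prediction is "not x2"

print(classify(bdd, [True, False, False]))   # -> 1
print(classify(bdd, [False, True, True]))    # -> 1
print(classify(bdd, [False, False, True]))   # -> 0
```

Each prediction is justified by at most depth-many feature tests along its path, which is why bounding the depth, as the MaxSAT encoding does, directly bounds the length of every explanation.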
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.