Learning Reliable Logical Rules with SATNet
- URL: http://arxiv.org/abs/2310.02133v1
- Date: Tue, 3 Oct 2023 15:14:28 GMT
- Title: Learning Reliable Logical Rules with SATNet
- Authors: Zhaoyu Li, Jinpei Guo, Yuhe Jiang, Xujie Si
- Abstract summary: We build on SATNet, a differentiable MaxSAT solver that learns the underlying rules from input-output examples.
We introduce several effective verification techniques to validate the rules decoded from SATNet against the ground truth rules.
Experiments on stream transformations and Sudoku problems show that our decoded rules are highly reliable.
- Score: 7.951021955925275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bridging logical reasoning and deep learning is crucial for advanced AI
systems. In this work, we present a new framework that addresses this goal by
generating interpretable and verifiable logical rules through differentiable
learning, without relying on pre-specified logical structures. Our approach
builds upon SATNet, a differentiable MaxSAT solver that learns the underlying
rules from input-output examples. Despite its efficacy, the learned weights in
SATNet are not straightforwardly interpretable, failing to produce
human-readable rules. To address this, we propose a novel specification method
called "maximum equality", which enables the interchangeability between the
learned weights of SATNet and a set of propositional logical rules in weighted
MaxSAT form. With the decoded weighted MaxSAT formula, we further introduce
several effective verification techniques to validate it against the ground
truth rules. Experiments on stream transformations and Sudoku problems show
that our decoded rules are highly reliable: using exact solvers on them could
achieve 100% accuracy, whereas the original SATNet fails to give correct
solutions in many cases. Furthermore, we formally verify that our decoded
logical rules are functionally equivalent to the ground truth ones.
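To make the "exact solvers on decoded rules" claim concrete, here is a minimal sketch (not the paper's code) of solving a small weighted MaxSAT formula with an off-the-shelf exact solver, PySAT's RC2; the clauses and weights below are invented placeholders standing in for decoded rules.

```python
# Minimal sketch: exact solving of a weighted MaxSAT formula with PySAT's RC2.
# The clauses and weights are illustrative placeholders, not the paper's decoded rules.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

wcnf = WCNF()
# Hard constraints, e.g. observed input bits clamped to their values.
wcnf.append([1])                  # x1 = True
wcnf.append([-2])                 # x2 = False
# Soft clauses playing the role of decoded rules, each with a learned weight.
wcnf.append([-1, 3], weight=5)    # x1 -> x3
wcnf.append([2, -3], weight=2)    # not x2 -> not x3 (weaker rule)

with RC2(wcnf) as solver:
    model = solver.compute()      # optimal assignment maximizing satisfied soft weight
    print("assignment:", model, "violated weight:", solver.cost)
```

The functional-equivalence verification mentioned above can likewise be phrased as a satisfiability query: the decoded formula and the ground-truth rules agree on every assignment iff a SAT solver finds the assertion "the two formulas disagree somewhere" unsatisfiable.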
Related papers
- Self-Satisfied: An end-to-end framework for SAT generation and prediction [0.7340017786387768]
We introduce hardware accelerated algorithms for fast SAT problem generation and a geometric SAT encoding.
These advances allow us to scale our approach to SAT problems with thousands of variables and tens of thousands of clauses.
A fundamental aspect of our work concerns the very nature of SAT data and its suitability for training machine learning models.
arXiv Detail & Related papers (2024-10-18T22:25:54Z)
- Assessing SATNet's Ability to Solve the Symbol Grounding Problem [19.52506478494672]
SATNet is an award-winning MaxSAT solver that can be used to infer logical rules and can be integrated as a differentiable layer in a deep neural network.
We show that in the absence of intermediate labels that identify individual Sudoku digit images with their logical representations, SATNet completely fails at visual Sudoku.
We propose an MNIST-based test as an easy instance of the symbol grounding problem that can serve as a sanity check for differentiable symbolic solvers in general.
arXiv Detail & Related papers (2023-12-13T04:29:32Z)
- SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SATLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
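As a rough illustration of the declarative paradigm described here, the following hypothetical sketch hand-writes a tiny declarative specification in Z3's Python API (standing in for an LLM-generated specification) and lets the solver derive the answer; the toy puzzle and variable names are invented for the example.

```python
# Hypothetical declarative specification solved by an off-the-shelf solver (Z3),
# standing in for the LLM-generated specification described in SatLM.
from z3 import Ints, Solver, sat

alice, bob = Ints("alice bob")
s = Solver()
s.add(alice + bob == 10)   # "Alice and Bob have 10 apples together"
s.add(alice == bob + 4)    # "Alice has 4 more apples than Bob"

if s.check() == sat:
    m = s.model()
    print("alice =", m[alice], "bob =", m[bob])   # alice = 7, bob = 3
```

The point of the paradigm is that the model only states the constraints; the search for a consistent answer is delegated to the solver rather than carried out by an imperative program.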
arXiv Detail & Related papers (2023-05-16T17:55:51Z)
- Estimating the hardness of SAT encodings for Logical Equivalence Checking of Boolean circuits [58.83758257568434]
We show that the hardness of SAT encodings for LEC instances can be estimated w.r.t. some SAT partitioning.
The paper proposes several methods for constructing partitionings, which, when used in practice, allow one to estimate the hardness of SAT encodings for LEC with good accuracy.
arXiv Detail & Related papers (2022-10-04T09:19:13Z)
- SATformer: Transformer-Based UNSAT Core Learning [0.0]
This paper introduces SATformer, a Transformer-based approach for the Boolean Satisfiability (SAT) problem.
Rather than solving the problem directly, SATformer approaches the problem from the opposite direction by focusing on unsatisfiability.
SATformer is trained through a multi-task learning approach, using the single-bit satisfiability result and the minimal unsatisfiable core.
Experimental results show that our SATformer can decrease the runtime of existing solvers by an average of 21.33%.
arXiv Detail & Related papers (2022-09-02T11:17:39Z)
- Learning Symmetric Rules with SATNet [9.238700679836855]
We present SymSATNet, a variant of SATNet that translates the given symmetries of the target rules to a condition on the parameters of SATNet.
We also describe a technique for automatically discovering symmetries of the target rules from examples.
Our experiments with Sudoku and Rubik's cube show the substantial improvement of SymSATNet over the baseline SATNet.
arXiv Detail & Related papers (2022-06-28T13:36:34Z)
- DeepSAT: An EDA-Driven Learning Framework for SAT [9.111341161918375]
We present DeepSAT, a novel end-to-end learning framework for the Boolean satisfiability (SAT) problem.
DeepSAT achieves significant accuracy improvements over state-of-the-art learning-based SAT solutions.
arXiv Detail & Related papers (2022-05-27T03:20:42Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNNs).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)