Techniques for Symbol Grounding with SATNet
- URL: http://arxiv.org/abs/2106.11072v1
- Date: Wed, 16 Jun 2021 18:42:12 GMT
- Title: Techniques for Symbol Grounding with SATNet
- Authors: Sever Topan, David Rolnick, Xujie Si
- Abstract summary: We present a self-supervised pre-training pipeline that enables SATNet to overcome the Symbol Grounding Problem.
We demonstrate that our method allows SATNet to attain full accuracy even with a harder problem setup.
We additionally introduce a proofreading method that further improves the performance of SATNet architectures.
- Score: 19.70820500506061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many experts argue that the future of artificial intelligence is limited by
the field's ability to integrate symbolic logical reasoning into deep learning
architectures. The recently proposed differentiable MAXSAT solver, SATNet, was
a breakthrough in its capacity to integrate with a traditional neural network
and solve visual reasoning problems. For instance, it can learn the rules of
Sudoku purely from image examples. Despite its success, SATNet was shown to
succumb to a key challenge in neurosymbolic systems known as the Symbol
Grounding Problem: the inability to map visual inputs to symbolic variables
without explicit supervision ("label leakage"). In this work, we present a
self-supervised pre-training pipeline that enables SATNet to overcome this
limitation, thus broadening the class of problems that SATNet architectures can
solve to include datasets where no intermediary labels are available at all. We
demonstrate that our method allows SATNet to attain full accuracy even with a
harder problem setup that prevents any label leakage. We additionally introduce
a proofreading method that further improves the performance of SATNet
architectures, beating the state-of-the-art on Visual Sudoku.
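The abstract's core idea — grounding visual symbols without intermediate labels — can be illustrated with a toy sketch: derive pseudo-symbol ids by clustering unlabelled feature vectors, so a downstream logical solver consumes cluster ids instead of human digit labels. This is a hedged illustration of the general self-supervised grounding idea, not the paper's actual pipeline; the k-means routine, feature blobs, and initial centers are all invented for the example.

```python
import numpy as np

def kmeans_pseudo_labels(feats, init_centers, iters=20):
    """Assign each feature vector a pseudo-symbol id via plain k-means.

    A stand-in for self-supervised symbol grounding: the cluster ids can
    feed a downstream logical solver in place of supervised digit labels.
    """
    centers = np.array(init_centers, dtype=float)
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        # Distance from every feature to every center, then nearest-center assignment.
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels

# Toy "digit embeddings": three well-separated Gaussian blobs of 20 points each.
rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(c, 0.1, size=(20, 2)) for c in (0.0, 5.0, 10.0)])
labels = kmeans_pseudo_labels(feats, init_centers=feats[[0, 20, 40]])
# Each blob maps to a single (arbitrary) pseudo-symbol.
assert all(len(set(labels[i * 20:(i + 1) * 20])) == 1 for i in range(3))
```

The pseudo-symbols are only consistent, not named: mapping cluster id 2 to "the digit 7" is exactly the part a solver-side objective must supply.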
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
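For context, the classic (single-marginal) Sinkhorn algorithm that LinSATNet extends alternately rescales rows and columns of a positive matrix until it is approximately doubly stochastic. A minimal sketch of that base iteration — not LinSATNet's multi-marginal extension:

```python
import numpy as np

def sinkhorn(M, iters=200):
    """Classic Sinkhorn iteration: rescale a positive square matrix until
    its rows and columns each sum to 1 (approximately doubly stochastic)."""
    P = np.asarray(M, dtype=float)
    for _ in range(iters):
        P = P / P.sum(axis=1, keepdims=True)  # row normalisation
        P = P / P.sum(axis=0, keepdims=True)  # column normalisation
    return P

P = sinkhorn(np.array([[1.0, 2.0], [3.0, 4.0]]))
assert np.allclose(P.sum(axis=0), 1.0) and np.allclose(P.sum(axis=1), 1.0)
```

Because every step is elementwise division, the iteration is differentiable, which is what makes it usable as a constraint-enforcing neural network layer.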
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- GraSS: Combining Graph Neural Networks with Expert Knowledge for SAT Solver Selection [45.222591894755105]
We present GraSS, a novel approach for automatic SAT solver selection based on tripartite graph representations of instances.
We enrich the graph representation with domain-specific decisions, such as novel node feature design.
We demonstrate that this combination of raw representations and domain-specific choices leads to improvements in runtime.
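Graph representations of SAT instances typically start from a clause-variable incidence structure. The sketch below builds a simple bipartite variable-clause graph with edge polarities from a DIMACS-style clause list; this is a common baseline encoding, assumed for illustration — GraSS itself uses a richer tripartite representation with domain-specific node features.

```python
def cnf_incidence_graph(clauses):
    """Build a bipartite variable-clause incidence graph from a CNF
    instance given as a list of integer-literal clauses (DIMACS style:
    variable v appears as v, its negation as -v)."""
    var_nodes = sorted({abs(lit) for clause in clauses for lit in clause})
    edges = []  # (variable, clause index, polarity)
    for ci, clause in enumerate(clauses):
        for lit in clause:
            edges.append((abs(lit), ci, 1 if lit > 0 else -1))
    return var_nodes, edges

# (x1 OR NOT x2) AND (x2 OR x3)
vars_, edges = cnf_incidence_graph([[1, -2], [2, 3]])
assert vars_ == [1, 2, 3]
assert (2, 0, -1) in edges and (2, 1, 1) in edges
```

A graph neural network can then pass messages over these edges to produce an instance embedding for solver selection.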
arXiv Detail & Related papers (2024-05-17T18:00:50Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- Assessing SATNet's Ability to Solve the Symbol Grounding Problem [19.52506478494672]
SATNet is an award-winning differentiable MAXSAT solver that can infer logical rules and be integrated as a layer in a deep neural network.
We show that in the absence of intermediate labels that identify individual Sudoku digit images with their logical representations, SATNet completely fails at visual Sudoku.
We propose an MNIST-based test as an easy instance of the symbol grounding problem that can serve as a sanity check for differentiable symbolic solvers in general.
arXiv Detail & Related papers (2023-12-13T04:29:32Z)
- A Neural Network-based SAT-Resilient Obfuscation Towards Enhanced Logic Locking [3.076761061950216]
We propose a neural network-based unSAT clause translator, SATConda.
SATConda incurs a minimal area and power overhead while preserving the original functionality with impenetrable security.
Our proposed SATConda is evaluated on ISCAS85 and ISCAS89 benchmarks.
arXiv Detail & Related papers (2022-09-13T07:59:27Z)
- Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- Goal-Aware Neural SAT Solver [2.609784101826762]
Modern neural networks obtain information about the problem and calculate the output solely from the input values.
We argue that this is not always optimal and that the network's performance can be significantly improved by augmenting it with a query mechanism.
We propose a neural SAT solver with a query mechanism called QuerySAT and show that it outperforms the neural baseline on a wide range of SAT tasks.
arXiv Detail & Related papers (2021-06-14T04:51:24Z)
- Synbols: Probing Learning Algorithms with Synthetic Datasets [112.45883250213272]
Synbols is a tool for rapidly generating new datasets with a rich composition of latent features rendered in low resolution images.
Our tool's high-level interface provides a language for rapidly generating new distributions on the latent features.
To showcase the versatility of Synbols, we use it to dissect the limitations and flaws in standard learning algorithms in various learning setups.
arXiv Detail & Related papers (2020-09-14T13:03:27Z)
- Enhancing SAT solvers with glue variable predictions [2.635832975589208]
We modify the state-of-the-art SAT solver CaDiCaL, improving its performance on SATCOMP 2018 and SATRACE 2019 with supervised learning and its performance on a dataset of SHA-1 preimage attacks with reinforcement learning.
We address the limitations of prior neural guidance, using a simpler network architecture that allows CPU inference even for large industrial problems with millions of clauses, and training instead to predict *glue variables*: a target for which it is easier to generate labelled data, and which can also be formulated as a reinforcement learning task.
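"Glue" here refers to clauses with low Literal Block Distance (LBD): the number of distinct decision levels among a clause's variables, a statistic CDCL solvers use to rank learned clauses. A minimal sketch of the LBD computation, assuming a simple variable-to-decision-level map (invented for the example):

```python
def lbd(clause_vars, decision_level):
    """Literal Block Distance of a clause: the number of distinct
    decision levels among its variables' assignments. Low-LBD
    ("glue") clauses are the learned clauses CDCL solvers keep."""
    return len({decision_level[v] for v in clause_vars})

# Hypothetical assignment trail: variable -> decision level.
levels = {1: 0, 2: 3, 3: 3, 4: 5}
assert lbd([2, 3], levels) == 1      # both at level 3: a glue clause
assert lbd([1, 2, 4], levels) == 3   # spans three levels
```

Predicting which variables tend to appear in such low-LBD clauses is the learning target the paper proposes.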
arXiv Detail & Related papers (2020-07-06T07:16:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.