SLASH: Embracing Probabilistic Circuits into Neural Answer Set
Programming
- URL: http://arxiv.org/abs/2110.03395v1
- Date: Thu, 7 Oct 2021 12:35:55 GMT
- Title: SLASH: Embracing Probabilistic Circuits into Neural Answer Set
Programming
- Authors: Arseny Skryagin, Wolfgang Stammer, Daniel Ochs, Devendra Singh Dhami,
Kristian Kersting
- Abstract summary: We introduce SLASH -- a novel deep probabilistic programming language (DPPL).
At its core, SLASH consists of Neural-Probabilistic Predicates (NPPs) and logical programs, which are united via answer set programming.
We evaluate SLASH on the benchmark data of MNIST addition as well as novel tasks for DPPLs such as missing data prediction and set prediction with state-of-the-art performance.
- Score: 15.814914345000574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of combining the robustness of neural networks and the expressivity
of symbolic methods has rekindled the interest in neuro-symbolic AI. Recent
advancements in neuro-symbolic AI often consider specifically-tailored
architectures consisting of disjoint neural and symbolic components, and thus
do not exhibit desired gains that can be achieved by integrating them into a
unifying framework. We introduce SLASH -- a novel deep probabilistic
programming language (DPPL). At its core, SLASH consists of
Neural-Probabilistic Predicates (NPPs) and logical programs which are united
via answer set programming. The probability estimates resulting from NPPs act
as the binding element between the logical program and raw input data, thereby
allowing SLASH to answer task-dependent logical queries. This allows SLASH to
elegantly integrate the symbolic and neural components in a unified framework.
We evaluate SLASH on the benchmark data of MNIST addition as well as novel
tasks for DPPLs such as missing data prediction and set prediction with
state-of-the-art performance, thereby showing the effectiveness and generality
of our method.
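The abstract describes NPP probability estimates as the binding element between the logical program and raw input data. As a minimal illustration of how such estimates answer a logical query in the MNIST-addition benchmark (a generic weighted enumeration sketch, not SLASH's actual ASP-based inference; all names below are hypothetical):

```python
import itertools

def mnist_addition_query(p1, p2, target_sum):
    """P(digit1 + digit2 == target_sum), given the two digit
    classifiers' softmax outputs p1 and p2 (length-10 distributions).
    Sums the joint probability of every digit pair satisfying the
    logical constraint a + b == target_sum."""
    return sum(p1[a] * p2[b]
               for a, b in itertools.product(range(10), repeat=2)
               if a + b == target_sum)

# Hypothetical classifier outputs: fairly confident "3" and "5"
p1 = [0.01] * 10; p1[3] = 0.91
p2 = [0.01] * 10; p2[5] = 0.91
prob = mnist_addition_query(p1, p2, 8)
```

The enumeration over all digit pairs is what makes exact inference expensive as the number of neural predicates grows, which motivates the pruning and approximation techniques in the related papers below.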
Related papers
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z) - Scalable Neural-Probabilistic Answer Set Programming [18.136093815001423]
We introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logic program, united via answer set programming (ASP)
We show how to prune the insignificant parts of the (ground) program, speeding up reasoning without sacrificing the predictive performance.
We evaluate SLASH on a variety of different tasks, including the benchmark task of MNIST addition and Visual Question Answering (VQA)
arXiv Detail & Related papers (2023-06-14T09:45:29Z) - Symbolic Synthesis of Neural Networks [0.0]
I present Graph-based Symbolically Synthesized Neural Networks (GSSNNs)
GSSNNs are a form of neural network whose topology and parameters are informed by the output of a symbolic program.
I demonstrate that by developing symbolic abstractions at a population level, I can elicit reliable patterns of improved generalization with small quantities of data known to contain local and discrete features.
arXiv Detail & Related papers (2023-03-06T18:13:14Z) - A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic
Inference [11.393328084369783]
Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference.
We introduce Approximate Neurosymbolic Inference (A-NeSI), a new framework for PNL that uses scalable neural networks for approximate inference.
arXiv Detail & Related papers (2022-12-23T15:24:53Z) - Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS)
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z) - Semantic Probabilistic Layers for Neuro-Symbolic Learning [83.25785999205932]
We design a predictive layer for structured-output prediction (SOP)
It can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic constraints.
Our Semantic Probabilistic Layer (SPL) can model intricate correlations, and hard constraints, over a structured output space.
arXiv Detail & Related papers (2022-06-01T12:02:38Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - VAEL: Bridging Variational Autoencoders and Probabilistic Logic
Programming [3.759936323189418]
We present VAEL, a neuro-symbolic generative model integrating variational autoencoders (VAE) with the reasoning capabilities of probabilistic logic (L) programming.
arXiv Detail & Related papers (2022-02-07T10:16:53Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL)
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics, with neural networks, in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Closed Loop Neural-Symbolic Learning via Integrating Neural Perception,
Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
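Several entries above, notably the Semantic Probabilistic Layer, guarantee that neural predictions remain consistent with symbolic constraints. A minimal sketch of that consistency idea (masking invalid outputs and renormalizing, not the paper's actual circuit-based construction; the example values are hypothetical):

```python
import math

def constrained_softmax(logits, valid):
    """Softmax restricted to outputs a symbolic constraint allows:
    invalid outputs receive probability exactly 0, and the remaining
    mass renormalizes over the valid ones."""
    exps = [math.exp(l) if ok else 0.0 for l, ok in zip(logits, valid)]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical: 4 candidate structures; the constraint rules out #1 and #3
probs = constrained_softmax([2.0, 1.0, 0.5, 3.0], [True, False, True, False])
```

By construction, every prediction drawn from `probs` satisfies the constraint, which is the property these layers provide regardless of how the underlying network is trained.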
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.