A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic
Inference
- URL: http://arxiv.org/abs/2212.12393v3
- Date: Fri, 22 Sep 2023 17:46:24 GMT
- Title: A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic
Inference
- Authors: Emile van Krieken, Thiviyan Thanapalasingam, Jakub M. Tomczak, Frank
van Harmelen, Annette ten Teije
- Abstract summary: Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference.
We introduce Approximate Neurosymbolic Inference (A-NeSI), a new framework for PNL that uses neural networks for scalable approximate inference.
- Score: 11.393328084369783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of combining neural networks with symbolic reasoning.
Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL),
such as DeepProbLog, perform exponential-time exact inference, limiting the
scalability of PNL solutions. We introduce Approximate Neurosymbolic Inference
(A-NeSI): a new framework for PNL that uses neural networks for scalable
approximate inference. A-NeSI 1) performs approximate inference in polynomial
time without changing the semantics of probabilistic logics; 2) is trained
using data generated by the background knowledge; 3) can generate symbolic
explanations of predictions; and 4) can guarantee the satisfaction of logical
constraints at test time, which is vital in safety-critical applications. Our
experiments show that A-NeSI is the first end-to-end method to solve three
neurosymbolic tasks with exponential combinatorial scaling. Finally, our
experiments show that A-NeSI achieves explainability and safety without a
penalty in performance.
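To make the scaling contrast concrete, consider the canonical MNIST-addition benchmark: exact inference for the sum of two N-digit numbers marginalizes over all 10^(2N) digit assignments, which is the exponential blow-up the abstract refers to. The sketch below (in PyTorch) illustrates the single-digit case and, under assumed architecture and training details that are illustrative rather than the paper's actual implementation, the A-NeSI idea of training a neural inference model purely on data generated by the background knowledge (point 2 above).

```python
import itertools
import torch
import torch.nn.functional as F

# Background knowledge for single-digit MNIST addition: the symbolic
# program maps a "world" (two digit labels) to an output (their sum).
def program(d1: int, d2: int) -> int:
    return d1 + d2

# Exact inference (what DeepProbLog-style frameworks compute):
# P(sum = s) = sum_{d1, d2 : d1 + d2 = s} p1[d1] * p2[d2].
# For N-digit numbers the enumeration has 10^(2N) terms.
def exact_output_distribution(p1, p2):
    out = torch.zeros(19)  # possible sums: 0..18
    for d1, d2 in itertools.product(range(10), range(10)):
        out[program(d1, d2)] += p1[d1] * p2[d2]
    return out

# A-NeSI-style approximate inference (a sketch, not the paper's exact
# architecture): a neural inference model maps the perception network's
# digit beliefs to a distribution over outputs. It is trained on pairs
# generated by the background knowledge alone, with no labelled data.
inference_model = torch.nn.Sequential(
    torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 19)
)
optimizer = torch.optim.Adam(inference_model.parameters(), lr=1e-3)

for step in range(2000):
    # Sample synthetic beliefs, sample a world from them, run the program.
    p1 = torch.distributions.Dirichlet(torch.ones(10)).sample()
    p2 = torch.distributions.Dirichlet(torch.ones(10)).sample()
    d1 = torch.multinomial(p1, 1).item()
    d2 = torch.multinomial(p2, 1).item()
    target = torch.tensor([program(d1, d2)])

    logits = inference_model(torch.cat([p1, p2])).unsqueeze(0)
    loss = F.cross_entropy(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A single forward pass through the trained inference model then stands in for the exponential enumeration, which is what makes polynomial-time inference (point 1) possible; at test time, running the symbolic program on the most likely world can additionally verify that logical constraints hold (point 4).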
Related papers
- Towards Probabilistic Inductive Logic Programming with Neurosymbolic Inference and Relaxation [0.0]
We propose Propper, which handles flawed and probabilistic background knowledge.
For relational patterns in noisy images, Propper can learn programs from as few as 8 examples.
It outperforms binary ILP and statistical models such as a Graph Neural Network.
arXiv Detail & Related papers (2024-08-21T06:38:49Z)
- EXPLAIN, AGREE, LEARN: Scaling Learning for Neural Probabilistic Logic [14.618208661185365]
We propose a sampling-based objective to scale learning to more complex systems; a generic sketch of such an estimator appears after this list.
We prove that the objective has a bounded error with respect to the likelihood, an error that vanishes as the sample count increases.
We then develop the EXPLAIN, AGREE, LEARN (EXAL) method that uses this objective.
In contrast to previous NeSy methods, EXAL can scale to larger problem sizes while retaining theoretical guarantees on the error.
arXiv Detail & Related papers (2024-08-15T13:07:51Z)
- On the Hardness of Probabilistic Neurosymbolic Learning [10.180468225166441]
We study the complexity of differentiating probabilistic reasoning in neurosymbolic models.
We introduce WeightME, an unbiased gradient estimator based on model sampling.
Our experiments indicate that the existing biased approximations indeed struggle to optimize even when exact solving is still feasible.
arXiv Detail & Related papers (2024-06-06T19:56:33Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise for ensuring the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this objective efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- SLASH: Embracing Probabilistic Circuits into Neural Answer Set Programming [15.814914345000574]
We introduce SLASH, a novel deep probabilistic programming language (DPPL).
At its core, SLASH consists of Neural-Probabilistic Predicates (NPPs) and logical programs, which are united via answer set programming.
We evaluate SLASH on the MNIST-addition benchmark as well as on novel tasks for DPPLs, such as missing-data prediction and set prediction, achieving state-of-the-art performance.
arXiv Detail & Related papers (2021-10-07T12:35:55Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework, Logical Neural Networks (LNNs), can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
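As a companion to the sampling-based approaches above (EXAL's bounded-error objective and WeightME's unbiased gradient estimator), here is a generic Monte Carlo estimator of a query probability for the same hypothetical MNIST-addition setting; it is a sketch of the general principle, not either paper's actual method. Sampling worlds from the perception beliefs yields an unbiased estimate whose standard error decays as 1/sqrt(num_samples), which is the sense in which the approximation error vanishes as the sample count grows.

```python
import torch

# Unbiased Monte Carlo estimate of P(output = y | beliefs): sample worlds
# (digit pairs) from the beliefs and average the indicator that the
# symbolic program produces y. Hypothetical illustration, not EXAL/WeightME.
def mc_query_prob(p1, p2, y, num_samples=1000):
    d1 = torch.multinomial(p1, num_samples, replacement=True)
    d2 = torch.multinomial(p2, num_samples, replacement=True)
    return ((d1 + d2) == y).float().mean()

p = torch.full((10,), 0.1)  # uniform beliefs over the ten digits
for s in (10, 100, 10_000):
    # Converges to the exact value: 10 of the 100 digit pairs sum to 9,
    # so P(sum = 9) = 0.1 under uniform beliefs.
    print(s, mc_query_prob(p, p, y=9, num_samples=s).item())
```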