DeepStochLog: Neural Stochastic Logic Programming
- URL: http://arxiv.org/abs/2106.12574v1
- Date: Wed, 23 Jun 2021 17:59:04 GMT
- Title: DeepStochLog: Neural Stochastic Logic Programming
- Authors: Thomas Winters, Giuseppe Marra, Robin Manhaeve, Luc De Raedt
- Abstract summary: We show that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs.
DeepStochLog achieves state-of-the-art results on challenging neural symbolic learning tasks.
- Score: 15.938755941588159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in neural symbolic learning, such as DeepProbLog, extend
probabilistic logic programs with neural predicates. Like graphical models,
these probabilistic logic programs define a probability distribution over
possible worlds, for which inference is computationally hard. We propose
DeepStochLog, an alternative neural symbolic framework based on stochastic
definite clause grammars, a type of stochastic logic program, which defines a
probability distribution over possible derivations. More specifically, we
introduce neural grammar rules into stochastic definite clause grammars to
create a framework that can be trained end-to-end. We show that inference and
learning in neural stochastic logic programming scale much better than for
neural probabilistic logic programs. Furthermore, the experimental evaluation
shows that DeepStochLog achieves state-of-the-art results on challenging neural
symbolic learning tasks.
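To make the core idea concrete, here is a minimal Python sketch of how a stochastic definite clause grammar with a neural rule assigns probabilities to derivations, using a DeepStochLog-style MNIST-addition task. The function and rule names are illustrative assumptions, not the DeepStochLog API.

```python
import math
import random

def toy_digit_net(image):
    """Stand-in for a trained digit classifier: returns a softmax-style
    distribution over the terminal symbols 0..9 for one input image."""
    scores = [random.random() for _ in range(10)]
    total = sum(scores)
    return [s / total for s in scores]

def derivation_log_prob(rule_probs):
    """The probability of one derivation is the product of the
    probabilities of the rules it uses (accumulated in log space)."""
    return sum(math.log(p) for p in rule_probs)

# One derivation of addition(img1, img2, 7): digit(img1)=3, digit(img2)=4.
# Each neural rule application contributes the network's probability of
# the chosen class; the single top-level addition rule has probability 1.
img1, img2 = object(), object()   # placeholders for MNIST images
p1 = toy_digit_net(img1)[3]       # P(digit(img1) = 3)
p2 = toy_digit_net(img2)[4]       # P(digit(img2) = 4)
print("log P(derivation) =", derivation_log_prob([1.0, p1, p2]))

# P(addition(img1, img2, 7)) sums over all derivations that prove it,
# i.e. the digit pairs (0,7), (1,6), ..., (7,0); training maximises this
# sum and back-propagates through the networks' softmax outputs.
```

Because the distribution is over derivations rather than possible worlds, this sum-of-products can be computed along the grammar's parse structure, which is where the scaling advantage over neural probabilistic logic programs comes from.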
Related papers
- Towards Probabilistic Inductive Logic Programming with Neurosymbolic Inference and Relaxation [0.0]
We propose Propper, which handles flawed and probabilistic background knowledge.
For relational patterns in noisy images, Propper can learn programs from as few as 8 examples.
It outperforms binary ILP and statistical models such as a Graph Neural Network.
arXiv Detail & Related papers (2024-08-21T06:38:49Z)
- dPASP: A Comprehensive Differentiable Probabilistic Answer Set Programming Environment For Neurosymbolic Learning and Reasoning [0.0]
We present dPASP, a novel declarative logic programming framework for differentiable neuro-symbolic reasoning.
We discuss the several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete and/or statistical knowledge.
We then describe an implemented package that supports inference and learning in the language, along with several example programs.
arXiv Detail & Related papers (2023-08-05T19:36:58Z)
- Scalable Neural-Probabilistic Answer Set Programming [18.136093815001423]
We introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logic program, united via answer set programming (ASP).
We show how to prune the stochastically insignificant parts of the (ground) program, speeding up reasoning without sacrificing predictive performance.
We evaluate SLASH on a variety of tasks, including the benchmark task of MNIST addition and Visual Question Answering (VQA).
arXiv Detail & Related papers (2023-06-14T09:45:29Z)
- Neural Probabilistic Logic Programming in Discrete-Continuous Domains [9.94537457589893]
Neural-symbolic AI (NeSy) allows neural networks to exploit symbolic background knowledge in the form of logic.
Probabilistic NeSy focuses on integrating neural networks with both logic and probability theory.
DeepSeaProbLog is a neural probabilistic logic programming language that incorporates DPP techniques into NeSy.
arXiv Detail & Related papers (2023-03-08T15:27:29Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this objective efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
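The entry above turns on the gap between fuzzy relaxations and the true probabilistic objective. The toy Python calculation below (an illustration, not code from the paper) makes that gap concrete: when clauses share a variable, the product t-norm, which implicitly treats the clauses as independent, disagrees with exact weighted model counting.

```python
# Formula: (A or B) and (A or C), with independent Bernoulli beliefs.
pA, pB, pC = 0.5, 0.5, 0.5

# Exact probability by enumerating the 8 worlds (weighted model counting).
exact = 0.0
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            w = ((pA if a else 1 - pA) * (pB if b else 1 - pB)
                 * (pC if c else 1 - pC))
            if (a or b) and (a or c):
                exact += w

# Product-t-norm "fuzzy" value: multiply the two clause probabilities.
clause1 = 1 - (1 - pA) * (1 - pB)   # P(A or B)
clause2 = 1 - (1 - pA) * (1 - pC)   # P(A or C)
fuzzy = clause1 * clause2

print(f"exact = {exact:.4f}, fuzzy product approx = {fuzzy:.4f}")
# exact = 0.6250, fuzzy = 0.5625: the approximation underestimates because
# the clauses are positively dependent through the shared literal A.
```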
- Refining neural network predictions using background knowledge [68.35246878394702]
We show how to use logical background knowledge in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
arXiv Detail & Related papers (2022-06-10T10:17:59Z)
- VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming [3.759936323189418]
We present VAEL, a neuro-symbolic generative model integrating variational autoencoders (VAE) with the reasoning capabilities of probabilistic logic (L) programming.
arXiv Detail & Related papers (2022-02-07T10:16:53Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic Programmed Deep Kernels [93.58854458951431]
We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep learning baselines.
arXiv Detail & Related papers (2020-09-16T15:16:03Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
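As a rough illustration of how logical operations can be learned as neural modules, the sketch below wires randomly initialised modules for AND, OR and NOT into a computation graph for a propositional expression. The module shapes and the regulariser remark are illustrative assumptions, not the LINN authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # embedding dimension for "truth value" vectors

def mlp(in_dim, out_dim):
    """One randomly initialised linear layer + tanh, standing in for a
    trainable module."""
    W = rng.normal(scale=0.1, size=(out_dim, in_dim))
    b = np.zeros(out_dim)
    return lambda x: np.tanh(W @ x + b)

NOT = mlp(DIM, DIM)        # NOT(v): one vector in, one out
AND = mlp(2 * DIM, DIM)    # AND(v, w): concatenated pair in
OR = mlp(2 * DIM, DIM)

def land(v, w): return AND(np.concatenate([v, w]))
def lor(v, w): return OR(np.concatenate([v, w]))

# Evaluate (a AND NOT b) OR c as a graph over the learned modules.
a, b, c = (rng.normal(size=DIM) for _ in range(3))
out = lor(land(a, NOT(b)), c)
print(out.shape)  # (8,) -- a vector later scored against a TRUE anchor

# In this style of model, the modules are trained jointly on downstream
# tasks, with logical regularisers (e.g., NOT(NOT(x)) close to x)
# encouraging them to behave like their Boolean counterparts.
```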
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
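To show what "every neuron is a formula component in a weighted real-valued logic" can look like, here is a minimal sketch of one common choice, a weighted Łukasiewicz conjunction. The parameterisation and values are illustrative assumptions in the spirit of the paper, not its exact definition.

```python
import numpy as np

def and_neuron(x, w, beta):
    """Weighted Lukasiewicz AND: clamp(beta - sum_i w_i * (1 - x_i)) to
    [0, 1], where the inputs x_i are truth values in [0, 1]."""
    return float(np.clip(beta - np.dot(w, 1.0 - np.asarray(x)), 0.0, 1.0))

w = np.array([1.0, 1.0])  # per-input importance weights
beta = 1.0                # bias controlling how strict the AND is

print(and_neuron([1.0, 1.0], w, beta))  # 1.0 -> both true gives true
print(and_neuron([1.0, 0.0], w, beta))  # 0.0 -> one false kills the AND
print(and_neuron([0.8, 0.9], w, beta))  # 0.7 -> graded truth in between

# Because the neuron is a formula component, its weights can be learned
# by gradient descent while its output still reads as a truth value,
# which is what makes the representation interpretable.
```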