Neural Probabilistic Logic Programming in Discrete-Continuous Domains
- URL: http://arxiv.org/abs/2303.04660v1
- Date: Wed, 8 Mar 2023 15:27:29 GMT
- Title: Neural Probabilistic Logic Programming in Discrete-Continuous Domains
- Authors: Lennert De Smet and Pedro Zuidberg Dos Martires and Robin Manhaeve and
Giuseppe Marra and Angelika Kimmig and Luc De Raedt
- Abstract summary: Neural-symbolic AI (NeSy) allows neural networks to exploit symbolic background knowledge in the form of logic.
Probabilistic NeSy focuses on integrating neural networks with both logic and probability theory.
DeepSeaProbLog is a neural probabilistic logic programming language that incorporates DPP techniques into NeSy.
- Score: 9.94537457589893
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural-symbolic AI (NeSy) allows neural networks to exploit symbolic
background knowledge in the form of logic. It has been shown to aid learning in
the limited data regime and to facilitate inference on out-of-distribution
data. Probabilistic NeSy focuses on integrating neural networks with both logic
and probability theory, which additionally allows learning under uncertainty. A
major limitation of current probabilistic NeSy systems, such as DeepProbLog, is
their restriction to finite probability distributions, i.e., discrete random
variables. In contrast, deep probabilistic programming (DPP) excels in
modelling and optimising continuous probability distributions. Hence, we
introduce DeepSeaProbLog, a neural probabilistic logic programming language
that incorporates DPP techniques into NeSy. Doing so results in the support of
inference and learning of both discrete and continuous probability
distributions under logical constraints. Our main contributions are 1) the
semantics of DeepSeaProbLog and its corresponding inference algorithm, 2) a
proven asymptotically unbiased learning algorithm, and 3) a series of
experiments that illustrate the versatility of our approach.
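As a rough illustration of the discrete-continuous setting described in the abstract, the following PyTorch sketch estimates the probability of a logical query that mixes a neural-parameterised continuous variable with a discrete one, and differentiates through Monte Carlo samples via the reparameterisation trick. It is a minimal sketch under assumed names (Encoder, query_probability, the distance/obstacle query and its threshold), not DeepSeaProbLog syntax or its exact inference algorithm.

```python
# Minimal, self-contained sketch (not the DeepSeaProbLog API) of inference and
# learning with a neural-parameterised continuous variable and a discrete
# variable combined under a logical constraint. All names are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

class Encoder(nn.Module):
    """Maps an input (e.g. an image) to the mean and std of a Normal
    distribution over a latent quantity such as an object's distance."""
    def __init__(self, in_dim=4):
        super().__init__()
        self.net = nn.Linear(in_dim, 2)

    def forward(self, x):
        mu, log_std = self.net(x).unbind(-1)
        return mu, log_std.exp()

encoder = Encoder()
obstacle_logit = nn.Parameter(torch.tensor(0.0))  # discrete random variable: P(obstacle)
optimizer = torch.optim.Adam(list(encoder.parameters()) + [obstacle_logit], lr=1e-2)

def query_probability(x, n_samples=256, threshold=2.0):
    """Estimate P(distance > threshold AND NOT obstacle) by sampling.

    The continuous comparison is smoothed with a sigmoid so the Monte Carlo
    estimate stays differentiable; the discrete factor is handled exactly.
    """
    mu, std = encoder(x)
    eps = torch.randn(n_samples)              # reparameterisation trick:
    distance = mu + std * eps                 # samples stay differentiable in mu, std
    p_far = torch.sigmoid((distance - threshold) / 0.1).mean()
    p_no_obstacle = 1.0 - torch.sigmoid(obstacle_logit)
    return p_far * p_no_obstacle              # independence assumed in this sketch

# Training: push the probability of the query towards a supervised label.
x = torch.randn(4)
label = torch.tensor(1.0)  # "the query should hold for this input"
for _ in range(100):
    optimizer.zero_grad()
    p = query_probability(x)
    loss = nn.functional.binary_cross_entropy(p.clamp(1e-6, 1 - 1e-6), label)
    loss.backward()
    optimizer.step()
```

The paper's actual algorithm differs (queries are written as logic programs and, as the abstract notes, learning is asymptotically unbiased); the snippet only conveys the flavour of combining discrete and continuous random variables under a logical constraint.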
Related papers
- Towards Probabilistic Inductive Logic Programming with Neurosymbolic Inference and Relaxation [0.0]
We propose Propper, which handles flawed and probabilistic background knowledge.
For relational patterns in noisy images, Propper can learn programs from as few as 8 examples.
It outperforms binary ILP and statistical models such as a Graph Neural Network.
arXiv Detail & Related papers (2024-08-21T06:38:49Z)
- Recurrent Neural Language Models as Probabilistic Finite-state Automata [66.23172872811594]
We study what classes of probability distributions RNN LMs can represent.
We show that simple RNNs are equivalent to a subclass of probabilistic finite-state automata.
These results present a first step towards characterizing the classes of distributions RNN LMs can represent.
arXiv Detail & Related papers (2023-10-08T13:36:05Z)
- dPASP: A Comprehensive Differentiable Probabilistic Answer Set Programming Environment For Neurosymbolic Learning and Reasoning [0.0]
We present dPASP, a novel declarative logic programming framework for differentiable neuro-symbolic reasoning.
We discuss several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete, and/or statistical knowledge.
We then describe an implemented package that supports inference and learning in the language, along with several example programs.
arXiv Detail & Related papers (2023-08-05T19:36:58Z)
- Scalable Neural-Probabilistic Answer Set Programming [18.136093815001423]
We introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logic program, united via answer set programming (ASP).
We show how to prune the stochastically insignificant parts of the (ground) program, speeding up reasoning without sacrificing predictive performance.
We evaluate SLASH on a variety of tasks, including the benchmark task of MNIST addition and Visual Question Answering (VQA).
arXiv Detail & Related papers (2023-06-14T09:45:29Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- DeepStochLog: Neural Stochastic Logic Programming [15.938755941588159]
We show that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs.
DeepStochLog achieves state-of-the-art results on challenging neural symbolic learning tasks.
arXiv Detail & Related papers (2021-06-23T17:59:04Z)
- Probabilistic Deep Learning with Probabilistic Neural Networks and Deep Probabilistic Models [0.6091702876917281]
We distinguish two approaches to probabilistic deep learning: probabilistic neural networks and deep probabilistic models.
Probabilistic deep learning is deep learning that accounts for uncertainty, both model uncertainty and data uncertainty.
arXiv Detail & Related papers (2021-05-31T22:13:21Z)
- General stochastic separation theorems with optimal bounds [68.8204255655161]
The phenomenon of separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and to analyze AI instabilities.
Errors or clusters of errors can be separated from the rest of the data.
The ability to correct an AI system also opens up the possibility of an attack on it, and the high dimensionality induces vulnerabilities caused by the same separability.
arXiv Detail & Related papers (2020-10-11T13:12:41Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation (see the sketch after this list).
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
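As an illustration of the "weighted real-valued logic" mentioned for Logical Neural Networks above, here is a minimal sketch of a conjunction neuron under a Łukasiewicz-style formulation. The function name, the clamping form, and the parameters (weights, beta) are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a weighted real-valued logic AND-neuron in the spirit of Logical
# Neural Networks: truth values live in [0, 1], and the neuron realises a
# weighted Lukasiewicz-style conjunction with bias beta and per-input weights.
import torch

def weighted_and(truths: torch.Tensor, weights: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
    """AND(x) = clamp(beta - sum_i w_i * (1 - x_i), 0, 1)."""
    return torch.clamp(beta - (weights * (1.0 - truths)).sum(-1), 0.0, 1.0)

# With beta = 1 and all weights = 1 this reduces to the classical Lukasiewicz
# conjunction, e.g. AND(0.9, 0.8) = max(0, 0.9 + 0.8 - 1) = 0.7.
x = torch.tensor([0.9, 0.8])
print(weighted_and(x, torch.ones(2), torch.tensor(1.0)))  # tensor(0.7000)
```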