dPASP: A Comprehensive Differentiable Probabilistic Answer Set
Programming Environment For Neurosymbolic Learning and Reasoning
- URL: http://arxiv.org/abs/2308.02944v1
- Date: Sat, 5 Aug 2023 19:36:58 GMT
- Title: dPASP: A Comprehensive Differentiable Probabilistic Answer Set
Programming Environment For Neurosymbolic Learning and Reasoning
- Authors: Renato Lui Geh, Jonas Gonçalves, Igor Cataneo Silveira, Denis
Deratani Mauá, Fabio Gagliardi Cozman
- Abstract summary: We present dPASP, a novel declarative logic programming framework for differentiable neuro-symbolic reasoning.
We discuss the several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete and/or statistical knowledge.
We then describe an implemented package that supports inference and learning in the language, along with several example programs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present dPASP, a novel declarative probabilistic logic programming
framework for differentiable neuro-symbolic reasoning. The framework allows for
the specification of discrete probabilistic models with neural predicates,
logic constraints and interval-valued probabilistic choices, thus supporting
models that combine low-level perception (images, texts, etc), common-sense
reasoning, and (vague) statistical knowledge. To support all such features, we
discuss the several semantics for probabilistic logic programs that can express
nondeterministic, contradictory, incomplete and/or statistical knowledge. We
also discuss how gradient-based learning can be performed with neural
predicates and probabilistic choices under selected semantics. We then describe
an implemented package that supports inference and learning in the language,
along with several example programs. The package requires minimal user
knowledge of deep learning systems' inner workings, while allowing end-to-end
training of rather sophisticated models and loss functions.
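The probabilistic-choice semantics described in the abstract can be illustrated with a small, self-contained sketch. This is not dPASP's actual API or syntax: the toy program, the predicate names, and the brute-force enumeration are illustrative assumptions. Real probabilistic ASP programs may have several answer sets per total choice (which is where dPASP's multiple semantics matter); here the rules are definite, so each choice has a unique model.

```python
from itertools import product

# Toy probabilistic logic program (hypothetical, not dPASP syntax):
#   0.6::rain.   0.3::sprinkler.
#   wet :- rain.   wet :- sprinkler.
prob_facts = {"rain": 0.6, "sprinkler": 0.3}
rules = [("wet", ["rain"]), ("wet", ["sprinkler"])]  # (head, body)

def consequences(facts):
    """Least fixed point of the definite rules given the chosen facts."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def query(atom):
    """P(atom): sum the probabilities of all total choices entailing it."""
    names = list(prob_facts)
    total = 0.0
    for bits in product([True, False], repeat=len(names)):
        p = 1.0
        chosen = []
        for name, on in zip(names, bits):
            p *= prob_facts[name] if on else 1.0 - prob_facts[name]
            if on:
                chosen.append(name)
        if atom in consequences(chosen):
            total += p
    return total

print(query("wet"))  # 1 - 0.4*0.7 = 0.72
```

Enumeration over total choices is exponential in the number of probabilistic facts; practical systems replace it with knowledge compilation or approximate inference, but the semantics being computed is the same.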
Related papers
- Towards Probabilistic Inductive Logic Programming with Neurosymbolic Inference and Relaxation
We propose Propper, which handles flawed and probabilistic background knowledge.
For relational patterns in noisy images, Propper can learn programs from as few as 8 examples.
It outperforms binary ILP and statistical models such as a Graph Neural Network.
arXiv Detail & Related papers (2024-08-21T06:38:49Z)
- Scalable Neural-Probabilistic Answer Set Programming
We introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logic program, united via answer set programming (ASP).
We show how to prune the stochastically insignificant parts of the (ground) program, speeding up reasoning without sacrificing predictive performance.
We evaluate SLASH on a variety of different tasks, including the benchmark task of MNIST addition and Visual Question Answering (VQA).
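The MNIST addition benchmark mentioned above admits a compact inference sketch. The function below is a generic illustration of the marginalization step, not SLASH's NPP implementation; the digit distributions are hypothetical softmax outputs of two perception networks.

```python
# Sketch of the MNIST-addition inference step (illustrative, not SLASH's
# API): given two networks' softmax outputs over digits 0-9, the probability
# that the two images sum to s marginalizes over all pairs with d1 + d2 == s.

def addition_distribution(p1, p2):
    """P(sum = s) for s in 0..18 from two digit distributions."""
    out = [0.0] * 19
    for d1, q1 in enumerate(p1):
        for d2, q2 in enumerate(p2):
            out[d1 + d2] += q1 * q2
    return out

# Degenerate example: first image is surely a 3, second surely a 5.
p1 = [0.0] * 10; p1[3] = 1.0
p2 = [0.0] * 10; p2[5] = 1.0
dist = addition_distribution(p1, p2)  # dist[8] == 1.0
```

Because the sum is differentiable in `p1` and `p2`, a loss on the predicted sum backpropagates into both perception networks, which is what makes the benchmark a standard test for end-to-end neurosymbolic training.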
arXiv Detail & Related papers (2023-06-14T09:45:29Z)
- Neural Probabilistic Logic Programming in Discrete-Continuous Domains
Neural-symbolic AI (NeSy) allows neural networks to exploit symbolic background knowledge in the form of logic.
Probabilistic NeSy focuses on integrating neural networks with both logic and probability theory.
DeepSeaProbLog is a neural probabilistic logic programming language that incorporates DPP techniques into NeSy.
arXiv Detail & Related papers (2023-03-08T15:27:29Z)
- $\omega$PAP Spaces: Reasoning Denotationally About Higher-Order, Recursive Probabilistic and Differentiable Programs
$\omega$PAP spaces are spaces for reasoning denotationally about expressive differentiable and probabilistic programming languages.
Our semantics is general enough to assign meanings to most practical probabilistic and differentiable programs.
We establish the almost-everywhere differentiability of probabilistic programs' trace density functions.
arXiv Detail & Related papers (2023-02-21T12:50:05Z)
- Semantic Probabilistic Layers for Neuro-Symbolic Learning
We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints.
Our Semantic Probabilistic Layer (SPL) can model intricate correlations, and hard constraints, over a structured output space.
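A minimal sketch of the constraint-consistency idea behind such a layer (illustrative only: SPL compiles constraints into probabilistic circuits rather than enumerating outputs as done here) is to renormalize the network's output distribution over only those outputs that satisfy the symbolic constraints.

```python
import math

# Hypothetical sketch of constraint-consistent prediction: restrict a
# softmax over candidate outputs to those satisfying a symbolic constraint,
# then renormalize so the layer always emits a valid distribution.

def constrained_distribution(logits, satisfies):
    """Softmax restricted to output indices where satisfies(i) holds."""
    exps = [math.exp(l) if satisfies(i) else 0.0
            for i, l in enumerate(logits)]
    z = sum(exps)
    return [e / z for e in exps]

# Toy constraint: only even-indexed outputs are valid.
dist = constrained_distribution([1.0, 2.0, 3.0, 4.0], lambda i: i % 2 == 0)
```

Invalid outputs receive exactly zero mass by construction, so the consistency guarantee holds regardless of what the underlying network predicts; the circuit-based formulation in the paper achieves this without enumerating the (generally exponential) output space.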
arXiv Detail & Related papers (2022-06-01T12:02:38Z)
- VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming
We present VAEL, a neuro-symbolic generative model integrating variational autoencoders (VAE) with the reasoning capabilities of probabilistic logic (L) programming.
arXiv Detail & Related papers (2022-02-07T10:16:53Z)
- Structural Learning of Probabilistic Sentential Decision Diagrams under Partial Closed-World Assumption
Probabilistic sentential decision diagrams are a class of structured-decomposable circuits.
We propose a new scheme based on a partial closed-world assumption: data implicitly provide the logical base of the circuit.
Preliminary experiments show that the proposed approach might properly fit training data, and generalize well to test data, provided that these remain consistent with the underlying logical base.
arXiv Detail & Related papers (2021-07-26T12:01:56Z)
- DeepStochLog: Neural Stochastic Logic Programming
We show that inference and learning in neural stochastic logic programming scale much better than in neural probabilistic logic programming.
DeepStochLog achieves state-of-the-art results on challenging neural symbolic learning tasks.
arXiv Detail & Related papers (2021-06-23T17:59:04Z)
- Online Learning Probabilistic Event Calculus Theories in Answer Set Programming
Complex Event Recognition (CER) systems detect event occurrences in streaming, time-stamped datasets using predefined event patterns.
We present a system based on Answer Set Programming (ASP), capable of probabilistic reasoning with complex event patterns in the form of rules weighted in the Event Calculus.
Our results demonstrate the superiority of our novel approach, in terms of both efficiency and predictive accuracy.
arXiv Detail & Related papers (2021-03-31T23:16:29Z)
- Estimating Structural Target Functions using Machine Learning and Influence Functions
We propose a new framework for statistical machine learning of target functions arising as identifiable functionals from statistical models.
This framework is problem- and model-agnostic and can be used to estimate a broad variety of target parameters of interest in applied statistics.
We put particular focus on so-called coarsening at random/doubly robust problems with partially unobserved information.
arXiv Detail & Related papers (2020-08-14T16:48:29Z)
- Logical Neural Networks
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
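A hedged sketch of one such weighted real-valued logic neuron (the exact parameterization, activation, and bias handling in Logical Neural Networks differ; this Łukasiewicz-style conjunction is only illustrative):

```python
# Illustrative weighted real-valued conjunction neuron: truth values live in
# [0, 1], each input's falsity (1 - t) is penalized by its weight, and the
# result is clamped back into [0, 1].

def and_neuron(truths, weights, bias=1.0):
    """Weighted Lukasiewicz-style conjunction, clamped to [0, 1]."""
    v = bias - sum(w * (1.0 - t) for w, t in zip(weights, truths))
    return max(0.0, min(1.0, v))
```

With unit weights and bias, the neuron reduces to the classical Łukasiewicz t-norm max(0, x1 + x2 - 1), so it agrees with Boolean conjunction on crisp inputs; making the weights learnable is what lets such a neuron trade off logical strictness against data fit.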
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.