VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming
- URL: http://arxiv.org/abs/2202.04178v1
- Date: Mon, 7 Feb 2022 10:16:53 GMT
- Title: VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming
- Authors: Eleonora Misino, Giuseppe Marra, Emanuele Sansone
- Abstract summary: We present VAEL, a neuro-symbolic generative model integrating variational autoencoders (VAE) with the reasoning capabilities of probabilistic logic (L) programming.
- Score: 3.759936323189418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present VAEL, a neuro-symbolic generative model integrating variational autoencoders (VAE) with the reasoning capabilities of probabilistic logic (L) programming. Besides standard latent subsymbolic variables, our model exploits a probabilistic logic program to define a further structured representation, which is used for logical reasoning. The entire process is end-to-end differentiable. Once trained, VAEL can solve new unseen generation tasks by (i) leveraging the previously acquired knowledge encoded in the neural component and (ii) exploiting new logical programs on the structured latent space. Our experiments provide support for the benefits of this neuro-symbolic integration both in terms of task generalization and data efficiency. To the best of our knowledge, this work is the first to propose a general-purpose end-to-end framework integrating probabilistic logic programming into a deep generative model.
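For concreteness, below is a minimal PyTorch sketch of the VAEL idea on an MNIST-addition-style task. Everything here (layer sizes, the `addition_program` helper, treating the symbolic latent as a digit distribution) is an illustrative assumption rather than the authors' exact architecture; the point is that the logic-program step is an exact, differentiable probability computation, so reconstruction, KL, and logical-query losses can be trained jointly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAELSketch(nn.Module):
    """Two-part latent space: subsymbolic z plus a structured symbolic part
    (here: a distribution over digit symbols). Hypothetical sizes/names."""
    def __init__(self, x_dim=784, z_dim=16, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.sym = nn.Linear(256, n_classes)
        self.dec = nn.Sequential(nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam. trick
        p_sym = F.softmax(self.sym(h), dim=-1)                # symbolic latent
        x_hat = torch.sigmoid(self.dec(torch.cat([z, p_sym], dim=-1)))
        return x_hat, mu, logvar, p_sym

def addition_program(p_a, p_b, target_sum):
    """The 'probabilistic logic program' step: exact, differentiable
    P(a + b = target) obtained by marginalizing over all digit pairs."""
    n = p_a.shape[1]
    joint = p_a.unsqueeze(2) * p_b.unsqueeze(1)               # P(a, b)
    mask = torch.tensor([[float(a + b == target_sum) for b in range(n)]
                         for a in range(n)])
    return (joint * mask).sum(dim=(1, 2))

model = VAELSketch()
x = torch.rand(4, 784)                     # stand-in for image batches
x_hat, mu, logvar, p_sym = model(x)
kl = -0.5 * (1 + logvar - mu**2 - logvar.exp()).sum(1).mean()
p_query = addition_program(p_sym, p_sym, target_sum=7)  # toy: image vs. itself
loss = F.binary_cross_entropy(x_hat, x) + kl - torch.log(p_query + 1e-8).mean()
loss.backward()                            # whole pipeline is differentiable
```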
Related papers
- stl2vec: Semantic and Interpretable Vector Representation of Temporal Logic [0.5956301166481089]
We propose a semantically grounded vector representation (feature embedding) of logic formulae.
We compute continuous embeddings of formulae with several desirable properties.
We demonstrate the efficacy of the approach in two tasks: learning model checking and integration into a neurosymbolic framework.
arXiv Detail & Related papers (2024-05-23T10:04:56Z)
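As a toy rendering of what a semantic embedding of formulae can mean: represent a temporal formula by its quantitative satisfaction (robustness) on a fixed set of anchor signals, so semantically close formulae receive close vectors. The anchor-signal construction below is my own simplification; stl2vec derives its embedding from a kernel over STL semantics.

```python
import numpy as np

rng = np.random.default_rng(0)
ANCHORS = rng.standard_normal((64, 100))  # 64 random 1-D signals, 100 steps

def eventually_gt(threshold):
    """Robustness of 'eventually (x > threshold)' under quantitative
    semantics: the best margin achieved anywhere along the signal."""
    return lambda signal: np.max(signal - threshold)

def always_gt(threshold):
    """Robustness of 'always (x > threshold)': the worst margin."""
    return lambda signal: np.min(signal - threshold)

def embed(robustness_fn):
    # The formula's robustness profile on shared anchor signals is its
    # vector: semantically similar formulae get similar embeddings.
    return np.array([robustness_fn(s) for s in ANCHORS])

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

phi = embed(eventually_gt(0.5))
psi = embed(eventually_gt(0.6))   # a semantically close formula
chi = embed(always_gt(0.5))       # a semantically distant one
print(f"close pair: {cos(phi, psi):.3f}, distant pair: {cos(phi, chi):.3f}")
```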
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
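A back-of-the-envelope picture of the bi-level idea, in which symbolic reasoning corrects neural predictions: the hypothetical rule set and the veto-then-renormalize step below are my own simplification, not GBPGR's actual inference procedure.

```python
import numpy as np

LABELS = ["cat", "dog", "car"]

def consistent(label, attributes):
    # Symbolic knowledge (hypothetical rule): anything with fur is not a car.
    return not (label == "car" and "fur" in attributes)

def refine(neural_probs, attributes):
    """Bi-level step: symbolic reasoning vetoes inconsistent predictions of
    the low-level neural model, and the rest is renormalized."""
    mask = np.array([consistent(l, attributes) for l in LABELS], dtype=float)
    p = neural_probs * mask
    return p / p.sum()

p = np.array([0.30, 0.25, 0.45])        # raw neural output: "car" wins
print(refine(p, attributes={"fur"}))    # reasoning corrects it to "cat"
```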
- dPASP: A Comprehensive Differentiable Probabilistic Answer Set Programming Environment For Neurosymbolic Learning and Reasoning [0.0]
We present dPASP, a novel declarative logic programming framework for differentiable neuro-symbolic reasoning.
We discuss several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete, and/or statistical knowledge.
We then describe an implemented package that supports inference and learning in the language, along with several example programs.
arXiv Detail & Related papers (2023-08-05T19:36:58Z)
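To make the probabilistic answer set semantics concrete, here is a brute-force enumerator over total choices of probabilistic facts. The program and the uniform treatment of multiple answer sets are illustrative assumptions of mine; dPASP itself supports several alternative semantics plus neural predicates.

```python
from itertools import product

PFACTS = [("stress", 0.3), ("influences", 0.2)]   # probabilistic facts

def answer_sets(choice):
    """Logic program:  smokes :- stress.   smokes :- influences.
    For this program every total choice has exactly one answer set."""
    atoms = {name for name, on in choice.items() if on}
    if atoms & {"stress", "influences"}:
        atoms.add("smokes")
    return [atoms]

def query(atom):
    prob = 0.0
    for bits in product([True, False], repeat=len(PFACTS)):
        choice = {name: b for (name, _), b in zip(PFACTS, bits)}
        weight = 1.0
        for (name, p), b in zip(PFACTS, bits):
            weight *= p if b else 1.0 - p
        sets = answer_sets(choice)
        prob += weight * sum(atom in s for s in sets) / len(sets)
    return prob

print(f"P(smokes) = {query('smokes'):.2f}")   # 1 - 0.7 * 0.8 = 0.44
```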
- Join-Chain Network: A Logical Reasoning View of the Multi-head Attention in Transformer [59.73454783958702]
We propose a symbolic reasoning architecture that chains many join operators together to model output logical expressions.
In particular, we demonstrate that such an ensemble of join-chains can express a broad subset of ''tree-structured'' first-order logical expressions, named FOET.
We find that the widely used multi-head self-attention module in transformers can be understood as a special neural operator that implements the union bound of the join operator in probabilistic predicate space.
arXiv Detail & Related papers (2022-10-06T07:39:58Z)
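The claimed correspondence is easy to see in a toy example: one join step, P(exists y. p(x,y) and q(y,z)), is upper-bounded via the union bound by sum_y p(x,y)q(y,z), which is exactly a matrix product, the same bilinear form one attention head applies; several heads contribute joins whose union is again bounded by their sum. The toy relations below are my own illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)  # soft p(x, y)
Q = rng.random((n, n)); Q /= Q.sum(axis=1, keepdims=True)  # soft q(y, z)

# One join: P(exists y. p(x,y) and q(y,z)) <= sum_y p(x,y) * q(y,z)
# (the union bound), and the right-hand side is exactly a matrix product.
join_pq = np.clip(P @ Q, 0.0, 1.0)

# Several heads = several joins; their union is again bounded by a sum.
heads = [join_pq, np.clip(Q @ P, 0.0, 1.0)]
union = np.clip(np.sum(heads, axis=0), 0.0, 1.0)
print(union.round(2))
```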
- LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving the logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base- and large-size language models pre-trained with LogiGAN demonstrate clear performance improvements on 12 datasets.
arXiv Detail & Related papers (2022-05-18T08:46:49Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNNs).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
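A minimal sketch of differentiable rule induction in this spirit: learnable inclusion gates over candidate body predicates, a soft conjunction, and gradient descent. The product-based conjunction and the tiny grandparent-style dataset are my assumptions; LNNs use a parameterized weighted real-valued logic rather than a plain product.

```python
import torch
import torch.nn.functional as F

# Truth values of candidate body predicates for a target rule, per example:
#                     parent(X,Y)  parent(Y,Z)  sibling(X,Y)
body = torch.tensor([[1., 1., 0.],    # positive example
                     [1., 1., 1.],    # positive example
                     [1., 0., 1.],    # negative example
                     [0., 1., 0.]])   # negative example
label = torch.tensor([1., 1., 0., 0.])

w = torch.zeros(3, requires_grad=True)    # learnable inclusion gates
opt = torch.optim.Adam([w], lr=0.1)
for _ in range(500):
    gate = torch.sigmoid(w)
    # Soft conjunction: an excluded predicate (gate ~ 0) behaves as "true".
    pred = torch.prod(1.0 - gate * (1.0 - body), dim=1)
    loss = F.binary_cross_entropy(pred, label)
    opt.zero_grad(); loss.backward(); opt.step()

# High gates on parent(X,Y), parent(Y,Z): the learned rule stays readable.
print(torch.sigmoid(w).detach().round(decimals=2))
```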
- SLASH: Embracing Probabilistic Circuits into Neural Answer Set Programming [15.814914345000574]
We introduce SLASH, a novel deep probabilistic programming language (DPPL).
At its core, SLASH consists of Neural-Probabilistic Predicates (NPPs) and logical programs which are united via answer set programming.
We evaluate SLASH on the MNIST-addition benchmark as well as on novel tasks for DPPLs, such as missing-data prediction and set prediction, achieving state-of-the-art performance.
arXiv Detail & Related papers (2021-10-07T12:35:55Z)
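A minimal sketch of the NPP abstraction described in the summary above: a network realizes a predicate by outputting a distribution over its possible groundings, and a logic-level query simply marginalizes that output. The even(X) query and the two-layer net are illustrative assumptions; SLASH also admits probabilistic circuits as NPPs.

```python
import torch
import torch.nn as nn

class DigitNPP(nn.Module):
    """npp(digit(X), [0..9]): a network realizing a predicate by mapping an
    image to a distribution over its possible groundings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(784, 128), nn.ReLU(),
                                 nn.Linear(128, 10), nn.Softmax(dim=-1))

    def forward(self, image):
        return self.net(image)

npp = DigitNPP()
img = torch.rand(1, 1, 28, 28)           # stand-in for an MNIST image
p_digit = npp(img)                       # P(digit(img) = d), d = 0..9
# A logic-level query "even(img)" marginalizes the NPP's output:
p_even = p_digit[0, 0::2].sum()
print(f"P(even) = {p_even.item():.3f}")
```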
- DeepStochLog: Neural Stochastic Logic Programming [15.938755941588159]
We show that inference and learning in neural stochastic logic programming scale much better than in neural probabilistic logic programs.
DeepStochLog achieves state-of-the-art results on challenging neural symbolic learning tasks.
arXiv Detail & Related papers (2021-06-23T17:59:04Z)
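DeepStochLog builds on stochastic definite clause grammars, where a derivation's probability is the product of its rule probabilities and inference sums over derivations. A toy, purely tabular version might look as follows; the fixed probabilities below stand in for the neural network outputs that DeepStochLog would supply, and the grammar itself is my own example.

```python
# Grammar:  e(N) --> digit(N).                with probability 0.6
#           e(N) --> digit(A), '+', e(B),     with probability 0.4
#                    N = A + B.
RULE_P = {"base": 0.6, "plus": 0.4}
DIGIT_P = {str(d): 0.1 for d in range(10)}  # stand-in for a neural classifier

def prob_expression(tokens, value):
    """P(tokens parse as an expression evaluating to `value`): the product of
    rule probabilities along a derivation, summed over derivations."""
    if len(tokens) == 1:
        tok = tokens[0]
        return RULE_P["base"] * DIGIT_P[tok] if int(tok) == value else 0.0
    if len(tokens) >= 3 and tokens[1] == "+":
        head = int(tokens[0])
        return (RULE_P["plus"] * DIGIT_P[tokens[0]]
                * prob_expression(tokens[2:], value - head))
    return 0.0

print(prob_expression(list("1+2+3"), 6))  # 0.4 * 0.1 * 0.4 * 0.1 * 0.6 * 0.1
```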
- Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework that seamlessly provides key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
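To illustrate "every neuron is a formula component": below is a weighted real-valued conjunction in the Lukasiewicz style, whose parameters remain readable as a logical gate. The exact parameter constraints LNNs impose, and their bound-based omnidirectional inference, are omitted; this is only a sketch of the neuron semantics.

```python
import numpy as np

def lnn_and(x, w, beta):
    """Weighted real-valued conjunction (Lukasiewicz style): the neuron *is*
    the formula 'and(x_1, ..., x_n)', and (w, beta) stay readable as a gate."""
    return float(np.clip(beta - np.sum(w * (1.0 - np.asarray(x))), 0.0, 1.0))

def lnn_or(x, w, beta):
    # Derived by De Morgan: or(x) = 1 - and(1 - x).
    return 1.0 - lnn_and(1.0 - np.asarray(x), w, beta)

w, beta = np.array([1.0, 1.0]), 1.0
print(lnn_and([0.9, 0.8], w, beta))   # 0.7: "mostly true" conjunction
print(lnn_or([0.1, 0.2], w, beta))    # 0.3: "mostly false" disjunction
```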
- Relational Neural Machines [19.569025323453257]
This paper presents a novel framework for jointly training the parameters of the learners and of a First-Order Logic based reasoner.
A Relational Neural Machine is able to recover both classical learning results, in the case of pure sub-symbolic learning, and Markov Logic Networks.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z)
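A rough sketch of what such joint training can look like: a supervised loss on the learner plus a relaxed First-Order Logic penalty from the reasoner, optimized together. The product t-norm relaxation and the single rule are my illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 2, requires_grad=True)  # per-example [cat, animal]
labels = torch.randint(0, 2, (8, 2)).float()    # supervised targets

def rule_penalty(p):
    """FOL rule cat(x) -> animal(x), relaxed with a product t-norm:
    truth = 1 - p_cat * (1 - p_animal); penalize its violation."""
    truth = 1.0 - p[:, 0] * (1.0 - p[:, 1])
    return (1.0 - truth).mean()

p = torch.sigmoid(logits)
# Learner (cross-entropy) and reasoner (logic penalty) in one objective:
loss = F.binary_cross_entropy(p, labels) + 2.0 * rule_penalty(p)
loss.backward()
print(float(loss))
```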
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.