Towards Invertible Semantic-Preserving Embeddings of Logical Formulae
- URL: http://arxiv.org/abs/2305.03143v1
- Date: Wed, 3 May 2023 10:49:01 GMT
- Title: Towards Invertible Semantic-Preserving Embeddings of Logical Formulae
- Authors: Gaia Saveri and Luca Bortolussi
- Abstract summary: Learning and optimising logic requirements and rules has always been an important problem in Artificial Intelligence.
Current methods are able to construct effective semantic-preserving embeddings via kernel methods, but the map they define is not invertible.
In this work we address this problem, learning how to invert such an embedding leveraging deep architectures based on the Graph Variational Autoencoder framework.
- Score: 1.0152838128195467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Logic is the main formal language to perform automated reasoning, and it is
further a human-interpretable language, at least for small formulae. Learning
and optimising logic requirements and rules has always been an important
problem in Artificial Intelligence. State of the art Machine Learning (ML)
approaches are mostly based on gradient descent optimisation in continuous
spaces, while learning logic is framed in the discrete syntactic space of
formulae. Using continuous optimisation to learn logic properties is a
challenging problem, requiring formulae to be embedded in a continuous space in a
meaningful way, i.e. one that preserves their semantics. Current methods are able to
construct effective semantic-preserving embeddings via kernel methods (for
linear temporal logic), but the map they define is not invertible. In this work
we address this problem, learning how to invert such an embedding leveraging
deep architectures based on the Graph Variational Autoencoder framework. We
propose a novel model specifically designed for this setting, justifying our
design choices through an extensive experimental evaluation. Reported results
in the context of propositional logic are promising, and several challenges
regarding learning invertible embeddings of formulae are highlighted and
addressed.
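To make the notion of a semantic-preserving embedding concrete, the following is a minimal sketch of the underlying idea for propositional logic: a formula is represented by its truth values on a fixed random sample of assignments, so that logically equivalent formulae receive identical vectors. This is only an illustration under simplifying assumptions, not the kernel construction or the Graph VAE model of the paper; the function names and sampling scheme are hypothetical.

```python
# Minimal sketch (not the paper's kernel embedding): represent a propositional
# formula by its truth values on a fixed random sample of assignments, so that
# semantically equivalent formulae map to the same vector.
import numpy as np

def semantic_embedding(formula, n_vars, n_samples=256, seed=0):
    """formula: callable taking a boolean array of length n_vars and returning a bool."""
    rng = np.random.default_rng(seed)
    assignments = rng.integers(0, 2, size=(n_samples, n_vars)).astype(bool)
    # Each coordinate of the embedding is the formula's truth value on one sampled assignment.
    return np.array([float(formula(a)) for a in assignments])

# Two syntactically different but logically equivalent formulae (De Morgan).
f = lambda a: (a[0] and a[1]) or (not a[2])
g = lambda a: not ((not a[0] or not a[1]) and a[2])

assert np.allclose(semantic_embedding(f, 3), semantic_embedding(g, 3))  # same semantics, same vector
```

Inverting such a map means recovering a formula (a syntax tree) whose truth function matches a given vector; the paper learns an approximation of this inverse with a decoder based on the Graph Variational Autoencoder framework.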
Related papers
- Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules [9.303501974597548]
We introduce precision and recall for logical rules and define their composition as rule utility.
We introduce SPECTRUM, a scalable framework for learning logical models from relational data.
arXiv Detail & Related papers (2024-09-24T16:54:12Z)
- Learning to Estimate System Specifications in Linear Temporal Logic using Transformers and Mamba [6.991281327290525]
Specification mining involves extracting temporal logic formulae from system traces.
We introduce autoregressive models that can generate linear temporal logic formulae from traces.
We devise a metric for the distinctiveness of the generated formulae and an algorithm to enforce the syntax constraints.
arXiv Detail & Related papers (2024-05-31T15:21:53Z)
- stl2vec: Semantic and Interpretable Vector Representation of Temporal Logic [0.5956301166481089]
We propose a semantically grounded vector representation (feature embedding) of logic formulae.
We compute continuous embeddings of formulae with several desirable properties.
We demonstrate the efficacy of the approach in two tasks: learning model checking and integration into a neurosymbolic framework.
arXiv Detail & Related papers (2024-05-23T10:04:56Z)
- Empower Nested Boolean Logic via Self-Supervised Curriculum Learning [67.46052028752327]
We find that pre-trained language models, even including large language models, behave like a random selector when faced with multi-nested Boolean logic.
To empower language models with this fundamental capability, this paper proposes a new self-supervised learning method, Curriculum Logical Reasoning (CLR).
arXiv Detail & Related papers (2023-10-09T06:54:02Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training (a generic sketch of such a relaxation appears after this list).
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Unifying Framework for Optimizations in non-boolean Formalisms [0.6853165736531939]
Many popular automated reasoning paradigms provide languages supporting optimization statements.
Here we propose a unifying framework that eliminates syntactic distinctions between paradigms.
We study formal properties of the proposed systems that translate into formal properties of paradigms that can be captured within our framework.
arXiv Detail & Related papers (2022-06-16T00:38:19Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Refining Labelled Systems for Modal and Constructive Logics with Applications [0.0]
This thesis serves as a means of transforming the semantics of a modal and/or constructive logic into an 'economical' proof system.
The refinement method connects two proof-theoretic paradigms: labelled and nested sequent calculi.
The introduced refined labelled calculi will be used to provide the first proof-search algorithms for deontic STIT logics.
arXiv Detail & Related papers (2021-07-30T08:27:15Z)
- Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
- Logical Natural Language Generation from Open-Domain Tables [107.04385677577862]
We propose a new task where a model is tasked with generating natural language statements that can be logically entailed by the facts.
To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), which features a wide range of logical/symbolic inferences.
The new task poses challenges to the existing monotonic generation frameworks due to the mismatch between sequence order and logical order.
arXiv Detail & Related papers (2020-04-22T06:03:10Z)
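As referenced in the LOGICSEG entry above, logic-induced training typically relies on a fuzzy, continuous relaxation of the logical connectives, so that the truth degree of a grounded formula becomes differentiable and can serve as a training loss. The sketch below is a generic illustration of that idea using the product t-norm; the predicate names, truth degrees, and the example rule are hypothetical and are not taken from the LOGICSEG or Logical Neural Networks papers.

```python
# Generic sketch of a fuzzy-logic relaxation (product t-norm); illustrative only.
import torch

def f_not(a):          # fuzzy negation
    return 1.0 - a

def f_and(a, b):       # conjunction as product t-norm
    return a * b

def f_or(a, b):        # disjunction as probabilistic sum
    return a + b - a * b

def f_implies(a, b):   # a -> b rewritten as (not a) or b
    return f_or(f_not(a), b)

# Hypothetical soft truth degrees, e.g. sigmoid outputs of a network
# for the predicates cat(x) and animal(x) over a batch of three inputs.
p_cat = torch.tensor([0.9, 0.2, 0.7], requires_grad=True)
p_animal = torch.tensor([0.8, 0.9, 0.3], requires_grad=True)

# Ground the rule "cat(x) -> animal(x)" on the batch and penalise its violation.
rule_truth = f_implies(p_cat, p_animal)
logic_loss = (1.0 - rule_truth).mean()  # differentiable, so gradients flow back to the network
logic_loss.backward()
print(p_cat.grad, p_animal.grad)
```

Weighted real-valued logics such as the Logical Neural Networks listed above follow a similar pattern, with learnable weights attached to each connective.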
This list is automatically generated from the titles and abstracts of the papers in this site.