Logic of Differentiable Logics: Towards a Uniform Semantics of DL
- URL: http://arxiv.org/abs/2303.10650v4
- Date: Thu, 5 Oct 2023 11:17:08 GMT
- Title: Logic of Differentiable Logics: Towards a Uniform Semantics of DL
- Authors: Natalia Ślusarz, Ekaterina Komendantskaya, Matthew L. Daggitt,
Robert Stewart, Kathrin Stark
- Abstract summary: Differentiable logics (DLs) have been proposed as a method of training neural networks to satisfy logical specifications.
This paper proposes a meta-language for defining DLs that we call the Logic of Differentiable Logics, or LDL.
We use LDL to establish several theoretical properties of existing DLs, and to conduct their empirical study in neural network verification.
- Score: 1.1549572298362787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentiable logics (DLs) have recently been proposed as a method of
training neural networks to satisfy logical specifications. A DL consists of a
syntax in which specifications are stated and an interpretation function that
translates expressions in the syntax into loss functions. These loss functions
can then be used during training with standard gradient descent algorithms. The
variety of existing DLs and the differing levels of formality with which they
are treated make a systematic comparative study of their properties and
implementations difficult. This paper remedies this problem by suggesting a
meta-language for defining DLs that we call the Logic of Differentiable Logics,
or LDL. Syntactically, LDL generalises the syntax of existing DLs to first-order
logic (FOL) and, for the first time, introduces a formalism for reasoning about vectors and
learners. Semantically, it introduces a general interpretation function that
can be instantiated to define loss functions arising from different existing
DLs. We use LDL to establish several theoretical properties of existing DLs,
and to conduct their empirical study in neural network verification.
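The abstract's key mechanism, an interpretation function that maps logical syntax to loss functions, can be pictured with a minimal sketch. The particular translation below (non-negative penalties, conjunction as a sum) is one illustrative choice in the spirit of existing DLs such as DL2, not the paper's actual LDL semantics; all names are ours.

```python
import torch

def leq(x, y):
    # Penalty for the atom "x <= y": zero when satisfied, positive otherwise.
    return torch.relu(x - y)

def conj(*losses):
    # Conjunction as a sum of penalties: zero iff every conjunct holds.
    return sum(losses)

def spec_loss(output, lo, hi):
    # The specification "all outputs lie in [lo, hi]" as a differentiable loss.
    return conj(leq(lo, output).sum(), leq(output, hi).sum())

output = torch.tensor([0.2, 0.9, 1.3], requires_grad=True)
loss = spec_loss(output, lo=torch.tensor(0.0), hi=torch.tensor(1.0))
loss.backward()  # gradients push the (hypothetical) network towards the spec
print(loss.item(), output.grad)
```

Because the penalty is zero exactly when the specification holds, minimising it with gradient descent steers the network towards satisfying the property; existing DLs differ precisely in how connectives such as conjunction and implication are interpreted.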
Related papers
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by the resulting LLM-based symbolic programs (LSPs) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and to other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, enabling logic-induced network training (a small illustration follows this entry).
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
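The fuzzy relaxation mentioned in the LOGICSEG entry above can be sketched in a few lines. The product t-norm, the Reichenbach implication, and the class names below are our assumptions for illustration, not the paper's actual formulation:

```python
import torch

def t_norm(a, b):
    # Fuzzy conjunction (product t-norm).
    return a * b

def implies(a, b):
    # Reichenbach fuzzy implication: 1 - a + a*b.
    return 1.0 - a + a * b

# Per-pixel class probabilities from a hypothetical segmentation head.
p_cat = torch.rand(4, 4, requires_grad=True)
p_animal = torch.rand(4, 4, requires_grad=True)
p_road = torch.rand(4, 4, requires_grad=True)

# Ground two rules pixelwise: the hierarchy rule "cat -> animal" and the
# exclusion rule "not (cat and road)"; rule violations become a penalty.
loss = (1.0 - implies(p_cat, p_animal)).mean() + t_norm(p_cat, p_road).mean()
loss.backward()  # gradients flow into the (hypothetical) network outputs
```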
- Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [85.12429517510311]
LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
arXiv Detail & Related papers (2023-08-24T06:50:07Z)
- Lattice-preserving $\mathcal{ALC}$ ontology embeddings with saturation [50.05281461410368]
An order-preserving embedding method is proposed to generate embeddings of OWL representations.
We show that our method outperforms state-of-the-art embedding methods in several knowledge base completion tasks.
arXiv Detail & Related papers (2023-05-11T22:27:51Z)
- Dual Box Embeddings for the Description Logic EL++ [16.70961576041243]
Like Knowledge Graphs (KGs), ontologies are often incomplete, and maintaining and constructing them has proved challenging.
Similar to KGs, a promising approach is to learn embeddings in a latent vector space, while additionally ensuring they adhere to the semantics of the underlying DL.
We propose a novel ontology embedding method named Box2EL for the DL EL++, which represents both concepts and roles as boxes (see the sketch after this entry).
arXiv Detail & Related papers (2023-01-26T14:13:37Z)
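A minimal sketch of the box-embedding idea behind Box2EL, under simplifying assumptions of ours (concepts only, random initialisation; the paper additionally represents roles as boxes): each concept is an axis-aligned box, and a subsumption axiom becomes a containment penalty.

```python
import torch

class Box:
    """An axis-aligned box with a learnable center and half side lengths."""
    def __init__(self, dim):
        self.center = torch.randn(dim, requires_grad=True)
        self.offset = torch.rand(dim, requires_grad=True)

    def lower(self):
        return self.center - self.offset.abs()

    def upper(self):
        return self.center + self.offset.abs()

def inclusion_loss(c, d):
    # Zero when box c lies inside box d, positive otherwise.
    return (torch.relu(d.lower() - c.lower())
            + torch.relu(c.upper() - d.upper())).sum()

cat, animal = Box(8), Box(8)
loss = inclusion_loss(cat, animal)  # grounds the axiom Cat ⊑ Animal
loss.backward()
```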
- A Formal Comparison between Datalog-based Languages for Stream Reasoning (extended version) [4.441335529279506]
The paper investigates the relative expressiveness of two logic-based languages for reasoning over streams.
We show that, without any restrictions, the two languages are incomparable, and we identify fragments of each language that can be expressed via the other.
arXiv Detail & Related papers (2022-08-26T15:27:21Z)
- Learning First-Order Rules with Differentiable Logic Program Semantics [12.360002779872373]
We introduce a differentiable inductive logic programming model called the differentiable first-order rule learner (DFOL).
DFOL finds correct logic programs (LPs) from relational facts by searching for interpretable matrix representations of LPs.
Experimental results indicate that DFOL is a precise, robust, scalable, and computationally cheap differentiable ILP model.
arXiv Detail & Related papers (2022-04-28T15:33:43Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently introduced Logical Neural Networks (LNNs).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- SAT-Based Rigorous Explanations for Decision Lists [17.054229845836332]
Decision lists (DLs) are widely used for classification problems in Machine Learning (ML).
We argue that interpretability is an elusive goal for some DLs.
This paper shows that computing explanations for DLs is computationally hard.
arXiv Detail & Related papers (2021-05-14T12:06:12Z)
- Defeasible reasoning in Description Logics: an overview on DL^N [10.151828072611426]
We provide an overview of DLN, illustrating the underlying knowledge engineering requirements as well as the characteristic features that shield DLN from some recurrent semantic and computational drawbacks.
We also compare DLN with some alternative nonmonotonic semantics, highlighting the relationships between the KLM postulates and DLN.
arXiv Detail & Related papers (2020-09-10T16:30:30Z)