logLTN: Differentiable Fuzzy Logic in the Logarithm Space
- URL: http://arxiv.org/abs/2306.14546v1
- Date: Mon, 26 Jun 2023 09:39:05 GMT
- Title: logLTN: Differentiable Fuzzy Logic in the Logarithm Space
- Authors: Samy Badreddine, Luciano Serafini, Michael Spranger
- Abstract summary: A trend in the literature involves integrating axioms and facts in loss functions by grounding logical symbols with neural networks and fuzzy semantics.
This paper presents a configuration of fuzzy operators for grounding formulas end-to-end in the logarithm space.
Our findings, both formal and empirical, show that the proposed configuration outperforms the state-of-the-art.
- Score: 11.440949097704943
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The AI community is increasingly focused on merging logic with deep learning
to create Neuro-Symbolic (NeSy) paradigms and assist neural approaches with
symbolic knowledge. A significant trend in the literature involves integrating
axioms and facts in loss functions by grounding logical symbols with neural
networks and operators with fuzzy semantics. Logic Tensor Networks (LTN) is one
of the main representatives in this category, known for its simplicity,
efficiency, and versatility. However, it has been previously shown that not all
fuzzy operators perform equally when applied in a differentiable setting.
Researchers have proposed several configurations of operators, trading off
between effectiveness, numerical stability, and generalization to different
formulas. This paper presents a configuration of fuzzy operators for grounding
formulas end-to-end in the logarithm space. Our goal is to develop a
configuration that is more effective than previous proposals, able to handle
any formula, and numerically stable. To achieve this, we propose semantics that
are best suited for the logarithm space and introduce novel simplifications and
improvements that are crucial for optimization via gradient-descent. We use LTN
as the framework for our experiments, but the conclusions of our work apply to
any similar NeSy framework. Our findings, both formal and empirical, show that
the proposed configuration outperforms the state-of-the-art and that each of
our modifications is essential in achieving these results.
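As a rough illustration of what grounding formulas in the logarithm space involves (the operators below are common choices for this setting, not necessarily the exact configuration proposed in the paper), conjunction under the product t-norm becomes a sum of log-truths, and an existential quantifier can be aggregated with a numerically stable log-sum-exp style smooth maximum:

```python
import numpy as np

# Minimal sketch (assumed operators, not the paper's exact configuration):
# truth values t in (0, 1] are represented by their logarithms, which keeps
# products of many small truths numerically stable.

def log_and(log_a, log_b):
    # Product t-norm: AND(a, b) = a * b  ->  log a + log b.
    return log_a + log_b

def log_exists(log_truths, temperature=1.0):
    # Smooth maximum over a domain (existential quantifier) via a
    # log-sum-exp style aggregator; approaches max() as temperature -> 0.
    x = np.asarray(log_truths)
    return temperature * np.log(np.mean(np.exp(x / temperature)))

def loss_from_axiom(log_truth):
    # Maximising the truth of an axiom = minimising its negative log-truth.
    return -log_truth

# Toy example: "there exists an x such that P(x) and Q(x)".
log_P = np.log([0.9, 0.2, 0.6])
log_Q = np.log([0.8, 0.1, 0.7])
log_conj = log_and(log_P, log_Q)          # elementwise AND over the domain
print(loss_from_axiom(log_exists(log_conj)))
```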
Related papers
- Component Fourier Neural Operator for Singularly Perturbed Differential Equations [3.9482103923304877]
Solving Singularly Perturbed Differential Equations (SPDEs) poses computational challenges arising from the rapid transitions in their solutions within thin regions.
In this manuscript, we introduce Component Fourier Neural Operator (ComFNO), an innovative operator learning method that builds upon the Fourier Neural Operator (FNO).
Our approach is not limited to FNO and can be applied to other neural network frameworks, such as the Deep Operator Network (DeepONet).
arXiv Detail & Related papers (2024-09-07T09:40:51Z) - The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
Through fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
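A hypothetical sketch of such a fuzzy relaxation (not the LOGICSEG code; the Reichenbach implication and the class names are assumptions) shows how a symbolic rule over predicted probabilities can be turned into a differentiable loss term:

```python
import numpy as np

# Hypothetical illustration: relaxing the rule "cat(x) -> animal(x)" over
# predicted class probabilities so it can be added to a segmentation loss.
# The implication uses the Reichenbach form I(a, b) = 1 - a + a*b, a common
# differentiable choice.

def implies(a, b):
    return 1.0 - a + a * b

p_cat = np.array([0.9, 0.1, 0.4])      # per-pixel probability of "cat"
p_animal = np.array([0.95, 0.3, 0.2])  # per-pixel probability of "animal"

rule_truth = implies(p_cat, p_animal)   # truth of the rule at each pixel
logic_loss = np.mean(1.0 - rule_truth)  # penalise violations of the rule
print(logic_loss)
```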
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - PROTOtypical Logic Tensor Networks (PROTO-LTN) for Zero Shot Learning [2.236663830879273]
Logic Tensor Networks (LTNs) are neuro-symbolic systems based on a differentiable, first-order logic grounded into a deep neural network.
We focus here on the subsumption, or isOfClass, predicate, which is fundamental to encoding most semantic image interpretation tasks.
We propose a common isOfClass predicate, whose level of truth is a function of the distance between an object embedding and the corresponding class prototype.
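A minimal sketch of that idea, assuming a Gaussian-style truth function (the exact form used in PROTO-LTN may differ):

```python
import numpy as np

# Sketch of a distance-based isOfClass predicate: the truth that an object
# belongs to a class decays with the distance between its embedding and the
# class prototype. Names and the truth function are assumptions.

def is_of_class(embedding, prototype, scale=1.0):
    dist = np.linalg.norm(embedding - prototype)
    return np.exp(-scale * dist ** 2)   # truth value in (0, 1]

obj = np.array([0.2, 0.8])
proto_cat = np.array([0.25, 0.75])
proto_dog = np.array([0.9, 0.1])
print(is_of_class(obj, proto_cat), is_of_class(obj, proto_dog))
```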
arXiv Detail & Related papers (2022-06-26T18:34:07Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently introduced Logical Neural Networks (LNN).
Compared to others, LNNs offer strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z) - Reinforcement Learning with External Knowledge by using Logical Neural
Networks [67.46162586940905]
A recent neuro-symbolic framework called the Logical Neural Networks (LNNs) can simultaneously provide key-properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
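A minimal sketch of the idea of learning logical operations as neural modules (module sizes, architecture, and regulariser below are assumptions, not LINN's actual design):

```python
import torch
import torch.nn as nn

# Sketch: logical operations as small learned modules over embeddings.
dim = 16
NOT = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
AND = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

def conj(a, b):
    # AND is applied to the concatenation of the two operand embeddings.
    return AND(torch.cat([a, b], dim=-1))

a, b = torch.randn(1, dim), torch.randn(1, dim)
# Logical regularisers (e.g. NOT(NOT(a)) should stay close to a) would be
# added to the task loss during training.
double_neg_penalty = torch.mean((NOT(NOT(a)) - a) ** 2)
print(conj(a, b).shape, double_neg_penalty.item())
```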
arXiv Detail & Related papers (2020-08-20T14:53:23Z) - iNALU: Improved Neural Arithmetic Logic Unit [2.331160520377439]
The recently proposed Neural Arithmetic Logic Unit (NALU) is a neural architecture whose units explicitly represent mathematical relationships, enabling the network to learn operations such as summation, subtraction, or multiplication.
We show that our model solves stability issues and outperforms the original NALU model in terms of arithmetic precision and convergence.
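For context, the original NALU cell that iNALU builds on is usually written as a gated combination of an additive path and a multiplicative (log-space) path; the sketch below follows that standard formulation, with parameter shapes chosen purely for illustration:

```python
import numpy as np

# Sketch of the original NALU cell (standard formulation; shapes assumed).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nalu(x, W_hat, M_hat, G, eps=1e-7):
    W = np.tanh(W_hat) * sigmoid(M_hat)      # weights biased towards {-1, 0, 1}
    a = x @ W                                 # additive path (sum / subtraction)
    m = np.exp(np.log(np.abs(x) + eps) @ W)   # multiplicative path (product / division)
    g = sigmoid(x @ G)                        # learned gate between the two paths
    return g * a + (1 - g) * m

x = np.array([[2.0, 3.0]])
W_hat, M_hat, G = (rng.normal(size=(2, 1)) for _ in range(3))
print(nalu(x, W_hat, M_hat, G))
```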
arXiv Detail & Related papers (2020-03-17T10:37:22Z) - Analyzing Differentiable Fuzzy Logic Operators [3.4806267677524896]
We study how a large collection of logical operators from the fuzzy logic literature behave in a differentiable learning setting.
We show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice.
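A toy comparison of common differentiable fuzzy conjunctions (the values are illustrative, not results from the paper) makes the differences concrete; note how the Lukasiewicz t-norm yields zero gradient when both truth values are small:

```python
import numpy as np

# Three common fuzzy conjunctions studied in a differentiable setting.
def t_product(a, b):      return a * b
def t_godel(a, b):        return np.minimum(a, b)
def t_lukasiewicz(a, b):  return np.maximum(0.0, a + b - 1.0)

a, b, h = 0.3, 0.4, 1e-5
for name, t in [("product", t_product), ("Goedel", t_godel), ("Lukasiewicz", t_lukasiewicz)]:
    # Gradient w.r.t. a via a central finite difference.
    grad_a = (t(a + h, b) - t(a - h, b)) / (2 * h)
    print(f"{name}: truth={t(a, b):.3f}, d/da={grad_a:.3f}")
```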
arXiv Detail & Related papers (2020-02-14T16:11:36Z)