Analyzing Differentiable Fuzzy Logic Operators
- URL: http://arxiv.org/abs/2002.06100v2
- Date: Tue, 24 Aug 2021 08:25:41 GMT
- Title: Analyzing Differentiable Fuzzy Logic Operators
- Authors: Emile van Krieken, Erman Acar, Frank van Harmelen
- Abstract summary: We study how a large collection of logical operators from the fuzzy logic literature behave in a differentiable learning setting.
We show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice.
- Score: 3.4806267677524896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The AI community is increasingly putting its attention towards combining
symbolic and neural approaches, as it is often argued that the strengths and
weaknesses of these approaches are complementary. One recent trend in the
literature is weakly supervised learning techniques that employ operators from
fuzzy logics. In particular, these use prior background knowledge described in
such logics to help the training of a neural network from unlabeled and noisy
data. By interpreting logical symbols using neural networks, this background
knowledge can be added to regular loss functions, hence making reasoning a part
of learning. We study, both formally and empirically, how a large collection of
logical operators from the fuzzy logic literature behave in a differentiable
learning setting. We find that many of these operators, including some of the
most well-known, are highly unsuitable in this setting. A further finding
concerns the treatment of implication in these fuzzy logics, and shows a strong
imbalance between gradients driven by the antecedent and the consequent of the
implication. Furthermore, we introduce a new family of fuzzy implications
(called sigmoidal implications) to tackle this phenomenon. Finally, we
empirically show that it is possible to use Differentiable Fuzzy Logics for
semi-supervised learning, and compare how different operators behave in
practice. We find that, to achieve the largest performance improvement over a
supervised baseline, we have to resort to non-standard combinations of logical
operators which perform well in learning, but no longer satisfy the usual
logical laws.
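To make the abstract's setting concrete, the sketch below interprets the rule a -> c over network outputs with the Reichenbach implication I(a, c) = 1 - a + a*c and uses 1 - I(a, c) as a loss term. This is a minimal illustration, not the paper's implementation: the operator choice, the sharpness parameter s, and the renormalized sigmoid construction are assumptions made for this example, in the spirit of (but not necessarily identical to) the paper's sigmoidal implications.

```python
import torch

def reichenbach(a, c):
    """Reichenbach fuzzy implication: I(a, c) = 1 - a + a * c."""
    return 1.0 - a + a * c

def sigmoidal(I, s=9.0):
    """Squash an implication through a scaled sigmoid, renormalized so that
    I=0 maps to 0 and I=1 maps to 1. An illustrative construction only;
    the paper's exact parameterization may differ."""
    lo = torch.sigmoid(torch.tensor(-0.5 * s))
    hi = torch.sigmoid(torch.tensor(0.5 * s))
    def I_s(a, c):
        return (torch.sigmoid(s * (I(a, c) - 0.5)) - lo) / (hi - lo)
    return I_s

# Truth values for the rule a -> c, as a network might output them.
a = torch.tensor(0.3, requires_grad=True)  # antecedent
c = torch.tensor(0.2, requires_grad=True)  # consequent

# Penalize violations of the rule with the loss 1 - I(a, c).
loss = 1.0 - reichenbach(a, c)
loss.backward()
# dL/da = 1 - c = 0.8 while dL/dc = -a = -0.3: the antecedent receives a
# gradient nearly three times larger, so optimization mostly pushes a down
# rather than c up -- the imbalance described in the abstract.
print(a.grad, c.grad)

a.grad, c.grad = None, None
loss_s = 1.0 - sigmoidal(reichenbach)(a, c)
loss_s.backward()
# The sigmoid reshapes the loss surface: gradients concentrate around
# I ~ 0.5 and fade as the formula becomes clearly satisfied or violated.
print(a.grad, c.grad)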
Related papers
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [85.12429517510311]
LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
arXiv Detail & Related papers (2023-08-24T06:50:07Z)
- Join-Chain Network: A Logical Reasoning View of the Multi-head Attention in Transformer [59.73454783958702]
We propose a symbolic reasoning architecture that chains many join operators together to model output logical expressions.
In particular, we demonstrate that such an ensemble of join-chains can express a broad subset of "tree-structured" first-order logical expressions, named FOET.
We find that the widely used multi-head self-attention module in Transformers can be understood as a special neural operator that implements the union bound of the join operator in probabilistic predicate space.
arXiv Detail & Related papers (2022-10-06T07:39:58Z)
- Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning [11.343715006460577]
Differentiable logic operators can introduce a significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning.
We propose a simple yet effective method to transform the biased loss functions into a Reduced Implication-bias Logic Loss (RILL).
Empirical study shows that RILL can achieve significant improvements compared with the biased logic loss functions.
arXiv Detail & Related papers (2022-08-14T11:57:46Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study [17.998891912502092]
We study the question of how best to relax logical expressions that represent labeled examples and knowledge about a problem.
We present theoretical and empirical criteria for characterizing which relaxation would perform best in various scenarios.
arXiv Detail & Related papers (2021-07-28T21:16:58Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, and NOT as neural modules, and conducts propositional logical reasoning through the network for inference (a rough sketch of this idea follows after this list).
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- Analyzing Differentiable Fuzzy Implications [3.4806267677524896]
We investigate how implications from the fuzzy logic literature behave in a differentiable setting.
It turns out that various fuzzy implications, including some of the most well-known, are highly unsuitable for use in a differentiable learning setting.
We introduce a new family of fuzzy implications (called sigmoidal implications) to tackle this phenomenon.
arXiv Detail & Related papers (2020-06-04T15:34:37Z)
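As a rough illustration of the "logical operations as neural modules" idea mentioned for LINN above, here is a generic sketch under invented assumptions: the embedding size DIM, the MLP module shapes, and deriving OR from NOT and AND via De Morgan are all choices made for this example, not LINN's published architecture.

```python
import torch
import torch.nn as nn

DIM = 64  # embedding size for propositional truth vectors (assumed)

class LogicModule(nn.Module):
    """A small MLP standing in for one logical operation over embeddings."""
    def __init__(self, arity):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(arity * DIM, DIM),
            nn.ReLU(),
            nn.Linear(DIM, DIM),
        )
    def forward(self, *inputs):
        return self.net(torch.cat(inputs, dim=-1))

NOT = LogicModule(arity=1)
AND = LogicModule(arity=2)

def OR(x, y):
    # De Morgan: x OR y = NOT(AND(NOT x, NOT y)); reuses the two modules.
    return NOT(AND(NOT(x), NOT(y)))

x, y = torch.randn(DIM), torch.randn(DIM)
implication = OR(NOT(x), y)  # embedding of the expression "x -> y"
print(implication.shape)     # torch.Size([64])
```

The actual LINN model further regularizes its modules so that they respect logical laws (e.g., double negation); this sketch omits any such constraints.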
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.