Analyzing Differentiable Fuzzy Implications
- URL: http://arxiv.org/abs/2006.03472v1
- Date: Thu, 4 Jun 2020 15:34:37 GMT
- Title: Analyzing Differentiable Fuzzy Implications
- Authors: Emile van Krieken, Erman Acar, Frank van Harmelen
- Abstract summary: We investigate how implications from the fuzzy logic literature behave in a differentiable setting.
It turns out that various fuzzy implications, including some of the most well-known, are highly unsuitable for use in a differentiable learning setting.
We introduce a new family of fuzzy implications (called sigmoidal implications) to tackle this phenomenon.
- Score: 3.4806267677524896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining symbolic and neural approaches has gained considerable attention in
the AI community, as it is often argued that the strengths and weaknesses of
these approaches are complementary. One such trend in the literature is weakly
supervised learning techniques that employ operators from fuzzy logics. In
particular, they use prior background knowledge described in such logics to
help the training of a neural network from unlabeled and noisy data. By
interpreting logical symbols using neural networks (or grounding them), this
background knowledge can be added to regular loss functions, hence making
reasoning a part of learning.
In this paper, we investigate how implications from the fuzzy logic
literature behave in a differentiable setting. In such a setting, we analyze
the differences between the formal properties of these fuzzy implications. It
turns out that various fuzzy implications, including some of the most
well-known, are highly unsuitable for use in a differentiable learning setting.
A further finding shows a strong imbalance between gradients driven by the
antecedent and the consequent of the implication. Furthermore, we introduce a
new family of fuzzy implications (called sigmoidal implications) to tackle this
phenomenon. Finally, we empirically show that it is possible to use
Differentiable Fuzzy Logics for semi-supervised learning, and show that
sigmoidal implications outperform other choices of fuzzy implications.
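The gradient imbalance between antecedent and consequent can be made concrete with the Reichenbach implication I(a, c) = 1 - a + a·c, whose partials are c - 1 and a. A minimal sketch follows; the `sigmoidal_reichenbach` variant is a simplified illustration of squashing an implication through a renormalized sigmoid, and its exact parameterization may differ from the one defined in the paper.

```python
import math

def reichenbach(a, c):
    """Reichenbach implication: I(a, c) = 1 - a + a*c, for a, c in [0, 1]."""
    return 1.0 - a + a * c

def reichenbach_grads(a, c):
    """Analytic partials: dI/da = c - 1 and dI/dc = a.
    The imbalance: when the antecedent a is near 0, the consequent
    receives almost no gradient, while the antecedent always does."""
    return c - 1.0, a

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoidal_reichenbach(a, c, s=9.0):
    """Sigmoidal variant (simplified sketch, not the paper's exact form):
    squash I through a sigmoid scaled by s, renormalized so the output
    still spans the full [0, 1] range."""
    i = reichenbach(a, c)
    lo, hi = sigmoid(-0.5 * s), sigmoid(0.5 * s)
    return (sigmoid(s * (i - 0.5)) - lo) / (hi - lo)
```

The renormalization keeps the endpoints fixed (a fully violated implication still maps to 0, a fully satisfied one to 1) while the sigmoid reshapes the gradient landscape in between.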
Related papers
- Three Pathways to Neurosymbolic Reinforcement Learning with Interpretable Model and Policy Networks [4.242435932138821]
We study a class of neural networks that build interpretable semantics directly into their architecture.
We reveal and highlight both the potential and the essential difficulties of combining logic, simulation, and learning.
arXiv Detail & Related papers (2024-02-07T23:00:24Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parsing framework that integrates neural inductive learning and logic reasoning over both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [85.12429517510311]
LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
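As a toy illustration of the conflict idea (the rule set and labels here are hypothetical, not LogicDiag's actual rules): hierarchy rules such as "a predicted part implies its whole" can be checked symbolically against a set of pseudo labels, and any violation becomes a learning signal.

```python
# Hypothetical part -> whole rules: if a part is predicted, its whole must be too.
RULES = [("wheel", "car"), ("sleeve", "shirt")]

def find_conflicts(pseudo_labels):
    """Return the (part, whole) rules violated by a pseudo-label set."""
    present = set(pseudo_labels)
    return [(p, w) for p, w in RULES if p in present and w not in present]

# A detected conflict indicates that either the part label is spurious or
# the whole label is missing, so it can be diagnosed and repaired rather
# than silently trained on.
```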
arXiv Detail & Related papers (2023-08-24T06:50:07Z)
- Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces [20.260546238369205]
We propose a framework that combines the pattern recognition abilities of neural networks with symbolic reasoning and background knowledge.
We take inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge.
We test this on visual analogy problems from Raven's Progressive Matrices and achieve accuracy competitive with human performance.
arXiv Detail & Related papers (2022-09-19T04:03:20Z)
- Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning [11.343715006460577]
Differentiable operators could bring a significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning.
We propose a simple yet effective method that transforms the biased loss functions into a Reduced Implication-bias Logic Loss (RILL).
Empirical study shows that RILL can achieve significant improvements compared with the biased logic loss functions.
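The implication bias being reduced can be reproduced with a toy gradient descent on a single grounded Reichenbach implication (a hypothetical minimal example, not the paper's setup): minimizing the loss 1 - I(a, c) drives the antecedent to 0 (vacuous truth) rather than raising the consequent to 1.

```python
def reichenbach(a, c):
    """Reichenbach implication: I(a, c) = 1 - a + a*c."""
    return 1.0 - a + a * c

def clamp01(x):
    return min(max(x, 0.0), 1.0)

# Gradient descent on the loss 1 - I(a, c).
# d(1 - I)/da = 1 - c   (always >= 0: pushes the antecedent toward 0)
# d(1 - I)/dc = -a      (vanishes as soon as the antecedent reaches 0)
a, c, lr = 0.5, 0.2, 0.1
for _ in range(50):
    da, dc = 1.0 - c, -a
    a = clamp01(a - lr * da)
    c = clamp01(c - lr * dc)
# The antecedent collapses to 0 while the consequent stalls well below 1:
# the implication is "satisfied" vacuously, which is the biased outcome.
```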
arXiv Detail & Related papers (2022-08-14T11:57:46Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Analyzing Differentiable Fuzzy Logic Operators [3.4806267677524896]
We study how a large collection of logical operators from the fuzzy logic literature behave in a differentiable learning setting.
We show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice.
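For context, three standard t-norms (fuzzy conjunctions) from that literature behave very differently under differentiation: the Gödel minimum routes gradient to only one argument at a time, the product always passes gradient to both, and the Łukasiewicz t-norm has a zero-gradient region. A brief sketch of the operators themselves:

```python
def t_godel(a, c):
    """Godel t-norm: min(a, c). Only the smaller argument gets gradient."""
    return min(a, c)

def t_product(a, c):
    """Product t-norm: a * c. Both arguments always receive gradient."""
    return a * c

def t_lukasiewicz(a, c):
    """Lukasiewicz t-norm: max(0, a + c - 1).
    Gradient is zero whenever a + c <= 1."""
    return max(0.0, a + c - 1.0)
```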
arXiv Detail & Related papers (2020-02-14T16:11:36Z)
- T-Norms Driven Loss Functions for Machine Learning [19.569025323453257]
A class of neural-symbolic approaches is based on First-Order Logic to represent prior knowledge.
This paper shows that the loss function expressing these neural-symbolic learning tasks can be unambiguously determined.
arXiv Detail & Related papers (2019-07-26T10:22:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.