Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning
- URL: http://arxiv.org/abs/2208.06838v2
- Date: Mon, 25 Sep 2023 10:26:35 GMT
- Title: Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning
- Authors: Haoyuan He, Wang-Zhou Dai, Ming Li
- Abstract summary: Differentiable operators can introduce a significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning.
We propose a simple yet effective method to transform the biased loss functions into a Reduced Implication-bias Logic Loss (RILL).
Empirical study shows that RILL achieves significant improvements over the biased logic loss functions.
- Score: 11.343715006460577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in Neuro-Symbolic systems. However, some differentiable operators can introduce a significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning. In this paper, we reveal that this bias, named Implication Bias, is common in loss functions derived from fuzzy logic operators. Furthermore, we propose a simple yet effective method to transform the biased loss functions into a Reduced Implication-bias Logic Loss (RILL) to address the above problem. Empirical study shows that RILL achieves significant improvements compared with the biased logic loss functions, especially when the knowledge base is incomplete, and remains more robust than the compared methods when labelled data is insufficient.
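To make the implication bias concrete, here is a minimal PyTorch sketch (our illustration using the Reichenbach implication, not the paper's exact RILL construction). For a rule p -> q with predicted truth degrees a and b, the logic loss 1 - I(a, b) can be driven down by falsifying the antecedent instead of satisfying the consequent:

```python
# Minimal sketch (ours, not the paper's exact formulation) of implication
# bias. Reichenbach implication: I(a, b) = 1 - a + a*b, so the logic loss
# for the rule p -> q is L = 1 - I(a, b) = a * (1 - b).
import torch

a = torch.tensor(0.9, requires_grad=True)  # truth degree of antecedent p
b = torch.tensor(0.2, requires_grad=True)  # truth degree of consequent q

loss = a * (1.0 - b)   # 1 - I(a, b)
loss.backward()
print(a.grad, b.grad)  # tensor(0.8000) tensor(-0.9000)

# dL/da = 1 - b and dL/db = -a: gradient descent pushes the antecedent
# toward false almost as strongly as it pushes the consequent toward
# true, so the model can "satisfy" p -> q by never predicting p. One
# crude way to reduce the bias (our illustration, not necessarily the
# paper's RILL transform) is to block the gradient through the antecedent:
a2 = torch.tensor(0.9, requires_grad=True)
b2 = torch.tensor(0.2, requires_grad=True)
reduced = a2.detach() * (1.0 - b2)  # only the consequent is updated
reduced.backward()
print(a2.grad, b2.grad)  # None tensor(-0.9000)
```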
Related papers
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
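For reference, the unhinged loss is the linear margin loss from the label-noise literature; a minimal sketch (standard definition, not code from the paper):

```python
# Unhinged loss: l(y, f(x)) = 1 - y * f(x) for labels y in {-1, +1}.
# Its gradient in the score is the constant -y, which is what makes
# closed-form training dynamics tractable.
import torch

def unhinged_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean unhinged loss over a batch; labels are +/-1."""
    return (1.0 - labels * scores).mean()

scores = torch.tensor([0.3, -1.2, 0.8], requires_grad=True)
labels = torch.tensor([1.0, -1.0, 1.0])
unhinged_loss(scores, labels).backward()
print(scores.grad)  # -labels / batch_size: constant per-example gradients
```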
arXiv Detail & Related papers (2023-12-13T02:11:07Z)
- Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [85.12429517510311]
LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
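A toy sketch of that insight (ours, with hypothetical class names and rules, not LogicDiag's actual machinery): a mutual-exclusion rule flags pseudo labels that assert two incompatible classes at once, and the flagged labels can then be corrected or down-weighted instead of trusted blindly.

```python
# Toy conflict check (ours, not LogicDiag's implementation): a pseudo
# label that asserts two mutually exclusive classes violates a symbolic
# rule, and the violation itself is a learning signal.
import numpy as np

CLASSES = ["road", "water", "sky", "floor"]          # hypothetical classes
EXCLUSIVE = {("road", "water"), ("sky", "floor")}    # hypothetical rules

def conflicting(pseudo_probs: np.ndarray, threshold: float = 0.5) -> bool:
    """True if two mutually exclusive classes both exceed the threshold."""
    active = {c for c, p in zip(CLASSES, pseudo_probs) if p > threshold}
    return any({x, y} <= active for x, y in EXCLUSIVE)

print(conflicting(np.array([0.7, 0.6, 0.1, 0.0])))  # True: road and water
print(conflicting(np.array([0.9, 0.1, 0.2, 0.0])))  # False
```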
arXiv Detail & Related papers (2023-08-24T06:50:07Z)
- Refining neural network predictions using background knowledge [68.35246878394702]
We show that logical background knowledge can be used in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
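A generic sketch of this kind of refinement (our assumption about the mechanism, not the paper's actual refinement functions): take gradient steps that trade off staying close to the original prediction against a differentiable measure of constraint violation.

```python
# Generic refinement sketch (ours): nudge a prediction y toward
# satisfying a constraint while keeping it close to the original y0.
import torch

def violation(y: torch.Tensor) -> torch.Tensor:
    """Hypothetical constraint: the outputs should sum to 1."""
    return (y.sum() - 1.0) ** 2

def refine(y0: torch.Tensor, lam: float = 10.0, steps: int = 500,
           lr: float = 0.01) -> torch.Tensor:
    y = y0.clone().requires_grad_(True)
    opt = torch.optim.SGD([y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((y - y0) ** 2).sum() + lam * violation(y)
        loss.backward()
        opt.step()
    return y.detach()

y0 = torch.tensor([0.9, 0.4])  # original, constraint-violating prediction
print(refine(y0))              # ~[0.757, 0.257]: near y0, sums to ~1.01
```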
arXiv Detail & Related papers (2022-06-10T10:17:59Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently introduced logical neural networks (LNN).
Compared to alternatives, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
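For intuition, here is a weighted real-valued conjunction in the spirit of LNNs (our simplification; the published formulation constrains the weights and bias so the operator stays logically sound):

```python
# Weighted fuzzy conjunction in the spirit of LNNs (our simplification):
# AND(x) = clamp(beta - sum_i w_i * (1 - x_i), 0, 1).
# With beta = 1 and unit weights this is the Lukasiewicz t-norm, which
# agrees exactly with classical AND on Boolean inputs -- the "strong
# connection to classical Boolean logic" mentioned above.
import torch

def weighted_and(x: torch.Tensor, w: torch.Tensor,
                 beta: float = 1.0) -> torch.Tensor:
    return torch.clamp(beta - (w * (1.0 - x)).sum(), 0.0, 1.0)

w = torch.ones(2)
print(weighted_and(torch.tensor([1.0, 1.0]), w))  # tensor(1.): true AND true
print(weighted_and(torch.tensor([1.0, 0.0]), w))  # tensor(0.): true AND false
```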
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study [17.998891912502092]
We study the question of how best to relax logical expressions that represent labeled examples and knowledge about a problem.
We present theoretical and empirical criteria for characterizing which relaxation would perform best in various scenarios.
arXiv Detail & Related papers (2021-07-28T21:16:58Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
- Analyzing Differentiable Fuzzy Implications [3.4806267677524896]
We investigate how implications from the fuzzy logic literature behave in a differentiable setting.
It turns out that various fuzzy implications, including some of the most well-known, are highly unsuitable for use in a differentiable learning setting.
We introduce a new family of fuzzy implications (called sigmoidal implications) to tackle this phenomenon.
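A sketch of the construction as we read it (the exact normalization in the paper may differ): pass a base implication through a steep sigmoid, rescaled so that the endpoint values 0 and 1 are preserved.

```python
# Sigmoidal implication sketch (our reading; the paper's normalization
# may differ): squash a base implication through a steepness-s sigmoid
# rescaled so that 0 maps to 0 and 1 maps to 1, softening the saturated
# corners that give the base implication vanishing or one-sided gradients.
import torch

def reichenbach(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return 1.0 - a + a * b

def sigmoidal(impl, a, b, s: float = 9.0) -> torch.Tensor:
    i = impl(a, b)
    lo = torch.sigmoid(torch.tensor(-s / 2))  # rescaling constants so that
    hi = torch.sigmoid(torch.tensor(s / 2))   # f(0) = 0 and f(1) = 1
    return (torch.sigmoid(s * (i - 0.5)) - lo) / (hi - lo)

a, b = torch.tensor(0.9), torch.tensor(0.2)
print(reichenbach(a, b), sigmoidal(reichenbach, a, b))
```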
arXiv Detail & Related papers (2020-06-04T15:34:37Z)
- Analyzing Differentiable Fuzzy Logic Operators [3.4806267677524896]
We study how a large collection of logical operators from the fuzzy logic literature behave in a differentiable learning setting.
We show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice.
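For concreteness, the three conjunctions most often compared in this literature, with their standard textbook definitions:

```python
# The three classic t-norms used as differentiable conjunctions
# (standard definitions, not code from the paper):
import torch

def godel_and(a, b):        # min t-norm: idempotent, but sparse gradients
    return torch.minimum(a, b)

def product_and(a, b):      # product t-norm: smooth, nonzero gradients
    return a * b

def lukasiewicz_and(a, b):  # Lukasiewicz: gradient vanishes once a + b <= 1
    return torch.clamp(a + b - 1.0, min=0.0)

a, b = torch.tensor(0.7), torch.tensor(0.6)
for t in (godel_and, product_and, lukasiewicz_and):
    print(t.__name__, t(a, b))  # 0.6000, 0.4200, 0.3000
```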
arXiv Detail & Related papers (2020-02-14T16:11:36Z)
- T-Norms Driven Loss Functions for Machine Learning [19.569025323453257]
A class of neural-symbolic approaches is based on First-Order Logic to represent prior knowledge.
This paper shows that the loss function expressing these neural-symbolic learning tasks can be unambiguously determined.
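To see why the loss drops out of the logic (our sketch of the standard generator argument, not the paper's code): under the product t-norm, whose additive generator is -log, a large conjunction such as a grounded universally quantified formula maps to a sum of negative logs, i.e. a cross-entropy-style loss.

```python
# Sketch (ours) of a t-norm-determined loss: the product t-norm has
# additive generator g(x) = -log(x), so the truth of the conjunction
# "p(x_1) and ... and p(x_n)" becomes the loss sum_i -log(p_i) -- the
# familiar negative log-likelihood.
import torch

def product_tnorm_loss(truth_degrees: torch.Tensor) -> torch.Tensor:
    """Loss of a grounded conjunction under the product t-norm generator."""
    return -torch.log(truth_degrees.clamp_min(1e-8)).sum()

p = torch.tensor([0.9, 0.8, 0.99])  # predicted truth degrees of p(x_i)
print(product_tnorm_loss(p))        # -log(0.9) - log(0.8) - log(0.99)
```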
arXiv Detail & Related papers (2019-07-26T10:22:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.