Continual Reasoning: Non-Monotonic Reasoning in Neurosymbolic AI using
Continual Learning
- URL: http://arxiv.org/abs/2305.02171v1
- Date: Wed, 3 May 2023 15:11:34 GMT
- Title: Continual Reasoning: Non-Monotonic Reasoning in Neurosymbolic AI using
Continual Learning
- Authors: Sofoklis Kyriakopoulos, Artur S. d'Avila Garcez
- Abstract summary: We show that by combining a neural-symbolic system with methods from continual learning, Logic Tensor Networks (LTNs) can obtain a higher level of accuracy.
Continual learning is added to LTNs by adopting a curriculum of learning from knowledge and data with recall.
Results indicate significant improvement on the non-monotonic reasoning problem.
- Score: 2.912595438026074
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite the extensive investment and impressive recent progress at reasoning
by similarity, deep learning continues to struggle with more complex forms of
reasoning such as non-monotonic and commonsense reasoning. Non-monotonicity is
a property of non-classical reasoning typically seen in commonsense reasoning,
whereby a reasoning system is allowed (differently from classical logic) to
jump to conclusions which may be retracted later, when new information becomes
available. Neural-symbolic systems such as Logic Tensor Networks (LTN) have
been shown to be effective at enabling deep neural networks to achieve
reasoning capabilities. In this paper, we show that by combining a
neural-symbolic system with methods from continual learning, LTN can obtain a
higher level of accuracy when addressing non-monotonic reasoning tasks.
Continual learning is added to LTNs by adopting a curriculum of learning from
knowledge and data with recall. We call this process Continual Reasoning, a new
methodology for the application of neural-symbolic systems to reasoning tasks.
Continual Reasoning is applied to a prototypical non-monotonic reasoning
problem as well as other reasoning examples. Experimentation is conducted to
compare and analyze the effects that different curriculum choices may have on
overall learning and reasoning results. Results indicate significant
improvement on the prototypical non-monotonic reasoning problem and a promising
outlook for the proposed approach on statistical relational learning examples.
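The abstract's notion of non-monotonicity (jumping to a default conclusion and retracting it when new information arrives) can be made concrete with the classic birds-fly example. The following is a minimal sketch of the concept only; it is not the paper's LTN-based method, and the `DefaultKB` class and its rule format are illustrative inventions.

```python
# Minimal sketch of non-monotonic (defeasible) reasoning: a conclusion
# drawn from a default rule is retracted once an exception is learned.
# Illustration of the concept only, not the paper's LTN implementation.

class DefaultKB:
    def __init__(self):
        self.facts = set()
        # default rules as (premise, conclusion, exception) triples,
        # e.g. "birds fly, unless they are penguins"
        self.defaults = [("bird", "flies", "penguin")]

    def tell(self, fact):
        self.facts.add(fact)

    def entails(self, query):
        if query in self.facts:
            return True
        # apply a default only if its exception is not known to hold
        for premise, conclusion, exception in self.defaults:
            if (conclusion == query and premise in self.facts
                    and exception not in self.facts):
                return True
        return False

kb = DefaultKB()
kb.tell("bird")
print(kb.entails("flies"))   # True: jump to the default conclusion
kb.tell("penguin")
print(kb.entails("flies"))   # False: new information retracts it
```

Classical logic is monotonic (adding facts never removes conclusions); here the second `tell` withdraws an earlier conclusion, which is exactly the behavior the paper targets.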
Related papers
- Neuro-symbolic Learning Yielding Logical Constraints [22.649543443988712]
End-to-end learning of neuro-symbolic systems is still an unsolved challenge.
We propose a framework that fuses the network, symbol grounding, and logical constraint synthesis into an end-to-end learning process.
arXiv Detail & Related papers (2024-10-28T12:18:25Z)
- Neural Probabilistic Logic Learning for Knowledge Graph Reasoning [10.473897846826956]
This paper aims to design a reasoning framework that achieves accurate reasoning on knowledge graphs.
We introduce a scoring module that effectively enhances the expressive power of embedding networks.
We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference.
arXiv Detail & Related papers (2024-07-04T07:45:46Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
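The idea of symbolic results refining neural predictions can be sketched very simply: rule out logically impossible classes and renormalize. This is a hedged illustration of the general idea only, not GBPGR's actual bi-level probabilistic framework; the class names and the `refine` helper are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's GBPGR framework): a symbolic
# reasoner's verdict masks out impossible classes in a neural model's
# probability distribution, and the rest is renormalized.

def refine(probs, allowed):
    """Zero out classes ruled out by symbolic reasoning, renormalize."""
    mask = np.zeros_like(probs)
    mask[list(allowed)] = 1.0
    refined = probs * mask
    return refined / refined.sum()

# neural prediction over hypothetical classes [cat, dog, car]
probs = np.array([0.5, 0.3, 0.2])
# suppose the symbolic reasoner concludes the object is a vehicle
allowed = {2}
print(refine(probs, allowed))  # [0. 0. 1.]
```

In the actual paper the interaction is bi-level and probabilistic rather than a hard mask, but the correction direction (symbolic reasoning constraining neural outputs) is the same.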
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [85.12429517510311]
LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
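The key insight here, that rule violations inside pseudo labels are usable learning signals, can be sketched as a conflict check. This is a hedged illustration only; the rule below ("rider implies a bicycle or motorcycle nearby") and the `find_conflicts` helper are hypothetical and not taken from the LogicDiag paper.

```python
# Illustrative sketch of the LogicDiag insight: pseudo-labelled regions
# that violate a symbolic rule are flagged, and those conflicts can be
# turned into learning signals. Hypothetical rules, not the paper's.

def find_conflicts(pseudo_labels, rules):
    """Return indices of pseudo-labelled regions that violate a rule.

    pseudo_labels: list of sets of predicted labels per region
    rules: list of (if_label, then_any_of) implications
    """
    conflicts = []
    for i, labels in enumerate(pseudo_labels):
        for if_label, then_any_of in rules:
            if if_label in labels and not (labels & then_any_of):
                conflicts.append(i)
                break
    return conflicts

rules = [("rider", {"bicycle", "motorcycle"})]
pseudo = [{"rider", "bicycle"}, {"rider", "road"}, {"car", "road"}]
print(find_conflicts(pseudo, rules))  # [1]: a rider with no vehicle
```

Region 1 asserts "rider" with no vehicle label, so the implication fails; in the paper such conflicts are diagnosed and used to correct the pseudo labels rather than being silently ignored.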
arXiv Detail & Related papers (2023-08-24T06:50:07Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
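The LINN idea of learning a logical operation as a neural module can be sketched by fitting a tiny network to a truth table. This is an illustration of the concept only, not the LINN architecture; the network shape, seed, and training settings are arbitrary choices.

```python
import numpy as np

# Hedged sketch of learning a logical operation (here AND) as a small
# neural module from truth-table data. Not the paper's LINN model.

rng = np.random.default_rng(0)

# truth table for AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

# one hidden layer with sigmoid activations
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2).ravel()
    # backpropagate a squared-error loss through both sigmoid layers
    d_out = (out - y) * out * (1 - out)
    dW2 = h.T @ d_out[:, None]; db2 = d_out.sum()
    d_h = d_out[:, None] @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h; db1 = d_h.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
print(np.round(pred))  # the trained module reproduces the AND table
```

Once each operator (AND, OR, NOT) is such a module, propositional formulas can be evaluated by wiring the modules together in formula order, which is the compositional reasoning LINN performs across its network.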
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- Relational Neural Machines [19.569025323453257]
This paper presents a novel framework allowing the joint training of the parameters of the learners and of a First-Order Logic based reasoner.
A Relational Neural Machine is able to recover both classical learning results in the case of pure sub-symbolic learning, and Markov Logic Networks.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.