Emulating the Human Mind: A Neural-symbolic Link Prediction Model with
Fast and Slow Reasoning and Filtered Rules
- URL: http://arxiv.org/abs/2310.13996v1
- Date: Sat, 21 Oct 2023 12:45:11 GMT
- Title: Emulating the Human Mind: A Neural-symbolic Link Prediction Model with
Fast and Slow Reasoning and Filtered Rules
- Authors: Mohammad Hossein Khojasteh, Najmeh Torabian, Ali Farjami, Saeid
Hosseini, Behrouz Minaei-Bidgoli
- Abstract summary: We introduce a novel Neural-Symbolic model named FaSt-FLiP.
Our objective is to combine a logical and neural model for enhanced link prediction.
- Score: 4.979279893937017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Link prediction is an important task in addressing the incompleteness problem
of knowledge graphs (KG). Previous link prediction models suffer from issues
related to either performance or explanatory capability. Furthermore, models
that are capable of generating explanations often struggle with erroneous
paths or reasoning that nevertheless lead to the correct answer. To address
these challenges, we introduce a novel Neural-Symbolic model named FaSt-FLiP
(Fast and Slow Thinking with Filtered rules for Link Prediction), inspired by two
distinct aspects of human cognition: "commonsense reasoning" and "thinking,
fast and slow." Our objective is to combine a logical and neural model for
enhanced link prediction. To tackle the challenge of dealing with incorrect
paths or rules generated by the logical model, we propose a semi-supervised
method to convert rules into sentences. These sentences are then assessed with
an NLI (Natural Language Inference) model, and incorrect rules are removed.
Our approach to combining logical and neural models involves
first obtaining answers from both the logical and neural models. These answers
are subsequently unified using an Inference Engine module, which has been
realized both as an algorithmic implementation and as a novel neural model
architecture. To validate the efficacy of our model, we conducted a series of
experiments. The results demonstrate the superior performance of our model in
both link prediction metrics and the generation of more reliable explanations.
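As a rough illustration of the pipeline described in the abstract, the sketch below wires together the two steps the authors emphasize: NLI-based filtering of verbalized rules, and unification of answers from the logical and neural models. The rule-to-sentence template, the entailment threshold, the weighted-average combination, and all function names are illustrative assumptions, not the FaSt-FLiP implementation (which uses a semi-supervised verbalization method and also a neural variant of the Inference Engine).

```python
# Minimal sketch of NLI-based rule filtering plus answer unification,
# under the assumptions stated above (not the authors' code).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the hypothesis is entailed by the premise."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: 0=CONTRADICTION, 1=NEUTRAL, 2=ENTAILMENT
    return probs[2].item()

def rule_to_sentence(rule: dict) -> str:
    """Naively verbalize a rule {body: [(h, r, t), ...], head: (h, r, t)}.
    A stand-in for the paper's semi-supervised rule-to-sentence conversion."""
    body = " and ".join(f"{h} {r} {t}" for h, r, t in rule["body"])
    h, r, t = rule["head"]
    return f"If {body}, then {h} {r} {t}."

def filter_rules(rules, premise, threshold=0.5):
    """Drop rules whose verbalization the NLI model does not judge entailed."""
    return [r for r in rules
            if entailment_prob(premise, rule_to_sentence(r)) >= threshold]

def unify_answers(logical_scores, neural_scores, alpha=0.5):
    """Algorithmic stand-in for the Inference Engine: merge per-candidate
    scores from the logical and neural models into a single ranking."""
    candidates = set(logical_scores) | set(neural_scores)
    merged = {c: alpha * neural_scores.get(c, 0.0)
                 + (1 - alpha) * logical_scores.get(c, 0.0)
              for c in candidates}
    return sorted(merged, key=merged.get, reverse=True)
```

In the paper the Inference Engine also has a learned neural realization; the weighted average here only illustrates the algorithmic combination step.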
Related papers
- Longer Fixations, More Computation: Gaze-Guided Recurrent Neural
Networks [12.57650361978445]
Humans read texts at a varying pace, while machine learning models treat each token in the same way.
In this paper, we convert this intuition into a set of novel models with fixation-guided parallel RNNs or layers.
We find that, interestingly, the fixation durations predicted by neural networks bear some resemblance to human fixations.
arXiv Detail & Related papers (2023-10-31T21:32:11Z)
- A Self-Adaptive Penalty Method for Integrating Prior Knowledge Constraints into Neural ODEs [3.072340427031969]
We propose a self-adaptive penalty algorithm for Neural ODEs to enable modelling of constrained natural systems.
We validate the proposed approach by modelling three natural systems with prior knowledge constraints.
The self-adaptive penalty approach provides more accurate and robust models with reliable and meaningful predictions.
arXiv Detail & Related papers (2023-07-27T15:32:02Z)
- Faithfulness Tests for Natural Language Explanations [87.01093277918599]
Explanations of neural models aim to reveal a model's decision-making process for its predictions.
Recent work shows that current explanation methods, such as saliency maps or counterfactuals, can be misleading.
This work explores the challenging question of evaluating the faithfulness of natural language explanations.
arXiv Detail & Related papers (2023-05-29T11:40:37Z)
- NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semi-dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z)
- Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z)
- Neural Unsupervised Semantic Role Labeling [48.69930912510414]
We present the first neural unsupervised model for semantic role labeling.
We decompose the task as two argument related subtasks, identification and clustering.
Experiments on the CoNLL-2009 English dataset demonstrate that our model outperforms the previous state-of-the-art baseline.
arXiv Detail & Related papers (2021-04-19T04:50:16Z)
- Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision [44.32874972577682]
We investigate the extent to which neural models can reason about natural language rationales that explain model predictions.
We use pre-trained language models, neural knowledge models, and distant supervision from related tasks.
Our model shows promise at generating post-hoc rationales explaining why an inference is more or less likely given the additional information.
arXiv Detail & Related papers (2020-12-14T23:50:20Z)
- Exploring End-to-End Differentiable Natural Logic Modeling [21.994060519995855]
We explore end-to-end trained differentiable models that integrate natural logic with neural networks.
The proposed model adapts module networks to model natural logic operations and is enhanced with a memory component that captures contextual information.
arXiv Detail & Related papers (2020-11-08T18:18:15Z)
- Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
arXiv Detail & Related papers (2020-10-15T16:57:27Z)
- Multi-Step Inference for Reasoning Over Paragraphs [95.91527524872832]
Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives.
We present a compositional model reminiscent of neural module networks that can perform chained logical reasoning.
arXiv Detail & Related papers (2020-04-06T21:12:53Z)
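The "Understanding Neural Abstractive Summarization Models via Uncertainty" entry above analyzes the entropy of a model's token-level predictions. As a minimal illustration of that quantity (not the paper's code), the snippet below computes per-step entropy from decoder logits; the tensor shapes and vocabulary size are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def token_entropies(logits: torch.Tensor) -> torch.Tensor:
    """Entropy (in nats) of the predictive distribution at each decoding step.

    logits: shape (seq_len, vocab_size), e.g. the decoder outputs of a seq2seq
    summarizer. Higher entropy marks steps where the model is less certain
    about the next token.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)

# Random logits stand in for real decoder outputs (12 steps, BART-sized vocab).
print(token_entropies(torch.randn(12, 50265)))
```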
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.