RNNCTPs: A Neural Symbolic Reasoning Method Using Dynamic Knowledge
Partitioning Technology
- URL: http://arxiv.org/abs/2204.08810v1
- Date: Tue, 19 Apr 2022 11:18:03 GMT
- Title: RNNCTPs: A Neural Symbolic Reasoning Method Using Dynamic Knowledge
Partitioning Technology
- Authors: Yu-hao Wu and Hou-biao Li
- Abstract summary: We propose a new neural symbolic reasoning method: RNNCTPs.
RNNCTPs improves computational efficiency by re-filtering the knowledge selection of Conditional Theorem Provers.
On all four datasets, the method shows competitive performance against traditional methods on the link prediction task.
- Score: 2.462063246087401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although traditional symbolic reasoning methods are highly interpretable,
their application in knowledge graph link prediction is limited due to their
low computational efficiency. In this paper, we propose a new neural symbolic
reasoning method: RNNCTPs, which improves computational efficiency by
re-filtering the knowledge selection of Conditional Theorem Provers (CTPs), and
is less sensitive to the embedding size parameter. RNNCTPs are divided into
relation selectors and predictors. The relation selectors are trained
efficiently and interpretably, so that the whole model can dynamically generate
knowledge for the inference of the predictor. On all four datasets, the method
shows competitive performance against traditional methods on the link
prediction task, and offers broader applicability across datasets than CTPs.
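The abstract describes a two-stage pipeline: a trainable relation selector re-filters the knowledge base so that a CTP-style predictor only reasons over the dynamically selected relations. The PyTorch snippet below is a minimal, hypothetical sketch of that idea, not the authors' implementation: the class names, the GRU-based scorer, and the DistMult-style one-step predictor are assumptions used only to illustrate how a learned selector can gate the knowledge a predictor sees.

```python
# Hypothetical sketch of a selector/predictor split: all names and scoring
# choices are illustrative assumptions, not the RNNCTPs code.
import torch
import torch.nn as nn


class RelationSelector(nn.Module):
    """Scores every relation for a given query relation (assumed GRU-based)."""

    def __init__(self, num_relations: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(num_relations, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_relations)

    def forward(self, query_rel: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.emb(query_rel).unsqueeze(1))   # (B, 1, dim)
        return torch.sigmoid(self.out(h.squeeze(1)))        # (B, num_relations)


class MaskedTriplePredictor(nn.Module):
    """Stand-in for the CTP-style predictor: scores a (head, tail) query with
    one-step body relations, gated by the selector so that filtered-out
    relations are excluded from inference."""

    def __init__(self, num_entities: int, num_relations: int, dim: int = 64):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def forward(self, head, tail, rel_gate):
        h, t = self.ent(head), self.ent(tail)                               # (B, dim)
        body = (h.unsqueeze(1) * self.rel.weight * t.unsqueeze(1)).sum(-1)  # (B, R)
        return (rel_gate * body).max(dim=-1).values                         # (B,)


# Toy usage: 5 entities, 4 relation types, a single query.
selector = RelationSelector(num_relations=4)
predictor = MaskedTriplePredictor(num_entities=5, num_relations=4)
gate = selector(torch.tensor([2]))                         # which relations to keep
score = predictor(torch.tensor([0]), torch.tensor([3]), gate)
```

Because the gate is produced per query, the set of relations available to the predictor changes dynamically, which is the mechanism the abstract credits for the efficiency gain over plain CTPs.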
Related papers
- An Interpretable Alternative to Neural Representation Learning for Rating Prediction -- Transparent Latent Class Modeling of User Reviews [8.392465185798713]
We present a transparent probabilistic model that organizes user and product latent classes based on the review information.
We evaluate our results in terms of both capacity for interpretability and predictive performances in comparison with popular text-based neural approaches.
arXiv Detail & Related papers (2024-06-17T07:07:42Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Neural Theorem Provers Delineating Search Area Using RNN [2.462063246087401]
A new method, RNNNTP, is proposed, which uses a generalized EM-based approach to continuously improve the computational efficiency of Neural Theorem Provers (NTPs).
The relation generator is trained effectively and interpretably, so that the whole model can be run in step with the progress of training, and computational efficiency is greatly improved.
arXiv Detail & Related papers (2022-03-14T10:44:11Z) - Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN)
Compared to others, LNNs offer strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z) - Analytically Tractable Inference in Deep Neural Networks [0.0]
Tractable Approximate Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We are demonstrating how TAGI matches or exceeds the performance of backpropagation, for training classic deep neural network architectures.
arXiv Detail & Related papers (2021-03-09T14:51:34Z) - Reinforcement Learning with External Knowledge by using Logical Neural
Networks [67.46162586940905]
A recent neuro-symbolic framework called the Logical Neural Networks (LNNs) can simultaneously provide key-properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Learning Reasoning Strategies in End-to-End Differentiable Proving [50.9791149533921]
Conditional Theorem Provers learn an optimal rule selection strategy via gradient-based optimisation (a minimal sketch of this selection idea follows the list).
We show that Conditional Theorem Provers are scalable and yield state-of-the-art results on the CLUTRR dataset.
arXiv Detail & Related papers (2020-07-13T16:22:14Z)
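The last entry above is the Conditional Theorem Provers work that RNNCTPs builds on: rule selection is learned by gradient-based optimisation rather than enumerated exhaustively. The sketch below is a hypothetical, simplified illustration of that selection idea only; the module name, the top-k expansion heuristic, and all dimensions are assumptions, not the CTP paper's exact formulation.

```python
# Illustrative goal-conditioned rule selection (assumed names and shapes).
import torch
import torch.nn as nn


class RuleSelect(nn.Module):
    """Maps a goal embedding to scores over candidate rules; only the
    top-scoring rules are expanded during proving."""

    def __init__(self, goal_dim: int, num_rules: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(goal_dim, goal_dim),
            nn.ReLU(),
            nn.Linear(goal_dim, num_rules),
        )

    def forward(self, goal: torch.Tensor, k: int = 2):
        scores = self.scorer(goal).softmax(dim=-1)   # (B, num_rules)
        topk = scores.topk(k, dim=-1)                # expand only k rules per goal
        return topk.indices, topk.values


# Toy usage: choose 2 of 8 candidate rules for a batch of goal embeddings.
select = RuleSelect(goal_dim=32, num_rules=8)
rule_ids, rule_weights = select(torch.randn(4, 32))
```

Because the scorer is differentiable, the selection strategy can be trained end-to-end with the prover, which is the property RNNCTPs reuses when filtering knowledge for its predictor.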
This list is automatically generated from the titles and abstracts of the papers on this site.