Neural Theorem Provers Delineating Search Area Using RNN
- URL: http://arxiv.org/abs/2203.06985v1
- Date: Mon, 14 Mar 2022 10:44:11 GMT
- Title: Neural Theorem Provers Delineating Search Area Using RNN
- Authors: Yu-hao Wu and Hou-biao Li
- Abstract summary: A new method, RNNNTP, is proposed, using a generalized EM-based approach to continuously improve the computational efficiency of Neural Theorem Provers (NTPs).
The relation generator is trained effectively and interpretably, so that the whole model proceeds in step with the progress of training, which also greatly improves computational efficiency.
- Score: 2.462063246087401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although traditional symbolic reasoning methods are highly interpretable,
their application to link prediction in knowledge graphs has been limited by
their computational inefficiency. This paper proposes a new method, RNNNTP,
which uses a generalized EM-based approach to continuously improve the
computational efficiency of Neural Theorem Provers (NTPs). RNNNTP is divided
into a relation generator and a predictor. The relation generator is trained
effectively and interpretably, so that the whole model proceeds in step with
the progress of training, which also greatly improves computational
efficiency. On all four datasets, this method shows competitive performance
on the link prediction task relative to traditional methods as well as one of
the current strongest competing methods.
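The abstract describes an alternation between a relation generator that narrows the search area and a predictor trained on the narrowed set. A minimal sketch of such a generalized EM-style loop is shown below; all names, scoring rules, and update rules here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical EM-style alternation: an E-like generator step prunes the
# relation search area to a few candidates, and an M-like predictor step
# updates scores only for those candidates. Toy scalar scores stand in for
# the neural generator/predictor of the paper.

def generator_step(relation_scores, top_k):
    """E-like step: keep only the top-k relations, shrinking the search area."""
    ranked = sorted(relation_scores, key=relation_scores.get, reverse=True)
    return ranked[:top_k]

def predictor_step(relation_scores, candidates, feedback, lr=0.5):
    """M-like step: update candidate scores from link-prediction feedback."""
    for rel in candidates:
        relation_scores[rel] += lr * feedback.get(rel, 0.0)
    return relation_scores

def em_loop(relation_scores, feedback, top_k=2, rounds=3):
    """Alternate the two steps; per-round cost scales with top_k, not |relations|."""
    for _ in range(rounds):
        candidates = generator_step(relation_scores, top_k)
        relation_scores = predictor_step(relation_scores, candidates, feedback)
    return relation_scores

# Toy example: three relations, feedback favours "bornIn".
scores = {"bornIn": 0.4, "livesIn": 0.3, "marriedTo": 0.1}
feedback = {"bornIn": 1.0, "livesIn": -0.5}
final = em_loop(scores, feedback)
```

The point of the alternation is that the predictor never scores the full relation vocabulary, which is where the claimed efficiency gain over plain NTPs would come from.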
Related papers
- Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate NTKs and alignment in the context of graph neural networks (GNNs).
Our results establish the theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
arXiv Detail & Related papers (2023-10-16T19:54:21Z)
- Distilling Knowledge from Resource Management Algorithms to Neural Networks: A Unified Training Assistance Approach [18.841969905928337]
A knowledge distillation (KD) based algorithm distillation (AD) method is proposed in this paper to improve the performance and convergence speed of the NN-based method.
This research paves the way for the integration of traditional optimization insights and emerging NN techniques in wireless communication system optimization.
arXiv Detail & Related papers (2023-08-15T00:30:58Z)
- Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- RNNCTPs: A Neural Symbolic Reasoning Method Using Dynamic Knowledge Partitioning Technology [2.462063246087401]
We propose a new neural symbolic reasoning method: RNNCTPs.
RNNCTPs improves computational efficiency by re-filtering the knowledge selection of Conditional Theorem Provers.
In all four datasets, the method shows competitive performance against traditional methods on the link prediction task.
arXiv Detail & Related papers (2022-04-19T11:18:03Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Learning to Guide a Saturation-Based Theorem Prover [9.228237801323042]
TRAIL is a deep learning-based approach to theorem proving that characterizes core elements of saturation-based theorem proving within a neural framework.
To the best of our knowledge, TRAIL is the first reinforcement learning-based approach to exceed the performance of a state-of-the-art traditional theorem prover.
arXiv Detail & Related papers (2021-06-07T18:35:57Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- A Lagrangian Dual-based Theory-guided Deep Neural Network [0.0]
The Lagrangian dual-based TgNN (TgNN-LD) is proposed to improve the effectiveness of TgNN.
Experimental results demonstrate the superiority of the Lagrangian dual-based TgNN.
arXiv Detail & Related papers (2020-08-24T02:06:19Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple literature-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.