Neural-Symbolic Commonsense Reasoner with Relation Predictors
- URL: http://arxiv.org/abs/2105.06717v1
- Date: Fri, 14 May 2021 08:54:25 GMT
- Title: Neural-Symbolic Commonsense Reasoner with Relation Predictors
- Authors: Farhad Moghimifar, Lizhen Qu, Yue Zhuo, Gholamreza Haffari, Mahsa
Baktashmotlagh
- Abstract summary: Commonsense reasoning aims to incorporate sets of commonsense facts, retrieved from Commonsense Knowledge Graphs (CKG), to draw conclusions about ordinary situations.
This dynamic nature also results in large-scale sparse Knowledge Graphs, where such a reasoning process is needed to predict relations between new events.
We present a neural-symbolic reasoner, which is capable of reasoning over large-scale dynamic CKGs.
- Score: 36.03049905851874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commonsense reasoning aims to incorporate sets of commonsense facts,
retrieved from Commonsense Knowledge Graphs (CKG), to draw conclusions about
ordinary situations. The dynamic nature of commonsense knowledge calls for
models capable of performing multi-hop reasoning over new situations. This
dynamism also gives rise to large-scale sparse Knowledge Graphs, where such a
reasoning process is needed to predict relations between new events. However,
existing approaches in this area are limited by treating CKGs as a fixed set
of facts, rendering them unfit for reasoning over new, unseen situations and
events. In this paper, we present a neural-symbolic reasoner that is capable
of reasoning over large-scale dynamic CKGs. The logic rules for reasoning over
CKGs are learned by our model during training. In addition to providing
interpretable explanations, the learned logic rules help to generalise
prediction to newly introduced events. Experimental results on the task of
link prediction on CKGs demonstrate the effectiveness of our model, which
outperforms state-of-the-art models.
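The core idea of applying learned logic rules for link prediction over a CKG can be illustrated with a toy example. The sketch below is a hedged illustration, not the authors' implementation: the relation names (`xEffect`, `xWant`), the rule, and its confidence weight are all hypothetical, and the model in the paper learns such rules during training rather than taking them as given.

```python
# Minimal sketch (not the paper's code): applying one weighted logic rule
# to predict a missing relation in a toy commonsense knowledge graph.
# The rule, relation names, and weight below are illustrative assumptions.
from collections import defaultdict

# Toy CKG as a set of (head, relation, tail) triples.
triples = {
    ("PersonX goes jogging", "xEffect", "PersonX gets tired"),
    ("PersonX gets tired", "xWant", "PersonX rests"),
}

# Index: (head, relation) -> set of tails, for fast rule application.
index = defaultdict(set)
for h, r, t in triples:
    index[(h, r)].add(t)

# A learned rule might read:
#   xEffect(a, b) AND xWant(b, c)  =>  xWant(a, c)   [confidence 0.8]
rule = (("xEffect", "xWant"), "xWant", 0.8)

def apply_rule(rule, index):
    """Chain the rule body over the graph; emit weighted predicted links."""
    (r1, r2), head_rel, weight = rule
    predictions = []
    for (h, r), tails in list(index.items()):
        if r != r1:
            continue
        for mid in tails:                      # entity bound to variable b
            for tail in index.get((mid, r2), ()):
                predictions.append((h, head_rel, tail, weight))
    return predictions

preds = apply_rule(rule, index)
print(preds)  # [('PersonX goes jogging', 'xWant', 'PersonX rests', 0.8)]
```

Because the rule refers to relation patterns rather than fixed entities, it transfers to events never seen during training, which is the generalisation behaviour the abstract describes.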
Related papers
- Neural Probabilistic Logic Learning for Knowledge Graph Reasoning [10.473897846826956]
This paper aims to design a reasoning framework that achieves accurate reasoning on knowledge graphs.
We introduce a scoring module that effectively enhances the expressive power of embedding networks.
We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference.
arXiv Detail & Related papers (2024-07-04T07:45:46Z)
- Selective Temporal Knowledge Graph Reasoning [70.11788354442218]
Temporal Knowledge Graph (TKG) aims to predict future facts based on given historical ones.
Existing TKG reasoning models are unable to abstain from predictions about which they are uncertain.
We propose an abstention mechanism for TKG reasoning, which helps the existing models make selective, instead of indiscriminate, predictions.
arXiv Detail & Related papers (2024-04-02T06:56:21Z)
- Advancing Abductive Reasoning in Knowledge Graphs through Complex Logical Hypothesis Generation [43.26412690886471]
This paper introduces the task of complex logical hypothesis generation, as an initial step towards abductive logical reasoning with Knowledge Graphs.
We find that the supervised trained generative model can generate logical hypotheses that are structurally closer to the reference hypothesis.
We introduce the Reinforcement Learning from Knowledge Graph (RLF-KG) method, which minimizes differences between observations and conclusions drawn from generated hypotheses according to the KG.
arXiv Detail & Related papers (2023-12-25T08:06:20Z)
- DREAM: Adaptive Reinforcement Learning based on Attention Mechanism for Temporal Knowledge Graph Reasoning [46.16322824448241]
We propose an adaptive reinforcement learning model based on attention mechanism (DREAM) to predict missing elements in the future.
Experimental results demonstrate that DREAM outperforms state-of-the-art models on public datasets.
arXiv Detail & Related papers (2023-04-08T10:57:37Z)
- Logic and Commonsense-Guided Temporal Knowledge Graph Completion [9.868206060374991]
A temporal knowledge graph (TKG) stores events derived from data that involve time.
We propose a Logic and Commonsense-Guided Embedding model (LCGE) to jointly learn the time-sensitive representation involving timeliness and causality of events.
arXiv Detail & Related papers (2022-11-30T10:06:55Z)
- Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (for short, C-DAGs)
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
- Relating Graph Neural Networks to Structural Causal Models [17.276657786213015]
Causality can be described in terms of a structural causal model (SCM) that carries information on the variables of interest and their mechanistic relations.
We present a theoretical analysis that establishes a novel connection between GNNs and SCMs.
We then establish a new model class for GNN-based causal inference that is necessary and sufficient for causal effect identification.
arXiv Detail & Related papers (2021-09-09T11:16:31Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Exploring the Limits of Few-Shot Link Prediction in Knowledge Graphs [49.6661602019124]
We study a spectrum of models derived by generalizing the current state of the art for few-shot link prediction.
We find that a simple zero-shot baseline - which ignores any relation-specific information - achieves surprisingly strong performance.
Experiments on carefully crafted synthetic datasets show that having only a few examples of a relation fundamentally limits models from using fine-grained structural information.
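A relation-agnostic zero-shot baseline of the kind described above can be sketched in a few lines. This is an illustrative assumption of one such baseline (ranking candidate tail entities purely by how often they appear as tails in the background graph), not the paper's exact method; the toy triples are hypothetical.

```python
# Minimal sketch (illustrative, not the paper's code): a zero-shot link
# prediction baseline that ignores all relation-specific information and
# ranks candidate tails by their frequency in the background graph.
from collections import Counter

background_triples = [
    ("a", "r1", "x"), ("b", "r2", "x"), ("c", "r3", "y"),
]

# Popularity of each entity as a tail, independent of any relation.
tail_counts = Counter(t for _, _, t in background_triples)

def rank_tails(candidates):
    """Rank candidates by tail frequency; the query relation is never used."""
    return sorted(candidates, key=lambda e: tail_counts[e], reverse=True)

print(rank_tails(["x", "y", "z"]))  # ['x', 'y', 'z']
```

That a baseline this simple can be competitive is exactly the paper's point: with only a few examples per relation, models struggle to extract much more signal than raw entity statistics provide.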
arXiv Detail & Related papers (2021-02-05T21:04:31Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.