Logic Diffusion for Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2306.03515v1
- Date: Tue, 6 Jun 2023 09:01:17 GMT
- Title: Logic Diffusion for Knowledge Graph Reasoning
- Authors: Xiaoying Xie, Biao Gong, Yiliang Lv, Zhen Han, Guoshuai Zhao, Xueming Qian
- Abstract summary: We propose a plug-in module called Logic Diffusion (LoD) to discover unseen queries from surroundings.
LoD achieves dynamic equilibrium between different kinds of query patterns.
Experiments on four public datasets show that mainstream knowledge graph reasoning models equipped with LoD outperform state-of-the-art baselines.
- Score: 29.260922651325412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most recent works focus on answering first-order logical queries to
explore knowledge graph reasoning via multi-hop logic predictions. However,
existing reasoning models are limited by the circumscribed logical paradigms of
their training samples, which leads to weak generalization to unseen logic. To
address this issue, we propose a plug-in module called Logic Diffusion (LoD)
that discovers unseen queries from surroundings and achieves dynamic
equilibrium between different kinds of patterns. The basic idea of LoD is
relation diffusion and sub-logic sampling by random walking, together with a
special training mechanism called gradient adaption. In addition, LoD is
accompanied by a novel loss function that keeps logical diffusion robust when
the training or testing sets contain noisy data. Extensive experiments on four
public datasets demonstrate that mainstream knowledge graph reasoning models
equipped with LoD outperform the state of the art. Moreover, our ablation
study confirms the general effectiveness of LoD on noise-rich knowledge graphs.
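To make the mechanism in the abstract concrete, here is a minimal sketch of sampling sub-logic by random walking over a toy knowledge graph. The triple format, walk length, and the `random_walk_query` helper are illustrative assumptions, not the paper's actual API.

```python
import random

# Toy knowledge graph as (head, relation, tail) triples.
TRIPLES = [
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "paris"),
    ("paris", "capital_of", "france"),
    ("alice", "friend_of", "bob"),
]

# Adjacency index: head -> list of (relation, tail).
ADJ = {}
for h, r, t in TRIPLES:
    ADJ.setdefault(h, []).append((r, t))

def random_walk_query(anchor, max_hops=3, rng=random):
    """Sample a multi-hop relation chain (a 'sub-logic' query) by random walking."""
    path, node = [], anchor
    for _ in range(max_hops):
        if node not in ADJ:
            break
        rel, node = rng.choice(ADJ[node])
        path.append(rel)
    return anchor, tuple(path), node  # anchor entity, relation chain, reached answer

if __name__ == "__main__":
    random.seed(0)
    print(random_walk_query("alice"))  # e.g. ('alice', ('works_at', 'located_in'), 'paris')
```

Walks like this diffuse outward from an anchor entity, so query patterns absent from the training set can still be sampled from the surrounding graph.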
Related papers
- Neural Probabilistic Logic Learning for Knowledge Graph Reasoning [10.473897846826956]
This paper aims to design a reasoning framework that achieves accurate reasoning on knowledge graphs.
We introduce a scoring module that effectively enhances the expressive power of embedding networks.
We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference.
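The summary does not specify the scoring module's form; as a hedged illustration, the sketch below shows a generic DistMult-style triple scorer of the kind such embedding frameworks typically build on. All names, dimensions, and the random initialization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_ENT, N_REL = 16, 100, 10

# Randomly initialized entity/relation embeddings (stand-ins for trained ones).
ent = rng.normal(size=(N_ENT, DIM))
rel = rng.normal(size=(N_REL, DIM))

def score(h, r, t):
    """DistMult-style triple score: sum_i e_h[i] * w_r[i] * e_t[i]."""
    return float(np.sum(ent[h] * rel[r] * ent[t]))

print(score(0, 3, 42))  # higher scores mean the triple is judged more plausible
```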
arXiv Detail & Related papers (2024-07-04T07:45:46Z)
- PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering [27.50008553118866]
We model the logical reasoning task by transforming each logical sample into reasoning paths.
To expand the diversity of the logical samples, we propose an atom extension strategy supported by equivalent logical formulas.
Experiments show that PathReasoner achieves competitive performance on two logical reasoning benchmarks and exhibits strong generalization ability.
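One standard equivalence that can extend logical atoms is contraposition (A -> B is equivalent to not B -> not A). PathReasoner's actual extension rules may differ; the sketch below only illustrates the general idea with a hypothetical atom representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    """An implication atom 'premise -> conclusion', each side possibly negated."""
    premise: str
    premise_neg: bool
    conclusion: str
    conclusion_neg: bool

def contrapose(atom: Atom) -> Atom:
    """A -> B  is logically equivalent to  not B -> not A."""
    return Atom(atom.conclusion, not atom.conclusion_neg,
                atom.premise, not atom.premise_neg)

a = Atom("it_rains", False, "ground_wet", False)
print(contrapose(a))  # not ground_wet -> not it_rains
```

Applying such equivalences to each sample yields new, logically identical reasoning paths, which is how the diversity of training samples can be expanded without changing their semantics.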
arXiv Detail & Related papers (2024-05-29T14:14:05Z)
- Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation [110.71955853831707]
We view LMs as deriving new conclusions by aggregating indirect reasoning paths seen at pre-training time.
We formalize the reasoning paths as random walk paths on the knowledge/reasoning graphs.
Experiments and analysis on multiple KG and CoT datasets reveal the effect of training on random walk paths.
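The paths-as-random-walks view can be sketched directly: sample walks from the query entity and score candidate answers by how often the walks reach them. The toy graph, walk length, and counting scheme below are assumptions for illustration only.

```python
import random
from collections import Counter

EDGES = {  # entity -> list of (relation, neighbor)
    "q": [("r1", "a"), ("r1", "b")],
    "a": [("r2", "ans")],
    "b": [("r2", "ans"), ("r3", "other")],
}

def aggregate_paths(start, n_walks=1000, hops=2, rng=random):
    """Score candidate answers by how often random walks from `start` reach them."""
    counts = Counter()
    for _ in range(n_walks):
        node = start
        for _ in range(hops):
            if node not in EDGES:
                break
            _, node = rng.choice(EDGES[node])
        counts[node] += 1
    return counts

random.seed(0)
print(aggregate_paths("q").most_common())  # 'ans' dominates: most walks converge on it
```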
arXiv Detail & Related papers (2024-02-05T18:25:51Z)
- DetermLR: Augmenting LLM-based Logical Reasoning from Indeterminacy to Determinacy [76.58614128865652]
We propose DetermLR, a novel perspective that rethinks the reasoning process as an evolution from indeterminacy to determinacy.
First, we categorize known conditions into two types: determinate and indeterminate premises. This provides an overall direction for the reasoning process and guides LLMs in converting indeterminate data into progressively determinate insights.
We automate the storage and extraction of available premises and reasoning paths with reasoning memory, preserving historical reasoning details for subsequent reasoning steps.
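DetermLR is an LLM prompting strategy, so code can only sketch the premise bookkeeping it describes. The `ReasoningMemory` class and the promotion step below are hypothetical stand-ins for what an LLM call would supply.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningMemory:
    """Tracks determinate/indeterminate premises plus the reasoning trace."""
    determinate: set = field(default_factory=set)
    indeterminate: set = field(default_factory=set)
    trace: list = field(default_factory=list)

    def promote(self, premise: str, justification: str) -> None:
        """Move a premise from indeterminate to determinate, logging the step."""
        self.indeterminate.discard(premise)
        self.determinate.add(premise)
        self.trace.append((premise, justification))

mem = ReasoningMemory(indeterminate={"x > 0", "x is even"})
mem.promote("x > 0", "derived from x = 4")  # in DetermLR an LLM supplies this step
print(mem.determinate, mem.trace)
```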
arXiv Detail & Related papers (2023-10-28T10:05:51Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
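The fuzzy relaxation step can be illustrated with the Reichenbach implication: a rule such as 'dog(x) -> animal(x)' over per-pixel class probabilities becomes a differentiable truth value that can be penalized as a loss. LOGICSEG's exact t-norms and rule set may differ; this is only the general recipe.

```python
import numpy as np

# Per-pixel predicted probabilities for two classes on a 2x2 image (toy values).
p_dog    = np.array([[0.9, 0.1], [0.8, 0.2]])
p_animal = np.array([[0.95, 0.5], [0.3, 0.9]])

# Fuzzy relaxation of 'dog(x) -> animal(x)' via the Reichenbach implication:
# truth(A -> B) = 1 - truth(A) * (1 - truth(B)), computed per pixel.
truth = 1.0 - p_dog * (1.0 - p_animal)

# Logic-induced loss: penalize pixels where the rule is violated.
logic_loss = -np.log(truth + 1e-8).mean()
print(truth.round(3), float(logic_loss))
```

Because the relaxed truth value is differentiable, the loss flows back through the network, which is what "logic-induced network training" refers to.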
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [85.12429517510311]
LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
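A minimal sketch of the conflict idea, assuming one mutual-exclusion rule: if two classes declared exclusive are both predicted confidently at the same pixel, that pixel's pseudo label is flagged and withheld from training. The rule, classes, and threshold below are toy assumptions, not LogicDiag's rule set.

```python
import numpy as np

# Pseudo-label probabilities per pixel for two mutually exclusive classes.
p_road  = np.array([0.8, 0.4, 0.9, 0.1])
p_water = np.array([0.7, 0.3, 0.05, 0.9])

THRESH = 0.5  # confidence level at which a class counts as asserted

# Rule: not (road(x) and water(x)). A pixel asserting both is a conflict.
conflict = (p_road > THRESH) & (p_water > THRESH)

# Diagnostic step: mask conflicting pixels out of the pseudo-label loss.
keep = ~conflict
print(conflict)  # [ True False False False]
```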
arXiv Detail & Related papers (2023-08-24T06:50:07Z)
- Dual Box Embeddings for the Description Logic EL++ [16.70961576041243]
Similar to Knowledge Graphs (KGs), ontologies are often incomplete, and maintaining and constructing them has proved challenging.
Similar to KGs, a promising approach is to learn embeddings in a latent vector space, while additionally ensuring they adhere to the semantics of the underlying DL.
We propose a novel ontology embedding method named Box$^2$EL for the DL EL++, which represents both concepts and roles as boxes.
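The box idea can be stated in a few lines: a concept is an axis-aligned box, and a subsumption C ⊑ D corresponds to box(C) lying inside box(D). The soft containment score below is a generic formulation, not Box$^2$EL's exact parameterization or loss.

```python
import numpy as np

def box(center, offset):
    """Axis-aligned box as (lower corner, upper corner)."""
    c, o = np.asarray(center, float), np.abs(np.asarray(offset, float))
    return c - o, c + o

def containment_violation(inner, outer):
    """0 when `inner` lies fully inside `outer`; grows with protrusion."""
    (il, iu), (ol, ou) = inner, outer
    return float(np.maximum(ol - il, 0).sum() + np.maximum(iu - ou, 0).sum())

dog    = box([0.0, 0.0], [1.0, 1.0])
animal = box([0.0, 0.0], [2.0, 2.0])
print(containment_violation(dog, animal))  # 0.0 -> 'dog subsumed by animal' holds
print(containment_violation(animal, dog))  # 4.0 -> reverse subsumption violated
```

Training would minimize such violation terms for asserted axioms; representing roles as boxes as well is what distinguishes this family of methods from point embeddings.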
arXiv Detail & Related papers (2023-01-26T14:13:37Z)
- Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
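Learning AND, OR, and NOT as neural modules can be sketched in PyTorch: each operation is a small MLP over vector representations, and logical laws such as double negation are enforced as regularizers. The module sizes and the particular regularizer below are illustrative, not LINN's exact architecture.

```python
import torch
import torch.nn as nn

DIM = 32

class NeuralLogic(nn.Module):
    """AND/OR/NOT as learned modules over d-dimensional 'truth vectors'."""
    def __init__(self, d=DIM):
        super().__init__()
        mlp2 = lambda: nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
        self.AND = mlp2()                  # takes two concatenated operands
        self.OR = mlp2()
        self.NOT = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def double_negation_reg(self, x):
        """Regularizer pushing NOT(NOT(x)) back toward x."""
        return ((self.NOT(self.NOT(x)) - x) ** 2).mean()

net = NeuralLogic()
a, b = torch.randn(4, DIM), torch.randn(4, DIM)
expr = net.OR(torch.cat([net.NOT(a), b], dim=-1))  # encodes 'a -> b' as NOT(a) OR b
loss = net.double_negation_reg(a)
print(expr.shape, loss.item())
```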
arXiv Detail & Related papers (2020-08-20T14:53:23Z)