JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering
- URL: http://arxiv.org/abs/2112.02732v1
- Date: Mon, 6 Dec 2021 01:46:46 GMT
- Title: JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering
- Authors: Yueqing Sun, Qi Shi, Le Qi, Yu Zhang
- Abstract summary: Existing KG-augmented models for question answering primarily focus on designing elaborate Graph Neural Networks (GNNs) to model knowledge graphs (KGs).
We propose a novel model, JointLK, which addresses these limitations through joint reasoning over LMs and GNNs and a dynamic KG pruning mechanism.
Our results on the CommonsenseQA and OpenBookQA datasets demonstrate that our modal fusion and knowledge pruning methods can make better use of relevant knowledge for reasoning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing KG-augmented models for question answering primarily focus on
designing elaborate Graph Neural Networks (GNNs) to model knowledge graphs
(KGs). However, they ignore (i) effectively fusing and reasoning over the
question context representations and the KG representations, and (ii)
automatically selecting relevant nodes from noisy KGs during reasoning. In
this paper, we propose a novel model, JointLK, which addresses both
limitations through joint reasoning over LMs and GNNs and a dynamic KG
pruning mechanism. Specifically, JointLK performs joint reasoning between the
LM and the GNN through a novel dense bidirectional attention module, in which
each question token attends to KG nodes and each KG node attends to question
tokens, and the two modal representations are fused and updated mutually
through multi-step interactions. The dynamic pruning module then uses the
attention weights generated during joint reasoning to recursively prune
irrelevant KG nodes.
Our results on the CommonsenseQA and OpenBookQA datasets demonstrate that our
modal fusion and knowledge pruning methods can make better use of relevant
knowledge for reasoning.
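To make the two mechanisms concrete, here is a minimal sketch, not the authors' released code: one dense bidirectional attention step between question-token and KG-node representations, followed by attention-driven node pruning, iterated over several steps. The names (BidirectionalFusion, prune_nodes, keep_ratio) and the single-head attention form are illustrative assumptions.

```python
# Minimal sketch of JointLK-style joint reasoning (assumed, not the paper's code):
# bidirectional token<->node attention, then pruning of low-relevance nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.tok_proj = nn.Linear(dim, dim)    # projects question tokens
        self.node_proj = nn.Linear(dim, dim)   # projects KG nodes
        self.scale = dim ** -0.5

    def forward(self, tokens, nodes):
        # tokens: (T, d) question token states; nodes: (N, d) KG node states
        scores = (self.tok_proj(tokens) @ self.node_proj(nodes).T) * self.scale  # (T, N)
        tok2node = F.softmax(scores, dim=-1)     # each token attends to nodes
        node2tok = F.softmax(scores.T, dim=-1)   # each node attends to tokens
        tokens = tokens + tok2node @ nodes       # fuse KG info into tokens
        nodes = nodes + node2tok @ tokens        # fuse text info into nodes
        # node relevance: total attention mass each node receives from tokens
        relevance = tok2node.sum(dim=0)          # (N,)
        return tokens, nodes, relevance

def prune_nodes(nodes, relevance, keep_ratio=0.7):
    # Keep the top-k most attended nodes; applied recursively across steps.
    k = max(1, int(keep_ratio * nodes.size(0)))
    idx = relevance.topk(k).indices
    return nodes[idx]

# Multi-step interaction: fuse the two modalities, then prune, repeatedly.
fusion = BidirectionalFusion(dim=64)
tokens, nodes = torch.randn(12, 64), torch.randn(30, 64)
for _ in range(3):
    tokens, nodes, rel = fusion(tokens, nodes)
    nodes = prune_nodes(nodes, rel)
```

Each iteration fuses the two modalities and then keeps only the most-attended nodes, so later reasoning steps operate on a progressively smaller, more relevant subgraph.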
Related papers
- A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning
We propose a prompt-based KG foundation model via in-context learning, namely KG-ICL, to achieve a universal reasoning ability.
To encode prompt graphs so that they generalize to unseen entities and relations in queries, we first propose a unified tokenizer.
Then, we propose two message passing neural networks to perform prompt encoding and KG reasoning, respectively.
arXiv Detail & Related papers (2024-10-16T06:47:18Z)
- GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning
We introduce GNN-RAG, a novel method for combining language understanding abilities of LLMs with the reasoning abilities of GNNs in a retrieval-augmented generation (RAG) style.
In our GNN-RAG framework, the GNN acts as a dense subgraph reasoner to extract useful graph information.
Experiments show that GNN-RAG achieves state-of-the-art performance on two widely used KGQA benchmarks.
arXiv Detail & Related papers (2024-05-30T15:14:24Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters using 20,000 subgraphs with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- Dynamic Relevance Graph Network for Knowledge-Aware Question Answering
This work investigates the challenge of learning and reasoning for Commonsense Question Answering given an external source of knowledge.
We propose a novel graph neural network architecture, called Dynamic Relevance Graph Network (DRGN).
DRGN operates on a given KG subgraph based on the question and answer entities and uses the relevance scores between the nodes to establish new edges.
arXiv Detail & Related papers (2022-09-20T18:52:05Z)
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
arXiv Detail & Related papers (2022-01-21T19:00:05Z)
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
- Learning Intents behind Interactions with Knowledge Graph for Recommendation
Knowledge graph (KG) plays an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relations at the fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN).
arXiv Detail & Related papers (2021-02-14T03:21:36Z)
- Jointly Cross- and Self-Modal Graph Attention Network for Query-Based Moment Localization
We propose a novel Cross- and Self-Modal Graph Attention Network (CSMGAN) that recasts this task as iterative message passing over a joint graph.
CSMGAN effectively captures high-order interactions between the two modalities, enabling more precise localization.
arXiv Detail & Related papers (2020-08-04T08:25:24Z)
- Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)