QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question
Answering
- URL: http://arxiv.org/abs/2104.06378v1
- Date: Tue, 13 Apr 2021 17:32:51 GMT
- Title: QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question
Answering
- Authors: Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang and Jure
Leskovec
- Abstract summary: We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
- Score: 122.84513233992422
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of answering questions using knowledge from pre-trained language
models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA
context (question and answer choice), methods need to (i) identify relevant
knowledge from large KGs, and (ii) perform joint reasoning over the QA context
and KG. Here we propose a new model, QA-GNN, which addresses the above
challenges through two key innovations: (i) relevance scoring, where we use LMs
to estimate the importance of KG nodes relative to the given QA context, and
(ii) joint reasoning, where we connect the QA context and KG to form a joint
graph, and mutually update their representations through graph-based message
passing. We evaluate QA-GNN on the CommonsenseQA and OpenBookQA datasets, and
show its improvement over existing LM and LM+KG models, as well as its
capability to perform interpretable and structured reasoning, e.g., correctly
handling negation in questions.
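As an illustration of the two innovations described above, the sketch below (PyTorch-style, written for this summary and not taken from the authors' released code; names such as RelevanceScorer and JointGraphLayer are invented here) scores retrieved KG nodes against a pooled QA-context embedding, prunes low-scoring nodes, and then runs one round of message passing over a joint graph in which the QA context is added as an extra node connected to every kept KG node.

```python
# Minimal sketch of QA-GNN's two components (illustrative only, not the
# authors' released implementation):
#   (i) relevance scoring of retrieved KG nodes w.r.t. the QA context, and
#  (ii) joint message passing over a graph that adds the QA context as a node.
import torch
import torch.nn as nn


class RelevanceScorer(nn.Module):
    """Scores each KG node embedding against the pooled QA-context embedding.

    In the paper the score comes from an LM applied to the concatenated text
    of the QA context and the node label; a learned dot product stands in here.
    """

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)

    def forward(self, qa_context, node_embs):
        # qa_context: (dim,), node_embs: (num_nodes, dim) -> scores: (num_nodes,)
        return torch.sigmoid(node_embs @ self.query(qa_context))


class JointGraphLayer(nn.Module):
    """One round of mean-aggregation message passing over the joint graph."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, node_embs, adj):
        # adj: (N, N) adjacency with self-loops; rows are normalised to means.
        norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        messages = norm @ self.msg(node_embs)
        return self.update(messages, node_embs)


if __name__ == "__main__":
    dim, num_kg_nodes = 64, 5
    qa_context = torch.randn(dim)              # pooled LM embedding of question + answer choice
    kg_nodes = torch.randn(num_kg_nodes, dim)  # embeddings of retrieved KG entities

    # (i) Relevance scoring: keep only nodes the scorer deems relevant.
    scores = RelevanceScorer(dim)(qa_context, kg_nodes)
    kept = kg_nodes[scores > 0.4]

    # (ii) Joint reasoning: prepend the QA context as an extra node connected to
    # every kept KG node, then run message passing over the resulting joint graph.
    joint = torch.cat([qa_context.unsqueeze(0), kept], dim=0)
    n = joint.size(0)
    adj = torch.eye(n)
    adj[0, 1:] = 1.0   # context node -> KG nodes
    adj[1:, 0] = 1.0   # KG nodes -> context node
    updated = JointGraphLayer(dim)(joint, adj)
    print(updated.shape)  # torch.Size([n, 64]): jointly updated representations
```

In the paper itself, the relevance score is produced by the LM and the message passing uses a relation- and node-type-aware attention mechanism run for multiple layers; the dot-product scorer and mean-aggregation layer above are deliberately simplified stand-ins.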
Related papers
- FusionMind -- Improving question and answering with external context fusion [0.0]
We studied the impact of contextual knowledge on the question-answering (QA) objective using pre-trained language models (LMs) and knowledge graphs (KGs).
We found that incorporating the context of knowledge facts led to a significant improvement in performance, suggesting that integrating contextual knowledge facts is particularly impactful for enhancing question answering.
arXiv Detail & Related papers (2023-12-31T03:51:31Z)
- GrapeQA: GRaph Augmentation and Pruning to Enhance Question-Answering [19.491275771319074]
Commonsense question-answering (QA) methods combine the power of pre-trained Language Models (LMs) with the reasoning provided by Knowledge Graphs (KGs).
A typical approach collects nodes relevant to the QA pair from a KG to form a Working Graph (WG), followed by reasoning using Graph Neural Networks (GNNs).
We propose GrapeQA with two simple improvements on the WG: (i) Prominent Entities for Graph Augmentation identifies relevant text chunks from the QA pair and augments the WG with corresponding latent representations from the LM, and (ii) Context-Aware Node Pruning removes nodes that are less relevant to the QA.
arXiv Detail & Related papers (2023-03-22T05:35:29Z)
- FiTs: Fine-grained Two-stage Training for Knowledge-aware Question Answering [47.495991137191425]
We propose a Fine-grained Two-stage training framework (FiTs) to boost the performance of knowledge-aware question answering (KAQA) systems.
The first stage aims at aligning representations from the PLM and the KG, thus bridging the modality gaps between them.
The second stage, called knowledge-aware fine-tuning, aims to improve the model's joint reasoning ability.
arXiv Detail & Related papers (2023-02-23T06:25:51Z)
- Relation-Aware Language-Graph Transformer for Question Answering [21.244992938222246]
We propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations.
Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations.
We validate the effectiveness of QAT on commonsense question answering datasets like CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE.
arXiv Detail & Related papers (2022-12-02T05:10:10Z)
- VQA-GNN: Reasoning with Multimodal Knowledge via Graph Neural Networks for Visual Question Answering [79.22069768972207]
We propose VQA-GNN, a new VQA model that performs bidirectional fusion between unstructured and structured multimodal knowledge to obtain unified knowledge representations.
Specifically, we inter-connect the scene graph and the concept graph through a super node that represents the QA context.
On two challenging VQA tasks, our method outperforms strong baseline VQA methods by 3.2% on VCR and 4.6% on GQA, suggesting its strength in performing concept-level reasoning.
arXiv Detail & Related papers (2022-05-23T17:55:34Z)
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [159.9645181522436]
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
arXiv Detail & Related papers (2022-01-21T19:00:05Z)
- Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a given ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling (a toy sketch of this kind of transformation appears after this list).
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
- Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
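The declarative-to-question transformation mentioned in the summarization-informed QG entry above can be illustrated with a toy heuristic. The sketch below is an invented example written for this summary, not the paper's rule set: it uses spaCy for named entity recognition and dependency parsing (semantic role labeling is omitted), replaces a subject entity with a wh-word chosen from its entity type, and returns the removed entity as the answer.

```python
# Toy declarative-to-question heuristic in the spirit of the
# summarization-informed QG entry above. It is NOT the paper's rule set:
# it only uses spaCy NER + dependency parsing (no semantic role labeling)
# and handles the simple case where a named entity is the sentence subject.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

# Wh-word chosen from the entity type of the answer span (illustrative mapping).
WH_BY_ENT_TYPE = {"PERSON": "Who", "ORG": "What organization",
                  "GPE": "Where", "DATE": "When"}


def declarative_to_question(sentence, nlp):
    """Return a (question, answer) pair, or None if the heuristic does not apply."""
    doc = nlp(sentence)
    for ent in doc.ents:
        wh = WH_BY_ENT_TYPE.get(ent.label_)
        # Use the dependency parse: only front a wh-word if the entity is the subject.
        if wh is None or ent.root.dep_ not in ("nsubj", "nsubjpass"):
            continue
        question = (sentence[:ent.start_char] + wh +
                    sentence[ent.end_char:]).rstrip(" .") + "?"
        return question, ent.text
    return None


if __name__ == "__main__":
    nlp = spacy.load("en_core_web_sm")
    print(declarative_to_question("Marie Curie won the Nobel Prize in 1903.", nlp))
    # Expected output along the lines of:
    # ('Who won the Nobel Prize in 1903?', 'Marie Curie')
```

A real pipeline would cover many more syntactic patterns and, as the entry notes, use semantic role labeling to select answer spans beyond named entities.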
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences arising from its use.