Relation-Aware Language-Graph Transformer for Question Answering
- URL: http://arxiv.org/abs/2212.00975v2
- Date: Tue, 25 Apr 2023 09:02:15 GMT
- Title: Relation-Aware Language-Graph Transformer for Question Answering
- Authors: Jinyoung Park, Hyeong Kyu Choi, Juyeon Ko, Hyeonjin Park, Ji-Hoon Kim,
Jisu Jeong, Kyungmin Kim, Hyunwoo J. Kim
- Abstract summary: We propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations.
Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations.
We validate the effectiveness of QAT on commonsense question answering datasets like CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE.
- Score: 21.244992938222246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question Answering (QA) is a task that entails reasoning over natural
language contexts, and many relevant works augment language models (LMs) with
graph neural networks (GNNs) to encode the Knowledge Graph (KG) information.
However, most existing GNN-based modules for QA do not take advantage of rich
relational information of KGs and depend on limited information interaction
between the LM and the KG. To address these issues, we propose Question
Answering Transformer (QAT), which is designed to jointly reason over language
and graphs with respect to entity relations in a unified manner. Specifically,
QAT constructs Meta-Path tokens, which learn relation-centric embeddings based
on diverse structural and semantic relations. Then, our Relation-Aware
Self-Attention module comprehensively integrates different modalities via the
Cross-Modal Relative Position Bias, which guides information exchange between
relevant entities of different modalities. We validate the effectiveness of QAT
on commonsense question answering datasets like CommonsenseQA and OpenBookQA,
and on a medical question answering dataset, MedQA-USMLE. On all the datasets,
our method achieves state-of-the-art performance. Our code is available at
http://github.com/mlvlab/QAT.
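The abstract names two mechanisms but this digest includes no code, so below is a minimal PyTorch sketch of how Meta-Path tokens and a relation-aware attention layer with a Cross-Modal Relative Position Bias might be wired together. All class names, tensor layouts, and the mean-pooling choice are illustrative assumptions, not the authors' released implementation (see http://github.com/mlvlab/QAT for the real code).

```python
import torch
import torch.nn as nn

class MetaPathTokens(nn.Module):
    """Sketch of Meta-Path token construction: each token summarizes the
    relations along one KG path. Mean pooling is an assumption made here
    for brevity, not necessarily the paper's design."""

    def __init__(self, num_relations: int, dim: int):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)

    def forward(self, paths: torch.Tensor) -> torch.Tensor:
        # paths: (batch, num_paths, path_len) relation ids along each meta-path
        return self.rel_emb(paths).mean(dim=2)  # (batch, num_paths, dim)

class RelationAwareSelfAttention(nn.Module):
    """Single-head sketch: joint attention over concatenated language and
    Meta-Path tokens, with a learned per-relation bias added to the logits
    of token pairs linked by a relation (a stand-in for the Cross-Modal
    Relative Position Bias)."""

    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # index 0 is reserved for "no relation between tokens i and j"
        self.rel_bias = nn.Embedding(num_relations + 1, 1)
        self.scale = dim ** -0.5

    def forward(self, tokens: torch.Tensor, rel_ids: torch.Tensor) -> torch.Tensor:
        # tokens:  (batch, seq, dim) -- language tokens followed by Meta-Path tokens
        # rel_ids: (batch, seq, seq) -- relation id linking tokens i and j (0 = none)
        q, k, v = self.q(tokens), self.k(tokens), self.v(tokens)
        logits = torch.einsum("bid,bjd->bij", q, k) * self.scale
        logits = logits + self.rel_bias(rel_ids).squeeze(-1)  # bias related pairs
        return torch.einsum("bij,bjd->bid", logits.softmax(dim=-1), v)
```

In the real model, multiple heads and layers would be stacked, and the relation ids linking language tokens to graph tokens would come from the meta-path extraction step.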
Related papers
- QirK: Question Answering via Intermediate Representation on Knowledge Graphs [6.527176546718545]
We demonstrate QirK, a system for answering natural language questions on Knowledge Graphs (KGs).
QirK can answer structurally complex questions that are still beyond the reach of emerging Large Language Models (LLMs).
A short video demonstrating QirK is available at https://youtu.be/6c81BLmOZ0U.
arXiv Detail & Related papers (2024-08-14T12:19:25Z)
- FusionMind -- Improving question and answering with external context fusion [0.0]
We studied the impact of contextual knowledge on the question-answering (QA) objective using pre-trained language models (LMs) and knowledge graphs (KGs).
We found that incorporating knowledge facts context led to a significant improvement in performance.
This suggests that integrating contextual knowledge facts can be particularly impactful for question-answering performance.
arXiv Detail & Related papers (2023-12-31T03:51:31Z)
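FusionMind's summary above describes fusing knowledge-fact context into the QA input. One plausible reading is simply serializing retrieved facts alongside the question before LM encoding, sketched below; the function name and prompt format are assumptions, not the paper's method.

```python
from typing import List, Tuple

# A fact is a (head, relation, tail) triple retrieved from a KG.
Fact = Tuple[str, str, str]

def build_fused_input(question: str, choice: str, facts: List[Fact]) -> str:
    """Serialize retrieved knowledge facts next to the question-choice pair
    so a pre-trained LM can attend over both. Illustrative format only."""
    fact_text = " ".join(f"{h} {r} {t}." for h, r, t in facts)
    return f"facts: {fact_text} question: {question} answer: {choice}"

# Example:
# build_fused_input("Where do fish live?", "water", [("fish", "lives in", "water")])
# -> "facts: fish lives in water. question: Where do fish live? answer: water"
```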
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters using 20,000 subgraphs paired with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
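As a rough picture of the subgraph-aware self-attention idea in ReasoningLM above (a Transformer imitating GNN message passing), the hypothetical sketch below builds an additive attention mask so that subgraph node tokens attend only to their graph neighbors. The layout and helper name are assumptions, not the paper's code.

```python
import torch

def subgraph_attention_mask(adj: torch.Tensor, num_text_tokens: int) -> torch.Tensor:
    """Additive attention mask (0 = attend, -inf = blocked) for a sequence of
    text tokens followed by serialized subgraph node tokens. Text attends
    everywhere; each node attends to the text, itself, and its neighbors,
    so one attention layer mimics one GNN message-passing step.
    adj: (n_nodes, n_nodes) boolean adjacency of the question subgraph."""
    n_nodes = adj.size(0)
    total = num_text_tokens + n_nodes
    allowed = torch.zeros(total, total, dtype=torch.bool)
    allowed[:num_text_tokens, :] = True                 # text -> everything
    allowed[num_text_tokens:, :num_text_tokens] = True  # node -> text
    allowed[num_text_tokens:, num_text_tokens:] = (
        adj | torch.eye(n_nodes, dtype=torch.bool)      # node -> self and neighbors
    )
    mask = torch.zeros(total, total)
    mask[~allowed] = float("-inf")
    return mask
```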
- Relation-Aware Question Answering for Heterogeneous Knowledge Graphs [37.38138785470231]
Existing retrieval-based approaches solve this task by concentrating on the specific relation at different hops.
We claim they fail to utilize information from head-tail entities and the semantic connection between relations to enhance the current relation representation.
Our approach achieves a significant performance gain over the prior state-of-the-art.
arXiv Detail & Related papers (2023-12-19T08:01:48Z)
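The entry above argues that a relation's representation should be enhanced with its head-tail entities; a hypothetical sketch of such an enhancement step follows. The class name and the concatenate-then-project choice are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RelationEnhancer(nn.Module):
    """Enhance a relation embedding with the head and tail entity embeddings
    it connects at the current hop, as the entry above advocates.
    Concatenation plus a linear projection is an illustrative choice."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(3 * dim, dim)

    def forward(self, rel: torch.Tensor, head: torch.Tensor, tail: torch.Tensor) -> torch.Tensor:
        # rel, head, tail: (batch, dim) embeddings
        return torch.tanh(self.proj(torch.cat([rel, head, tail], dim=-1)))
```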
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graphs (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
- V-Coder: Adaptive AutoEncoder for Semantic Disclosure in Knowledge Graphs [4.493174773769076]
We propose a new adaptive AutoEncoder, called V-Coder, to identify relations inherently connecting entities from different domains.
The evaluation on real-world datasets shows that the V-Coder is able to recover links from corrupted data.
arXiv Detail & Related papers (2022-07-22T14:51:46Z)
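V-Coder is summarized above only as an adaptive autoencoder that recovers links from corrupted data; the sketch below shows the generic denoising-autoencoder shape such a model could take over an entity's adjacency row. It is a stand-in under that assumption, not V-Coder's actual architecture.

```python
import torch
import torch.nn as nn

class DenoisingLinkAutoEncoder(nn.Module):
    """Generic denoising autoencoder over adjacency rows: corrupt an entity's
    link vector, encode it into a latent code, and reconstruct the clean
    links. Illustrative of link recovery, not V-Coder's design."""

    def __init__(self, num_entities: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_entities, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, num_entities)

    def forward(self, links: torch.Tensor, drop: float = 0.3) -> torch.Tensor:
        # links: (batch, num_entities) binary adjacency rows
        corrupted = links * (torch.rand_like(links) > drop)  # drop random links
        return torch.sigmoid(self.decoder(self.encoder(corrupted)))
```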
- MGA-VQA: Multi-Granularity Alignment for Visual Question Answering [75.55108621064726]
Learning to answer visual questions is challenging since the multi-modal inputs lie in two different feature spaces.
We propose a Multi-Granularity Alignment architecture for the Visual Question Answering task (MGA-VQA).
Our model splits alignment into different levels to achieve learning better correlations without needing additional data and annotations.
arXiv Detail & Related papers (2022-01-25T22:30:54Z)
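Multi-granularity alignment, as named in the MGA-VQA entry above, can be read as matching vision and language at more than one level; the toy sketch below scores word-to-region and sentence-to-image similarities and sums them. All of it is an assumed simplification, not MGA-VQA's architecture.

```python
import torch

def multi_granularity_score(word_feats: torch.Tensor, region_feats: torch.Tensor) -> torch.Tensor:
    """Toy two-level alignment: a fine-grained word-region score (best region
    per word, averaged) plus a coarse sentence-image score on mean-pooled
    features. word_feats: (n_words, dim); region_feats: (n_regions, dim)."""
    sim = word_feats @ region_feats.T                   # (n_words, n_regions)
    fine = sim.max(dim=1).values.mean()                 # word-level alignment
    coarse = word_feats.mean(0) @ region_feats.mean(0)  # sentence-level alignment
    return fine + coarse
```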
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [159.9645181522436]
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
arXiv Detail & Related papers (2022-01-21T19:00:05Z)
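GreaseLM's per-layer modality interaction can be pictured as a special LM token and a special graph node exchanging information through a shared MLP at every layer; the toy sketch below assumes that structure and is not the released implementation.

```python
import torch
import torch.nn as nn

class ModalityInteraction(nn.Module):
    """Toy version of a per-layer fusion step in the spirit of GreaseLM:
    an interaction token from the LM stream and an interaction node from
    the GNN stream are mixed through a shared MLP, then written back.
    Hypothetical sketch, not the authors' code."""

    def __init__(self, dim: int):
        super().__init__()
        self.mix = nn.Sequential(nn.Linear(2 * dim, 2 * dim), nn.GELU(),
                                 nn.Linear(2 * dim, 2 * dim))

    def forward(self, text_states: torch.Tensor, node_states: torch.Tensor):
        # text_states: (batch, seq, dim); node_states: (batch, nodes, dim)
        # Position 0 in each stream is the designated interaction unit.
        joint = torch.cat([text_states[:, 0], node_states[:, 0]], dim=-1)
        t_new, n_new = self.mix(joint).chunk(2, dim=-1)
        text_states = torch.cat([t_new.unsqueeze(1), text_states[:, 1:]], dim=1)
        node_states = torch.cat([n_new.unsqueeze(1), node_states[:, 1:]], dim=1)
        return text_states, node_states
```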
- Relation-Guided Pre-Training for Open-Domain Question Answering [67.86958978322188]
We propose a Relation-Guided Pre-Training (RGPT-QA) framework to solve complex open-domain questions.
We show that RGPT-QA achieves 2.2%, 2.4%, and 6.3% absolute improvements in Exact Match accuracy on Natural Questions, TriviaQA, and WebQuestions, respectively.
arXiv Detail & Related papers (2021-09-21T17:59:31Z)
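Relation-guided pre-training suggests synthesizing QA supervision from relation triples; the sketch below shows one common recipe (template-filling a triple into a question-answer pair). The templates and function are assumptions for illustration, not RGPT-QA's actual generation procedure.

```python
from typing import Dict, Tuple

# Hypothetical per-relation question templates; "{h}" is the head entity.
TEMPLATES: Dict[str, str] = {
    "birthplace": "Where was {h} born?",
    "author": "Who wrote {h}?",
}

def triple_to_qa(head: str, relation: str, tail: str) -> Tuple[str, str]:
    """Turn a KG triple into a synthetic (question, answer) pair for
    relation-guided pre-training. Illustrative recipe only."""
    question = TEMPLATES[relation].format(h=head)
    return question, tail

# triple_to_qa("Mark Twain", "birthplace", "Florida, Missouri")
# -> ("Where was Mark Twain born?", "Florida, Missouri")
```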
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
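One ingredient of QA-GNN is scoring KG nodes by their relevance to the QA context before reasoning over a joint graph; the sketch below approximates that with cosine similarity of LM embeddings. The `lm_encode` hook and the cosine recipe are assumptions for illustration, not the paper's exact scoring procedure.

```python
from typing import Callable, List

import torch

def score_kg_nodes(
    lm_encode: Callable[[str], torch.Tensor],
    question: str,
    node_names: List[str],
) -> torch.Tensor:
    """Rate each KG node by similarity to the QA context. `lm_encode` is any
    text -> (dim,) embedding function (e.g., a pooled LM hidden state).
    Hypothetical stand-in for QA-GNN's relevance scoring."""
    q_vec = lm_encode(question)
    node_vecs = torch.stack([lm_encode(name) for name in node_names])
    q_vec = q_vec / q_vec.norm()
    node_vecs = node_vecs / node_vecs.norm(dim=-1, keepdim=True)
    return node_vecs @ q_vec  # (num_nodes,) relevance scores in [-1, 1]
```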
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.