Exploiting Abstract Meaning Representation for Open-Domain Question
Answering
- URL: http://arxiv.org/abs/2305.17050v1
- Date: Fri, 26 May 2023 16:00:16 GMT
- Title: Exploiting Abstract Meaning Representation for Open-Domain Question
Answering
- Authors: Cunxiang Wang, Zhikun Xu, Qipeng Guo, Xiangkun Hu, Xuefeng Bai, Zheng
Zhang, Yue Zhang
- Abstract summary: We utilize Abstract Meaning Representation (AMR) graphs to assist the model in understanding complex semantic information.
Results from Natural Questions (NQ) and TriviaQA (TQ) demonstrate that our GST method can significantly improve performance.
- Score: 18.027908933572203
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Open-Domain Question Answering (ODQA) task involves retrieving and
subsequently generating answers from fine-grained relevant passages within a
database. Current systems leverage Pretrained Language Models (PLMs) to model
the relationship between questions and passages. However, the diversity in
surface form expressions can hinder the model's ability to capture accurate
correlations, especially within complex contexts. Therefore, we utilize
Abstract Meaning Representation (AMR) graphs to assist the model in
understanding complex semantic information. We introduce a method known as
Graph-as-Token (GST) to incorporate AMRs into PLMs. Results from Natural
Questions (NQ) and TriviaQA (TQ) demonstrate that our GST method can
significantly improve performance, resulting in up to 2.44/3.17 Exact Match
score improvements on NQ/TQ respectively. Furthermore, our method enhances
robustness and outperforms alternative Graph Neural Network (GNN) methods for
integrating AMRs. To the best of our knowledge, we are the first to employ
semantic graphs in ODQA.
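The Graph-as-Token idea above, treating AMR graph elements as extra input tokens for the PLM, can be sketched as follows. This is a minimal illustration with assumed names and a simplified embedding scheme, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 8
num_labels = 50  # size of a hypothetical AMR node/edge label vocabulary

# Embedding table and projection for AMR graph-element labels (assumed)
label_emb = rng.normal(size=(num_labels, hidden))
proj = rng.normal(size=(hidden, hidden))

def graph_as_token(text_embeds, graph_label_ids):
    # Look up and project graph-element embeddings, then append them to
    # the text-token sequence so self-attention treats them as tokens.
    graph_embeds = label_emb[graph_label_ids] @ proj
    return np.concatenate([text_embeds, graph_embeds], axis=0)

text = rng.normal(size=(10, hidden))      # 10 text-token embeddings
graph_ids = np.array([3, 7, 12, 30, 44])  # 5 AMR node/edge labels
seq = graph_as_token(text, graph_ids)
print(seq.shape)  # (15, 8)
```

Because the graph elements enter the same sequence as the text tokens, the PLM's self-attention can relate question and passage content through shared AMR concepts without any architectural change.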
Related papers
- Graph-Augmented Relation Extraction Model with LLMs-Generated Support Document [7.0421339410165045]
This study introduces a novel approach to sentence-level relation extraction (RE).
This study introduces a novel approach to sentence-level relation extraction (RE).
It integrates Graph Neural Networks (GNNs) with Large Language Models (LLMs) to generate contextually enriched support documents.
Our experiments, conducted on the CrossRE dataset, demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-30T20:48:34Z)
- SPARQL Generation: an analysis on fine-tuning OpenLLaMA for Question Answering over a Life Science Knowledge Graph [0.0]
We evaluate strategies for fine-tuning the OpenLlama LLM for question answering over life science knowledge graphs.
We propose an end-to-end data augmentation approach for extending a set of existing queries over a given knowledge graph.
We also investigate the role of semantic "clues" in the queries, such as meaningful variable names and inline comments.
arXiv Detail & Related papers (2024-02-07T07:24:01Z)
- FusionMind -- Improving question and answering with external context fusion [0.0]
We studied the impact of contextual knowledge on the question-answering (QA) objective using pre-trained language models (LMs) and knowledge graphs (KGs).
We found that incorporating knowledge-fact context led to a significant improvement in performance.
This suggests that integrating contextual knowledge facts is especially impactful for enhancing question-answering performance.
arXiv Detail & Related papers (2023-12-31T03:51:31Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters with 20,000 subgraphs with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
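The subgraph-aware self-attention mechanism described above can be illustrated with a small sketch: masking standard self-attention with the subgraph's adjacency, so each node token attends only to its neighbours, imitates one round of GNN message passing. This is a hypothetical simplification, not the ReasoningLM implementation:

```python
import numpy as np

# 4-node subgraph adjacency (True = edge, including self-loops)
adj = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=bool)

def subgraph_attention(x, adj):
    # Scaled dot-product attention with non-edges masked to -inf,
    # so attention weights outside the subgraph are exactly zero.
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores = np.where(adj, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))       # node-token hidden states
out, weights = subgraph_attention(x, adj)
# Node 0 receives no contribution from non-neighbours 2 and 3
```

Restricting the mask this way lets a standard Transformer layer perform structured reasoning over the subgraph without adding a separate GNN module.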
arXiv Detail & Related papers (2023-12-30T07:18:54Z) - UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question
Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z) - VQA-GNN: Reasoning with Multimodal Knowledge via Graph Neural Networks
for Visual Question Answering [79.22069768972207]
We propose VQA-GNN, a new VQA model that performs bidirectional fusion between unstructured and structured multimodal knowledge to obtain unified knowledge representations.
Specifically, we inter-connect the scene graph and the concept graph through a super node that represents the QA context.
On two challenging VQA tasks, our method outperforms strong baseline VQA methods by 3.2% on VCR and 4.6% on GQA, suggesting its strength in performing concept-level reasoning.
arXiv Detail & Related papers (2022-05-23T17:55:34Z) - Question-Answer Sentence Graph for Joint Modeling Answer Selection [122.29142965960138]
We train and integrate state-of-the-art (SOTA) models for computing scores between question-question, question-answer, and answer-answer pairs.
Online inference is then performed to solve the AS2 task on unseen queries.
arXiv Detail & Related papers (2022-02-16T05:59:53Z) - Dynamic Semantic Graph Construction and Reasoning for Explainable
Multi-hop Science Question Answering [50.546622625151926]
We propose a new framework to exploit more valid facts while obtaining explainability for multi-hop QA.
Our framework contains three new ideas: (a) AMR-SG, an AMR-based Semantic Graph constructed from candidate fact AMRs to uncover any-hop relations among the question, the answer, and multiple facts; (b) a novel path-based fact-analytics approach that exploits AMR-SG to extract active facts from a large fact pool to answer questions; and (c) fact-level relation modeling leveraging a graph convolutional network (GCN) to guide the reasoning process.
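The path-based fact analytics over AMR-SG (idea (b) above) can be sketched on a toy concept-sharing fact graph: facts that share an AMR concept are connected, and only facts lying on a path between question concepts and answer concepts are kept as "active". Data and helper names here are hypothetical, not the paper's code:

```python
from collections import deque

# Toy AMR-SG: each fact contributes a set of AMR concepts; facts
# sharing a concept are implicitly connected in the graph.
fact_concepts = {
    "f1": {"photosynthesis", "plant"},
    "f2": {"plant", "chlorophyll"},
    "f3": {"chlorophyll", "green"},
    "f4": {"moon", "tide"},  # distractor fact from the fact pool
}

def neighbors(f):
    # Facts connected to f through at least one shared concept
    return [g for g in fact_concepts if g != f and fact_concepts[f] & fact_concepts[g]]

def reachable(start_facts):
    # BFS over the fact graph from a set of seed facts
    seen, queue = set(start_facts), deque(start_facts)
    while queue:
        f = queue.popleft()
        for g in neighbors(f):
            if g not in seen:
                seen.add(g)
                queue.append(g)
    return seen

def active_facts(question_concepts, answer_concepts):
    # Keep facts reachable from both the question side and the answer side,
    # i.e. facts that can lie on a question-to-answer path.
    q_facts = {f for f, c in fact_concepts.items() if c & question_concepts}
    a_facts = {f for f, c in fact_concepts.items() if c & answer_concepts}
    return reachable(q_facts) & reachable(a_facts)

print(sorted(active_facts({"photosynthesis"}, {"green"})))  # ['f1', 'f2', 'f3']
```

The distractor fact f4 shares no concept with the question-answer path and is filtered out, which is the pruning effect the path-based analysis is designed to achieve.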
arXiv Detail & Related papers (2021-05-25T09:14:55Z)
- Knowledge Graph Question Answering using Graph-Pattern Isomorphism [0.0]
TeBaQA learns to answer questions based on graph isomorphisms from basic graph patterns of SPARQL queries.
TeBaQA achieves state-of-the-art performance on QALD-8 and delivers comparable results on QALD-9 and LC-QuAD v1.
arXiv Detail & Related papers (2021-03-11T16:03:24Z)
- Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation [56.73834525802723]
We propose Lightweight Dynamic Graph Convolutional Networks (LDGCNs).
LDGCNs capture richer non-local interactions by synthesizing higher order information from the input graphs.
We develop two novel parameter-saving strategies, based on group graph convolutions and weight-tied convolutions, to reduce memory usage and model complexity.
arXiv Detail & Related papers (2020-10-09T06:03:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.