Question Calibration and Multi-Hop Modeling for Temporal Question Answering
- URL: http://arxiv.org/abs/2402.13188v1
- Date: Tue, 20 Feb 2024 17:56:24 GMT
- Title: Question Calibration and Multi-Hop Modeling for Temporal Question Answering
- Authors: Chao Xue, Di Liang, Pengfei Wang, Jing Zhang
- Abstract summary: We propose a novel Question Calibration and Multi-Hop Modeling (QC-MHM) approach to solve complex multi-hop temporal question answering.
Specifically, we first calibrate the question representation by fusing the question and the time-constrained concepts in the KG.
Then we construct a GNN layer to perform multi-hop message passing. Finally, the question representation is combined with the embedding output by the GNN to generate the final prediction.
- Score: 16.668509683238398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many models that leverage knowledge graphs (KGs) have recently demonstrated
remarkable success in question answering (QA) tasks. In the real world, many
facts contained in KGs are time-constrained thus temporal KGQA has received
increasing attention. Despite the fruitful efforts of previous models in
temporal KGQA, they still have several limitations. (I) They adopt pre-trained
language models (PLMs) to obtain question representations, while PLMs tend to
focus on entity information and ignore the entity transfer caused by temporal
constraints, and thus fail to learn temporally specific representations of
entities. (II) They neither emphasize the graph structure between entities nor
explicitly model multi-hop relationships in the graph, which makes it
difficult to answer complex multi-hop questions. To alleviate these
problems, we propose a novel Question Calibration and Multi-Hop Modeling
(QC-MHM) approach. Specifically, we first calibrate the question representation
by fusing the question and the time-constrained concepts in the KG. Then, we
construct a GNN layer to perform multi-hop message passing. Finally, the
question representation is combined with the embedding output by the GNN to
generate the final prediction. Empirical results verify that the proposed model
achieves better performance than state-of-the-art models on the benchmark
dataset. Notably, on the complex questions of the CronQuestions dataset, the
Hits@1 and Hits@10 results of QC-MHM improve by 5.1% and 1.2% (absolute) over
the best-performing baseline. Moreover, QC-MHM can generate interpretable
and trustworthy predictions.
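The abstract sketches a three-stage pipeline: calibrate the PLM question representation against time-constrained KG concepts, run multi-hop GNN message passing over the graph, and combine the two for answer scoring. Below is a minimal illustrative sketch of such a pipeline; the attention-based fusion, mean-aggregation GNN layer, dot-product scoring, and all names and dimensions are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of the calibrate -> multi-hop GNN -> score pipeline described
# in the abstract. All design choices here (attention fusion, mean-aggregation
# message passing, dot-product scoring) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QCMHMSketch(nn.Module):
    def __init__(self, dim: int, num_entities: int, num_heads: int = 4):
        super().__init__()
        # Calibration: attend from the PLM question vector to the embeddings
        # of time-constrained KG concepts, then fuse the result back in.
        # (dim must be divisible by num_heads.)
        self.concept_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)
        self.msg = nn.Linear(dim, dim)          # one GNN message transform
        self.entity_emb = nn.Embedding(num_entities, dim)

    def calibrate(self, q, concepts):
        # q: (B, D) PLM question embedding; concepts: (B, K, D) embeddings of
        # the time-constrained KG concepts linked to the question.
        attended, _ = self.concept_attn(q.unsqueeze(1), concepts, concepts)
        return self.fuse(torch.cat([q, attended.squeeze(1)], dim=-1))

    def gnn_hop(self, h, adj):
        # h: (N, D) entity states; adj: (N, N) 0/1 adjacency matrix.
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.msg(adj @ h / deg) + h)  # mean-aggregate + residual

    def forward(self, q, concepts, adj, hops: int = 2):
        q_cal = self.calibrate(q, concepts)     # calibrated question, (B, D)
        h = self.entity_emb.weight              # entity states, (N, D)
        for _ in range(hops):                   # multi-hop message passing
            h = self.gnn_hop(h, adj)
        return q_cal @ h.t()                    # answer logits over entities
```

Ranking every entity against the calibrated question is the usual way KGQA systems produce candidate answers; the Hits@1 and Hits@10 numbers quoted above then measure how often the gold answer lands in the top 1 or top 10 of such a ranking.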
Related papers
- Self-Improvement Programming for Temporal Knowledge Graph Question Answering [31.33908040172437]
Temporal Knowledge Graph Question Answering (TKGQA) aims to answer questions with temporal intent over Temporal Knowledge Graphs (TKGs).
Existing end-to-end methods implicitly model the time constraints by learning time-aware embeddings of questions and candidate answers.
We introduce a novel Self-Improvement Programming method for TKGQA (Prog-TQA).
arXiv Detail & Related papers (2024-04-02T08:14:27Z)
- Multi-hop Question Answering under Temporal Knowledge Editing [9.356343796845662]
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models.
Existing models for MQA under KE exhibit poor performance when dealing with questions containing explicit temporal contexts.
We propose TEMPoral knowLEdge augmented Multi-hop Question Answering (TEMPLE-MQA) to address this limitation.
arXiv Detail & Related papers (2024-03-30T23:22:51Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters with 20,000 subgraphs with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- Event Extraction as Question Generation and Answering [72.04433206754489]
Recent work on Event Extraction has reframed the task as Question Answering (QA).
We propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates.
Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.
arXiv Detail & Related papers (2023-07-10T01:46:15Z)
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
- Time-aware Multiway Adaptive Fusion Network for Temporal Knowledge Graph Question Answering [10.170042914522778]
We propose a novel Time-aware Multiway Adaptive (TMA) fusion network.
For each given question, TMA first extracts the relevant concepts from the KG, and then feeds them into a multiway adaptive module.
The resulting representation can be combined with the pre-trained KG embeddings to generate the final prediction.
arXiv Detail & Related papers (2023-02-24T09:29:40Z)
- Realistic Conversational Question Answering with Answer Selection based on Calibrated Confidence and Uncertainty Measurement [54.55643652781891]
Conversational Question Answering (ConvQA) models aim to answer a question using its relevant paragraph and the question-answer pairs from previous turns of the conversation.
We propose to filter out inaccurate answers in the conversation history based on their estimated confidences and uncertainties from the ConvQA model.
We validate our model, Answer Selection-based realistic Conversation Question Answering, on two standard ConvQA datasets.
arXiv Detail & Related papers (2023-02-10T09:42:07Z)
- TwiRGCN: Temporally Weighted Graph Convolution for Question Answering over Temporal Knowledge Graphs [35.50055476282997]
We show how to generalize relational graph convolutional networks (RGCN) for temporal question answering (QA).
We propose a novel, intuitive and interpretable scheme to modulate the messages passed through a KG edge during convolution (a minimal sketch of this idea appears after this list).
We evaluate the resulting system, which we call TwiRGCN, on TimeQuestions, a recently released, challenging dataset for complex temporal QA.
arXiv Detail & Related papers (2022-10-12T15:03:49Z)
- Complex Temporal Question Answering on Knowledge Graphs [22.996399822102575]
This work presents EXAQT, the first end-to-end system for answering complex temporal questions.
It answers natural language questions over knowledge graphs (KGs) in two stages, one geared towards high recall, the other towards precision at top ranks.
We evaluate EXAQT on TimeQuestions, a large dataset of 16k temporal questions compiled from a variety of general purpose KG-QA benchmarks.
arXiv Detail & Related papers (2021-09-18T13:41:43Z)
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
- Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
arXiv Detail & Related papers (2020-04-24T17:57:45Z)
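The TwiRGCN entry above describes modulating the message passed along each KG edge during convolution according to the edge's temporal relevance to the question. Below is a minimal sketch of that general idea; the exponential time gate, function names, and tensor layout are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of temporally weighted message passing in the spirit of the
# TwiRGCN entry above: each edge's message is scaled by how relevant the
# edge's validity interval is to the question's time. The gate below is an
# illustrative assumption, not the paper's exact scheme.
import torch
import torch.nn as nn

class TemporalEdgeConv(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    @staticmethod
    def time_gate(q_time, starts, ends, tau: float = 1.0):
        # Weight ~1 when q_time falls inside [start, end], decaying outside.
        dist = torch.clamp(starts - q_time, min=0) + torch.clamp(q_time - ends, min=0)
        return torch.exp(-dist / tau)  # (E,) per-edge weights

    def forward(self, h, edge_index, starts, ends, q_time):
        # h: (N, D) node states; edge_index: (2, E) source/target node ids;
        # starts/ends: (E,) edge time intervals; q_time: scalar question time.
        src, dst = edge_index
        w = self.time_gate(q_time, starts, ends)   # temporal edge weights
        msg = self.lin(h[src]) * w.unsqueeze(-1)   # gated edge messages
        out = torch.zeros_like(h)
        out.index_add_(0, dst, msg)                # sum messages per target
        return torch.relu(out + h)                 # residual update
```

The gate smoothly down-weights edges whose validity interval is far from the question's time, so multi-hop paths through temporally relevant facts dominate the aggregation.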