Time-aware Multiway Adaptive Fusion Network for Temporal Knowledge Graph
Question Answering
- URL: http://arxiv.org/abs/2302.12529v1
- Date: Fri, 24 Feb 2023 09:29:40 GMT
- Title: Time-aware Multiway Adaptive Fusion Network for Temporal Knowledge Graph
Question Answering
- Authors: Yonghao Liu and Di Liang and Fang Fang and Sirui Wang and Wei Wu and
Rui Jiang
- Abstract summary: We propose a novel \textbf{T}ime-aware \textbf{M}ultiway \textbf{A}daptive (\textbf{TMA}) fusion network.
For each given question, TMA first extracts the relevant concepts from the KG, and then feeds them into a multiway adaptive module.
This representation can be incorporated with the pre-trained KG embedding to generate the final prediction.
- Score: 10.170042914522778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KGs) have received increasing attention due to their wide
applications in natural language processing. However, their use for temporal
question answering (QA) has not been well explored. Most existing methods
are built on pre-trained language models, which may not be capable
of learning \emph{temporal-specific} representations of entities for the
temporal KGQA task. To alleviate this problem, we propose a novel
\textbf{T}ime-aware \textbf{M}ultiway \textbf{A}daptive (\textbf{TMA}) fusion
network, inspired by the step-by-step reasoning behavior of humans. For each
given question, TMA first extracts the relevant concepts from the KG, and then
feeds them into a multiway adaptive module to produce a
\emph{temporal-specific} representation of the question. This representation
can be incorporated with the pre-trained KG embedding to generate the final
prediction. Empirical results verify that the proposed model achieves better
performance than state-of-the-art models on the benchmark dataset. Notably,
the Hits@1 and Hits@10 results of TMA on the CronQuestions dataset's complex
questions improve by 24\% and 10\% (absolute) over the
best-performing baseline. Furthermore, we also show that TMA's
adaptive fusion mechanism can provide interpretability, since the
proportion of information each source contributes to the question
representation can be analyzed directly.
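The abstract describes the pipeline only at a high level. As a rough illustration, the sketch below shows one way a multiway adaptive fusion step and the final scoring against pre-trained KG embeddings could be wired together in PyTorch; the gate design, module names, and dimensions are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a TMA-style adaptive fusion step (not the authors'
# code). Assumes a pooled question embedding plus pooled entity/time concept
# embeddings extracted from the KG; the gates and dimensions are illustrative.
import torch
import torch.nn as nn


class MultiwayAdaptiveFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # One gate per information source (entity concepts, time concepts).
        self.entity_gate = nn.Linear(2 * dim, dim)
        self.time_gate = nn.Linear(2 * dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, question: torch.Tensor,
                entity_concepts: torch.Tensor,
                time_concepts: torch.Tensor) -> torch.Tensor:
        # question: (batch, dim); each concept tensor: (batch, dim).
        g_e = torch.sigmoid(self.entity_gate(
            torch.cat([question, entity_concepts], dim=-1)))
        g_t = torch.sigmoid(self.time_gate(
            torch.cat([question, time_concepts], dim=-1)))
        # Adaptive mix: each source contributes in proportion to its gate.
        fused = question + g_e * entity_concepts + g_t * time_concepts
        return self.out(fused)


def score_answers(fused_question: torch.Tensor,
                  kg_embeddings: torch.Tensor) -> torch.Tensor:
    # Combine the temporal-specific question representation with pre-trained
    # KG entity/timestamp embeddings via dot-product scoring over candidates.
    return fused_question @ kg_embeddings.t()  # (batch, num_candidates)
```

Because each source's contribution passes through an explicit gate, the relative proportion of entity versus time information in the fused representation can be read off directly, which matches the kind of proportion analysis the abstract's interpretability claim refers to.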
Related papers
- Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation [65.16137964758612]
We explore the use of long-context capabilities in large language models to create synthetic reading comprehension data from entire books.
Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text.
arXiv Detail & Related papers (2024-05-31T20:15:10Z) - Question Calibration and Multi-Hop Modeling for Temporal Question
Answering [16.668509683238398]
We propose a novel Question Calibration and Multi-Hop Modeling (QC-MHM) approach to solve complex multi-hop temporal question answering.
Specifically, we first calibrate the question representation by fusing the question with the time-constrained concepts in the KG.
We then construct a GNN layer to perform multi-hop message passing. Finally, the question representation is combined with the embedding output by the GNN to generate the final prediction.
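As a rough illustration of the multi-hop message-passing step described above, the sketch below stacks generic GNN hops over a normalized adjacency matrix; it is an assumption-laden stand-in, not the QC-MHM authors' code.

```python
# Illustrative multi-hop message passing. Assumes a (num_nodes, num_nodes)
# normalized adjacency matrix and initial node features; each hop aggregates
# neighbor information, so K hops propagate information K edges away.
import torch
import torch.nn as nn


class MultiHopGNN(nn.Module):
    def __init__(self, dim: int, num_hops: int = 3):
        super().__init__()
        self.transforms = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_hops)])

    def forward(self, node_feats: torch.Tensor,
                adj: torch.Tensor) -> torch.Tensor:
        h = node_feats  # (num_nodes, dim)
        for linear in self.transforms:
            # One hop: aggregate neighbor features, then transform.
            h = torch.relu(linear(adj @ h))
        return h  # each node now carries num_hops-hop neighborhood context
```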
arXiv Detail & Related papers (2024-02-20T17:56:24Z) - Fusing Temporal Graphs into Transformers for Time-Sensitive Question
Answering [11.810810214824183]
Answering time-sensitive questions from long documents requires temporal reasoning over the times in questions and documents.
We apply existing temporal information extraction systems to construct temporal graphs of events, times, and temporal relations in questions and documents.
Experimental results show that our proposed approach for fusing temporal graphs into input text substantially enhances the temporal reasoning capabilities of Transformer models.
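One plausible reading of "fusing temporal graphs into input text" is to linearize the extracted graph into marker strings appended to the document, as sketched below; the marker format and function name are invented for illustration and may differ from the paper's scheme.

```python
# Hypothetical linearization of a temporal graph into marker text that a
# Transformer can consume alongside the document. Edge format (head,
# relation, tail) and the "<...>" markers are illustrative assumptions.
def fuse_temporal_graph(document, edges):
    markers = [f"<{head} {relation} {tail}>" for head, relation, tail in edges]
    return document + " " + " ".join(markers)


text = fuse_temporal_graph(
    "The treaty was signed after the war ended.",
    [("signed", "AFTER", "ended"), ("ended", "INCLUDED_IN", "1945")],
)
# -> original text followed by "<signed AFTER ended> <ended INCLUDED_IN 1945>"
```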
arXiv Detail & Related papers (2023-10-30T06:12:50Z) - RegaVAE: A Retrieval-Augmented Gaussian Mixture Variational Auto-Encoder
for Language Modeling [79.56442336234221]
We introduce RegaVAE, a retrieval-augmented language model built upon the variational auto-encoder (VAE).
It encodes the text corpus into a latent space, capturing current and future information from both source and target text.
Experimental results on various datasets demonstrate significant improvements in text generation quality and hallucination removal.
arXiv Detail & Related papers (2023-10-16T16:42:01Z) - Jaeger: A Concatenation-Based Multi-Transformer VQA Model [0.13654846342364307]
Document-based Visual Question Answering poses a challenging task spanning linguistic sense disambiguation and fine-grained multimodal retrieval.
We propose Jaeger, a concatenation-based multi-transformer VQA model.
Our approach has the potential to amplify the performance of the underlying models through concatenation.
arXiv Detail & Related papers (2023-10-11T00:14:40Z) - Deep Bidirectional Language-Knowledge Graph Pretraining [159.9645181522436]
DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
arXiv Detail & Related papers (2022-10-17T18:02:52Z) - Complex Temporal Question Answering on Knowledge Graphs [22.996399822102575]
This work presents EXAQT, the first end-to-end system for answering complex temporal questions.
It answers natural language questions over knowledge graphs (KGs) in two stages, one geared towards high recall, the other towards precision at top ranks.
We evaluate EXAQT on TimeQuestions, a large dataset of 16k temporal questions compiled from a variety of general purpose KG-QA benchmarks.
arXiv Detail & Related papers (2021-09-18T13:41:43Z) - TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and
Textual Content in Finance [71.76018597965378]
We build a new large-scale Question Answering dataset containing both Tabular And Textual data, named TAT-QA.
We propose a novel QA model termed TAGOP, which is capable of reasoning over both tables and text.
arXiv Detail & Related papers (2021-05-17T06:12:06Z) - Neural Retrieval for Question Answering with Cross-Attention Supervised
Data Augmentation [14.669454236593447]
Independently computing embeddings for questions and answers results in late fusion of information related to matching questions to their answers.
We present a supervised data mining method using an accurate early fusion model to improve the training of an efficient late fusion retrieval model.
arXiv Detail & Related papers (2020-09-29T07:02:19Z) - Template-Based Question Generation from Retrieved Sentences for Improved
Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
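A toy version of the template idea, assuming a retrieved sentence and a known answer span: the single wh-template below is a simplified stand-in for the templates used in the paper.

```python
# Template-based pseudo-question generation: replace the answer span in a
# retrieved sentence with a wh-word. A single simplified template is shown;
# the paper's actual templates may differ.
def template_question(sentence, answer, wh_word="What"):
    assert answer in sentence, "answer span must occur in the sentence"
    question = sentence.replace(answer, wh_word, 1).rstrip(". ") + "?"
    return question, answer


q, a = template_question("Marie Curie discovered polonium in 1898.", "polonium")
# q == "Marie Curie discovered What in 1898?"  (pseudo-question for QA training)
```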
arXiv Detail & Related papers (2020-04-24T17:57:45Z) - AMR Parsing via Graph-Sequence Iterative Inference [62.85003739964878]
We propose a new end-to-end model that treats AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph.
We show that the answers to these two questions are mutually dependent.
We design a model based on iterative inference that helps achieve better answers in both perspectives, leading to greatly improved parsing accuracy.
arXiv Detail & Related papers (2020-04-12T09:15:21Z)