History Semantic Graph Enhanced Conversational KBQA with Temporal
Information Modeling
- URL: http://arxiv.org/abs/2306.06872v1
- Date: Mon, 12 Jun 2023 05:10:58 GMT
- Title: History Semantic Graph Enhanced Conversational KBQA with Temporal
Information Modeling
- Authors: Hao Sun, Yang Li, Liwei Deng, Bowen Li, Binyuan Hui, Binhua Li, Yunshi
Lan, Yan Zhang, Yongbin Li
- Abstract summary: We propose a History Semantic Graph Enhanced KBQA model (HSGE) that is able to effectively model long-range semantic dependencies in conversation history.
We evaluate HSGE on a widely used benchmark dataset for complex sequential question answering.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Context information modeling is an important task in conversational KBQA.
However, existing methods usually assume the independence of utterances and
model them in isolation. In this paper, we propose a History Semantic Graph
Enhanced KBQA model (HSGE) that is able to effectively model long-range
semantic dependencies in conversation history while maintaining low
computational cost. The framework incorporates a context-aware encoder, which
employs a dynamic memory decay mechanism and models context at different levels
of granularity. We evaluate HSGE on a widely used benchmark dataset for complex
sequential question answering. Experimental results demonstrate that it
outperforms existing baselines on average across all question types.
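The abstract describes a dynamic memory decay mechanism for weighting conversation history. The exact formulation is not given here, so the following is only a minimal illustrative sketch of a generic exponential memory-decay scheme: older turns receive smaller weights, and per-turn embeddings are aggregated into a single history representation. The function names, the fixed decay rate, and the simple weighted-sum aggregation are all assumptions for illustration, not the HSGE implementation.

```python
import math

def decayed_history_weights(num_turns: int, decay_rate: float = 0.5):
    """Weight each past turn by exponential decay in its distance from
    the current turn (a generic scheme; HSGE's mechanism is dynamic and
    may differ). Turn 0 is the oldest, turn num_turns-1 the most recent."""
    raw = [math.exp(-decay_rate * (num_turns - 1 - t)) for t in range(num_turns)]
    total = sum(raw)
    return [w / total for w in raw]  # normalize to a distribution

def aggregate_history(turn_embeddings, decay_rate: float = 0.5):
    """Combine per-turn embedding vectors into one history vector
    using the decay weights (simple weighted sum for illustration)."""
    weights = decayed_history_weights(len(turn_embeddings), decay_rate)
    dim = len(turn_embeddings[0])
    return [sum(w * emb[i] for w, emb in zip(weights, turn_embeddings))
            for i in range(dim)]
```

Under this scheme the most recent turn always receives the largest weight, which captures the intuition that nearby utterances matter most while distant context is down-weighted rather than discarded.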
Related papers
- Michelangelo: Long Context Evaluations Beyond Haystacks via Latent Structure Queries [54.325172923155414]
We introduce Michelangelo: a minimal, synthetic, and unleaked long-context reasoning evaluation for large language models.
This evaluation is derived via a novel, unifying framework for evaluations over arbitrarily long contexts.
arXiv Detail & Related papers (2024-09-19T10:38:01Z)
- A Controlled Study on Long Context Extension and Generalization in LLMs [85.4758128256142]
Broad textual understanding and in-context learning require language models that utilize full document contexts.
Due to the implementation challenges associated with directly training long-context models, many methods have been proposed for extending models to handle long contexts.
We implement a controlled protocol for extension methods with a standardized evaluation, utilizing consistent base models and extension data.
arXiv Detail & Related papers (2024-09-18T17:53:17Z)
- Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation [65.16137964758612]
We explore the use of long-context capabilities in large language models to create synthetic reading comprehension data from entire books.
Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text.
arXiv Detail & Related papers (2024-05-31T20:15:10Z)
- Consistency Training by Synthetic Question Generation for Conversational Question Answering [14.211024633768986]
We augment historical information with synthetic questions to make the reasoning robust to irrelevant history.
This is the first instance of research using question generation as a form of data augmentation to model conversational QA settings.
arXiv Detail & Related papers (2024-04-17T06:49:14Z)
- Evaluating Large Language Models in Semantic Parsing for Conversational Question Answering over Knowledge Graphs [6.869834883252353]
This paper evaluates the performance of large language models that have not been explicitly pre-trained on this task.
Our results demonstrate that large language models are capable of generating graph queries from dialogues.
arXiv Detail & Related papers (2024-01-03T12:28:33Z)
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
- Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general purpose knowledge graphs (KGs) with millions of entities, and thousands of relation-types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z)
- Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering [38.071375112873675]
We propose a question-answer augmented encoder-decoder model and accompanying pretraining strategy.
This yields an end-to-end system that outperforms prior QA retrieval methods on single-hop QA tasks.
arXiv Detail & Related papers (2022-04-10T02:33:00Z)
- SGD-QA: Fast Schema-Guided Dialogue State Tracking for Unseen Services [15.21976869687864]
We propose SGD-QA, a model for schema-guided dialogue state tracking based on a question answering approach.
The proposed multi-pass model shares a single encoder between the domain information and dialogue utterance.
The model improves performance on unseen services by at least 1.6x compared to single-pass baseline models.
arXiv Detail & Related papers (2021-05-17T17:54:32Z)
- Dynamic Hybrid Relation Network for Cross-Domain Context-Dependent Semantic Parsing [52.24507547010127]
Cross-domain context-dependent semantic parsing is a new focus of research.
We present a dynamic graph framework that effectively models contextual utterances, tokens, database schemas, and their complicated interactions as the conversation proceeds.
The proposed framework outperforms all existing models by large margins, achieving new state-of-the-art performance on two large-scale benchmarks.
arXiv Detail & Related papers (2021-01-05T18:11:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.