Knowledge-enriched, Type-constrained and Grammar-guided Question
Generation over Knowledge Bases
- URL: http://arxiv.org/abs/2010.03157v3
- Date: Fri, 23 Oct 2020 03:32:38 GMT
- Title: Knowledge-enriched, Type-constrained and Grammar-guided Question
Generation over Knowledge Bases
- Authors: Sheng Bi and Xiya Cheng and Yuan-Fang Li and Yongzhen Wang and Guilin
Qi
- Abstract summary: Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e. a set of (connected) triples.
Two main challenges still face the current crop of encoder-decoder-based methods, especially on small subgraphs.
We propose an innovative knowledge-enriched, type-constrained and grammar-guided KBQG model, named KTG.
- Score: 20.412744079015475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question generation over knowledge bases (KBQG) aims at generating
natural-language questions about a subgraph, i.e. a set of (connected) triples.
Two main challenges still face the current crop of encoder-decoder-based
methods, especially on small subgraphs: (1) low diversity and poor fluency due
to the limited information contained in the subgraphs, and (2) semantic drift
due to the decoder's oblivion of the semantics of the answer entity. We propose
an innovative knowledge-enriched, type-constrained and grammar-guided KBQG
model, named KTG, to address the above challenges. In our model, the encoder
is equipped with auxiliary information from the KB, and the decoder is
constrained with word types during QG. Specifically, entity domain and
description, as well as relation hierarchy information are considered to
construct question contexts, while a conditional copy mechanism is incorporated
to modulate question semantics according to current word types. Besides, a
novel reward function featuring grammatical similarity is designed to improve
both generative richness and syntactic correctness via reinforcement learning.
Extensive experiments show that our proposed model outperforms existing methods
by a significant margin on two widely-used benchmark datasets SimpleQuestion
and PathQuestion.
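The conditional copy mechanism mentioned in the abstract can be illustrated with a minimal sketch (NumPy, hypothetical shapes, and a scalar gate `p_gen`; KTG conditions this gate on predicted word types, which is not reproduced here):

```python
import numpy as np

def conditional_copy_dist(p_vocab, copy_attn, src_ids, vocab_size, p_gen):
    """Mix a generation distribution with a copy distribution.

    Generic copy-mechanism sketch, not the paper's exact formulation:
    the gate p_gen interpolates between generating a word from the
    vocabulary and copying a token from the source subgraph.
    """
    # Scatter attention mass over source tokens into vocabulary space.
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, src_ids, copy_attn)
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

# Toy example: vocabulary of 6 words, source contains tokens 2 and 4.
p_vocab = np.full(6, 1 / 6)           # uniform generation distribution
copy_attn = np.array([0.7, 0.3])      # attention over the two source tokens
dist = conditional_copy_dist(p_vocab, copy_attn, np.array([2, 4]), 6, p_gen=0.5)
assert abs(dist.sum() - 1.0) < 1e-9   # still a valid distribution
```

Source tokens (indices 2 and 4) end up with extra probability mass, which is how copy mechanisms keep answer-entity words from drifting out of the generated question.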
Related papers
- Word and Phrase Features in Graph Convolutional Network for Automatic Question Classification [0.7405975743268344]
We propose a novel approach leveraging graph convolutional networks (GCNs) to better model the inherent structure of questions.
By representing questions as graphs, our method allows GCNs to learn from the interconnected nature of language more effectively.
Our findings demonstrate that GCNs, augmented with phrase-based features, offer a promising solution for more accurate and context-aware question classification.
arXiv Detail & Related papers (2024-09-04T07:13:30Z)
- ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models [19.85526116658481]
We introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework.
Experimental results show that ChatKBQA achieves new state-of-the-art performance on standard KBQA datasets.
This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs for interpretable and knowledge-required question answering.
arXiv Detail & Related papers (2023-10-13T09:45:14Z)
- Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution [48.86322922826514]
This paper defines a new task of Knowledge-aware Language Model Attribution (KaLMA).
First, we extend attribution source from unstructured texts to Knowledge Graph (KG), whose rich structures benefit both the attribution performance and working scenarios.
Second, we propose a new "Conscious Incompetence" setting considering the incomplete knowledge repository.
Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text citation alignment.
arXiv Detail & Related papers (2023-10-09T11:45:59Z)
- TIARA: Multi-grained Retrieval for Robust Question Answering over Large Knowledge Bases [20.751369684593985]
TIARA outperforms previous SOTA, including those using PLMs or oracle entity annotations, by at least 4.1 and 1.1 F1 points on GrailQA and WebQuestionsSP.
arXiv Detail & Related papers (2022-10-24T02:41:10Z)
- Hierarchical Sketch Induction for Paraphrase Generation [79.87892048285819]
We introduce Hierarchical Refinement Quantized Variational Autoencoders (HRQ-VAE), a method for learning decompositions of dense encodings.
We use HRQ-VAE to encode the syntactic form of an input sentence as a path through the hierarchy, allowing us to more easily predict syntactic sketches at test time.
arXiv Detail & Related papers (2022-03-07T15:28:36Z)
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [159.9645181522436]
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
arXiv Detail & Related papers (2022-01-21T19:00:05Z)
- Keyphrase Extraction with Dynamic Graph Convolutional Networks and Diversified Inference [50.768682650658384]
Keyphrase extraction (KE) aims to summarize a set of phrases that accurately express a concept or a topic covered in a given document.
The recent Sequence-to-Sequence (Seq2Seq) based generative framework is widely used in the KE task, and it has obtained competitive performance on various benchmarks.
In this paper, we propose to adopt the Dynamic Graph Convolutional Networks (DGCN) to solve the above two problems simultaneously.
arXiv Detail & Related papers (2020-10-24T08:11:23Z)
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
- Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation [49.671882751569534]
We develop SynQG, a set of transparent syntactic rules which transform declarative sentences into question-answer pairs.
We utilize PropBank argument descriptions and VerbNet state predicates to incorporate shallow semantic content.
In order to improve syntactic fluency and eliminate grammatically incorrect questions, we employ back-translation over the output of these syntactic rules.
arXiv Detail & Related papers (2020-04-18T19:57:39Z)
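As a rough illustration of what a transparent syntactic rule in the spirit of Syn-QG might look like, here is a toy subject-auxiliary inversion in Python (a real system operates on full parses and semantic roles; `declarative_to_yesno` and its auxiliary list are illustrative assumptions):

```python
def declarative_to_yesno(sentence):
    """Toy rule: turn 'X is Y.' into 'Is X Y?'.

    Only handles a leading subject followed by a known auxiliary verb;
    anything more complex returns None rather than a malformed question.
    """
    aux = {"is", "are", "was", "were", "can", "will", "has", "have"}
    words = sentence.rstrip(".").split()
    # Scan from the second word so a sentence-initial auxiliary is skipped.
    for i, w in enumerate(words[1:], start=1):
        if w.lower() in aux:
            # Front the auxiliary, lower-case the old sentence-initial word.
            q = [w.capitalize(), words[0].lower()] + words[1:i] + words[i + 1:]
            return " ".join(q) + "?"
    return None

print(declarative_to_yesno("The model is trained on PathQuestion."))
# -> Is the model trained on PathQuestion?
```

Rule outputs like this are exactly where Syn-QG's back-translation step earns its keep, filtering out the grammatically awkward questions that rigid templates inevitably produce.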
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.