ClarQ: A large-scale and diverse dataset for Clarification Question Generation
- URL: http://arxiv.org/abs/2006.05986v2
- Date: Thu, 11 Jun 2020 17:18:39 GMT
- Title: ClarQ: A large-scale and diverse dataset for Clarification Question Generation
- Authors: Vaibhav Kumar and Alan W. Black
- Abstract summary: We devise a novel bootstrapping framework that assists in the creation of a diverse, large-scale dataset of clarification questions based on post-comment tuples extracted from Stack Exchange.
We quantitatively demonstrate the utility of the newly created dataset by applying it to the downstream task of question-answering.
We release this dataset in order to foster research into the field of clarification question generation with the larger goal of enhancing dialog and question answering systems.
- Score: 67.1162903046619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question answering and conversational systems are often confronted with
ambiguities that they cannot resolve without clarification. However, limitations
of existing datasets hinder the development of large-scale models capable of
generating and utilising clarification questions. To overcome these limitations,
we devise a novel bootstrapping framework (based on self-supervision) that
assists in the creation of a diverse, large-scale dataset of clarification
questions built from post-comment tuples extracted from Stack Exchange. The
framework utilises a neural-network-based architecture for classifying
clarification questions. It is a two-step method: the first step aims to
increase the classifier's precision, and the second aims to increase its recall.
We quantitatively demonstrate the utility of the newly created dataset by
applying it to the downstream task of question answering. The final dataset,
ClarQ, consists of ~2M examples distributed across 173 domains of Stack
Exchange. We release this dataset to foster research into clarification
question generation, with the larger goal of enhancing dialog and
question-answering systems.
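The two-step bootstrap described in the abstract (a precision-oriented pass followed by a recall-oriented pass) can be sketched as a toy threshold scheme. This is an illustrative sketch only, not the authors' implementation: the names `score` and `bootstrap`, the marker-word set, and the word-overlap scoring rule are all hypothetical stand-ins for the paper's neural classifier.

```python
# Toy sketch of a two-step bootstrapping filter: step 1 uses a strict
# threshold to keep precision high; step 2 relaxes the threshold to raise
# recall. The word-overlap `score` is a hypothetical stand-in for the
# paper's neural clarification-question classifier.

def score(comment, markers):
    # Fraction of marker words that appear in the comment (toy scoring rule).
    words = set(comment.lower().split())
    return len(words & markers) / len(markers)

def bootstrap(comments, markers, strict=0.7, relaxed=0.5):
    # Step 1 (precision): keep only high-confidence clarification questions.
    confident = [c for c in comments if score(c, markers) >= strict]
    # Step 2 (recall): re-score with a relaxed threshold to recover
    # borderline candidates missed by the strict pass.
    recalled = [c for c in comments if score(c, markers) >= relaxed]
    return confident, recalled

comments = [
    "Could you clarify the error?",
    "Nice post, thanks!",
    "Could you elaborate?",
]
markers = {"could", "you", "clarify"}
confident, recalled = bootstrap(comments, markers)
print(confident)  # ['Could you clarify the error?']
print(recalled)   # ['Could you clarify the error?', 'Could you elaborate?']
```

In the paper's setting, each pass would involve retraining the neural classifier on the examples selected so far, rather than merely adjusting a threshold as in this toy.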
Related papers
- Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts [83.57864140378035]
This paper proposes a method to cover longer contexts in Open-Domain Question-Answering tasks.
It leverages a small encoder language model to encode contexts effectively, and the encoded contexts are combined with the original inputs via cross-attention.
After fine-tuning, there is improved performance across two held-in datasets, four held-out datasets, and also in two In Context Learning settings.
arXiv Detail & Related papers (2024-04-02T15:10:11Z)
- A Lightweight Method to Generate Unanswerable Questions in English [18.323248259867356]
We examine a simpler data augmentation method for unanswerable question generation in English.
We perform antonym and entity swaps on answerable questions.
Compared to the prior state-of-the-art, data generated with our training-free and lightweight strategy results in better models.
arXiv Detail & Related papers (2023-10-30T10:14:52Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-model, which simultaneously selects key phrases and generates full answers, and Q-model which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models [55.22048505787125]
This paper contributes a comprehensive dataset, called UNK-VQA.
We first augment the existing data via deliberate perturbations on either the image or question.
We then extensively evaluate the zero- and few-shot performance of several emerging multi-modal large models.
arXiv Detail & Related papers (2023-10-17T02:38:09Z)
- ADMUS: A Progressive Question Answering Framework Adaptable to Multiple Knowledge Sources [9.484792817869671]
We present ADMUS, a progressive knowledge base question answering framework designed to accommodate a wide variety of datasets.
Our framework supports the seamless integration of new datasets with minimal effort, only requiring creating a dataset-related micro-service at a negligible cost.
arXiv Detail & Related papers (2023-08-09T08:46:39Z)
- Controllable Open-ended Question Generation with A New Question Type Ontology [6.017006996402699]
We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences.
We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words.
We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation.
arXiv Detail & Related papers (2021-07-01T00:02:03Z)
- Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space [94.8320535537798]
We propose Controllable Rewriting based Question Data Augmentation (CRQDA) for machine reading comprehension (MRC), question generation, and question-answering natural language inference tasks.
We treat the question data augmentation task as a constrained question rewriting problem to generate context-relevant, high-quality, and diverse question data samples.
arXiv Detail & Related papers (2020-10-04T03:13:46Z)
- Robust Question Answering Through Sub-part Alignment [53.94003466761305]
We model question answering as an alignment problem.
We train our model on SQuAD v1.1 and test it on several adversarial and out-of-domain datasets.
arXiv Detail & Related papers (2020-04-30T09:10:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.