ASQ: Automatically Generating Question-Answer Pairs using AMRs
- URL: http://arxiv.org/abs/2105.10023v1
- Date: Thu, 20 May 2021 20:38:05 GMT
- Title: ASQ: Automatically Generating Question-Answer Pairs using AMRs
- Authors: Geetanjali Rakshit and Jeffrey Flanigan
- Abstract summary: We introduce ASQ, a tool to automatically mine questions and answers from a sentence, using its Abstract Meaning Representation (AMR).
A qualitative evaluation of the output generated by ASQ from the AMR 2.0 data shows that the question-answer pairs are natural and valid.
We intend to make this tool and the results publicly available for others to use and build upon.
- Score: 1.0878040851638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we introduce ASQ, a tool to automatically mine questions and
answers from a sentence, using its Abstract Meaning Representation (AMR).
Previous work has made a case for using question-answer pairs to specify
predicate-argument structure of a sentence using natural language, which does
not require linguistic expertise or training. This has resulted in the creation
of datasets such as QA-SRL and QAMR, for both of which, the question-answer
pair annotations were crowdsourced. Our approach has the same end-goal, but is
automatic, making it faster and cost-effective, without compromising on the
quality and validity of the question-answer pairs thus obtained. A qualitative
evaluation of the output generated by ASQ from the AMR 2.0 data shows that the
question-answer pairs are natural and valid, and demonstrate good coverage of
the content. We run ASQ on the sentences from the QAMR dataset and observe that
the semantic roles in QAMR are also captured by ASQ. We intend to make this tool
and the results publicly available for others to use and build upon.
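The abstract describes the core idea (turn an AMR's predicate-argument structure into wh-questions and answers) but includes no code here. The sketch below is a minimal illustration of that idea, not the ASQ tool: it reads a hand-written AMR with the `penman` library and fills in two hypothetical question templates for the top predicate's :ARG0 and :ARG1 roles; the example graph and the templates are assumptions for illustration only.

```python
# Minimal sketch of mining QA pairs from an AMR graph.
# This is NOT the ASQ tool; the example graph and the question templates
# are illustrative assumptions only.
import penman

# AMR for "The boy wants to go." (hand-written example)
amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
"""

graph = penman.decode(amr)
concepts = {src: tgt for src, _, tgt in graph.instances()}

# Outgoing roles of the top predicate, e.g. {':ARG0': 'b', ':ARG1': 'g'}
top_args = {role: tgt for src, role, tgt in graph.edges() if src == graph.top}
verb = concepts[graph.top].split("-")[0]           # "want-01" -> "want"

qa_pairs = []
if ":ARG0" in top_args:                            # agent -> "who" question
    qa_pairs.append((f"Who {verb}s?", concepts[top_args[":ARG0"]]))
if ":ARG0" in top_args and ":ARG1" in top_args:    # theme -> "what" question
    subject = concepts[top_args[":ARG0"]]
    qa_pairs.append((f"What does the {subject} {verb}?",
                     concepts[top_args[":ARG1"]]))

for q, a in qa_pairs:
    print(f"Q: {q}\tA: {a}")
# Q: Who wants?                 A: boy
# Q: What does the boy want?    A: go-02
```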
Related papers
- SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation).
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
arXiv Detail & Related papers (2023-09-21T16:51:30Z)
- Discourse Analysis via Questions and Answers: Parsing Dependency Structures of Questions Under Discussion [57.43781399856913]
This work adopts the linguistic framework of Questions Under Discussion (QUD) for discourse analysis.
We characterize relationships between sentences as free-form questions, in contrast to exhaustive fine-grained questions.
We develop a first-of-its-kind QUD parser that derives a dependency structure of questions over full documents.
arXiv Detail & Related papers (2022-10-12T03:53:12Z)
- DUAL: Textless Spoken Question Answering with Speech Discrete Unit Adaptive Learning [66.71308154398176]
Spoken Question Answering (SQA) has gained research attention and made remarkable progress in recent years.
Existing SQA methods rely on Automatic Speech Recognition (ASR) transcripts, which are time- and cost-prohibitive to collect.
This work proposes an ASR transcript-free SQA framework named Discrete Unit Adaptive Learning (DUAL), which leverages unlabeled data for pre-training and is fine-tuned by the SQA downstream task.
arXiv Detail & Related papers (2022-03-09T17:46:22Z)
- Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a (passage, answer) pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
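As a rough illustration of that kind of pipeline (not the authors' system, and without the summarization or semantic-role-labeling components), the sketch below uses spaCy's NER and dependency parse to turn a simple subject-initial declarative sentence into a wh-question; the wh-word mapping and the example sentence are assumptions.

```python
# Sketch: declarative sentence -> wh-question via NER + dependency parsing.
# Illustrative only; not the paper's summarization-informed QG pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical mapping from entity type to wh-word.
WH = {"PERSON": "Who", "ORG": "What organization", "GPE": "Where", "DATE": "When"}

def declarative_to_question(sentence: str):
    """Question the grammatical subject of a simple declarative sentence."""
    doc = nlp(sentence)
    for ent in doc.ents:
        wh = WH.get(ent.label_)
        # Use the dependency parse to only question the sentence subject.
        if wh is None or ent.root.dep_ != "nsubj":
            continue
        body = (sentence[:ent.start_char] + sentence[ent.end_char:]).strip(" .")
        return f"{wh} {body}?", ent.text
    return None, None

print(declarative_to_question("Marie Curie discovered radium."))
# ('Who discovered radium?', 'Marie Curie')
```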
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
- Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning [7.2745835227138045]
We propose a model for generating question-answer pairs (QA pairs) with self-contained, summary-centric questions and length-constrained, article-summarizing answers.
This dataset is used to learn a QA pair generation model producing summaries as answers that balance brevity with sufficiency jointly with their corresponding questions.
arXiv Detail & Related papers (2021-09-10T06:34:55Z)
- Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs [62.71505254770827]
We propose a hierarchical conditional variational autoencoder (HCVAE) for generating QA pairs given unstructured texts as contexts.
Our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of data for training.
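The summary does not spell out the training objective; like other conditional VAEs, a model of this family is typically trained by maximizing an evidence lower bound on the likelihood of a QA pair y given the context c. The bound below is only the generic conditional-VAE form; the hierarchical, information-maximizing factorization specific to HCVAE is described in the paper itself.

```latex
\log p_\theta(y \mid c) \;\geq\;
\mathbb{E}_{q_\phi(z \mid y, c)}\!\left[\log p_\theta(y \mid z, c)\right]
- \mathrm{KL}\!\left(q_\phi(z \mid y, c)\,\|\,p_\theta(z \mid c)\right)
```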
arXiv Detail & Related papers (2020-05-28T08:26:06Z)
- Question Rewriting for Conversational Question Answering [15.355557454305776]
We introduce a conversational QA architecture that sets the new state of the art on the TREC CAsT 2019 passage retrieval dataset.
We show that the same QR model improves QA performance on the QuAC dataset with respect to answer span extraction.
Our evaluation results indicate that the QR model achieves near human-level performance on both datasets.
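Concretely, question rewriting (QR) turns a context-dependent follow-up question into a self-contained one that can be answered without the conversation history. The example below is a hypothetical illustration of the task, not data from TREC CAsT or QuAC.

```python
# Hypothetical illustration of the question-rewriting (QR) task.
# Not actual TREC CAsT / QuAC data.
dialogue = {
    "history": [
        "Who wrote Pride and Prejudice?",
        "Jane Austen.",
    ],
    "follow_up": "When was she born?",
    "rewritten": "When was Jane Austen born?",  # desired QR output
}
```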
arXiv Detail & Related papers (2020-04-30T09:27:43Z)
- Break It Down: A Question Understanding Benchmark [79.41678884521801]
We introduce a Question Decomposition Representation Meaning (QDMR) for questions.
QDMR constitutes the ordered list of steps, expressed through natural language, that are necessary for answering a question.
We release the Break dataset, containing over 83K pairs of questions and their QDMRs.
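To make QDMR concrete, the snippet below sketches one plausible decomposition of a simple question into ordered natural-language steps; both the question and the steps are illustrative and not taken from the Break dataset.

```python
# Illustrative QDMR-style decomposition (not from the Break dataset):
# each step is a natural-language operation, and "#n" refers back to
# the result of an earlier step.
question = "Which European country has the largest population?"
qdmr_steps = [
    "return countries",
    "return #1 that are in Europe",
    "return population of #2",
    "return #2 where #3 is highest",
]
```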
arXiv Detail & Related papers (2020-01-31T11:04:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.