Will this Question be Answered? Question Filtering via Answer Model
Distillation for Efficient Question Answering
- URL: http://arxiv.org/abs/2109.07009v1
- Date: Tue, 14 Sep 2021 23:07:49 GMT
- Title: Will this Question be Answered? Question Filtering via Answer Model
Distillation for Efficient Question Answering
- Authors: Siddhant Garg, Alessandro Moschitti
- Abstract summary: We propose a novel approach towards improving the efficiency of Question Answering (QA) systems by filtering out questions that will not be answered by them.
This is based on an interesting new finding: the answer confidence scores of state-of-the-art QA systems can be approximated well by models solely using the input question text.
- Score: 99.66470885217623
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper we propose a novel approach towards improving the efficiency of
Question Answering (QA) systems by filtering out questions that will not be
answered by them. This is based on an interesting new finding: the answer
confidence scores of state-of-the-art QA systems can be approximated well by
models solely using the input question text. This enables preemptive filtering
of questions that are not answered by the system due to their answer confidence
scores being lower than the system threshold. Specifically, we learn
Transformer-based question models by distilling Transformer-based answering
models. Our experiments on three popular QA datasets and one industrial QA
benchmark demonstrate the ability of our question models to approximate the
Precision/Recall curves of the target QA system well. These question models,
when used as filters, can effectively trade a small loss in Recall for a large
reduction in QA-system computation cost, e.g., cutting computation by ~60%
while losing only ~3-4% of Recall.
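Below is a minimal sketch of the filtering setup the abstract describes: a question-only Transformer regressor distilled on the answer model's confidence scores, used to skip questions before the full QA system runs. The checkpoint name, toy data, and 0.5 threshold are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: distill the answer model's confidence scores into a question-only
# regressor, then use it as a preemptive filter. The checkpoint, toy data,
# and 0.5 threshold are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
student = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=1)  # scalar confidence head

# (question, teacher confidence) pairs; the confidences come from running
# the full QA system (the "teacher") over a training set.
train = [("who wrote hamlet", 0.94), ("what is the meaning of life", 0.12)]

optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)
student.train()
for question, teacher_conf in train:
    batch = tokenizer(question, return_tensors="pt", truncation=True)
    pred = student(**batch).logits.squeeze(-1)      # predicted confidence
    loss = torch.nn.functional.mse_loss(pred, torch.tensor([teacher_conf]))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# At inference time, skip the expensive QA system whenever the cheap
# question model predicts a confidence below the system threshold.
student.eval()
def should_answer(question: str, threshold: float = 0.5) -> bool:
    batch = tokenizer(question, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return student(**batch).logits.item() >= threshold
```

Sweeping the threshold traces out the computation/Recall trade-off the abstract reports.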
Related papers
- SQUARE: Automatic Question Answering Evaluation using Multiple Positive
and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation).
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
arXiv Detail & Related papers (2023-09-21T16:51:30Z)
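A hedged sketch of the SQuArE idea above: score a candidate answer by its similarity margin between multiple positive and negative references. TF-IDF cosine similarity stands in here for the paper's actual similarity model, purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def square_style_score(candidate, positives, negatives):
    # Fit a shared vocabulary, then compare the candidate to every reference.
    vecs = TfidfVectorizer().fit_transform([candidate] + positives + negatives)
    sims = cosine_similarity(vecs[0:1], vecs[1:]).ravel()
    pos_sim = sims[:len(positives)].max()   # closest good reference
    neg_sim = sims[len(positives):].max()   # closest bad reference
    return pos_sim - neg_sim                # higher = more like a good answer

print(square_style_score(
    "Shakespeare wrote Hamlet.",
    positives=["Hamlet was written by William Shakespeare."],
    negatives=["Hamlet is a small town in North Carolina."]))
```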
- Improving the Question Answering Quality using Answer Candidate Filtering based on Natural-Language Features [117.44028458220427]
We address the problem of how the Question Answering (QA) quality of a given system can be improved.
Our main contribution is an approach capable of identifying wrong answers provided by a QA system.
In particular, our approach has shown its potential, in many cases removing the majority of incorrect answers.
arXiv Detail & Related papers (2021-12-10T11:09:44Z)
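One way natural-language features like those in the entry above could flag wrong answers (an illustrative assumption, not the paper's feature set): reject candidates whose entity type contradicts the type implied by the question word.

```python
# Illustrative filter over natural-language features: an answer whose entity
# type contradicts the question word is likely wrong. The wh-word -> type
# map is an assumption for this sketch, not the paper's feature set.
import spacy

nlp = spacy.load("en_core_web_sm")
EXPECTED = {"who": {"PERSON", "ORG"},
            "where": {"GPE", "LOC", "FAC"},
            "when": {"DATE", "TIME"}}

def plausible(question: str, answer: str) -> bool:
    wh = question.lower().split()[0]
    if wh not in EXPECTED:
        return True  # no type constraint we know how to check
    return any(ent.label_ in EXPECTED[wh] for ent in nlp(answer).ents)

print(plausible("Who wrote Hamlet?", "William Shakespeare"))  # True
print(plausible("Who wrote Hamlet?", "in the year 1601"))     # False
```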
- Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a (passage, answer) pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
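A rough sketch of the rule-based transformation step described above, assuming spaCy for NER and dependency parsing; the single subject-substitution rule and the wh-word map are simplifications of the pipeline (which also uses semantic role labeling), not the paper's exact rules.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
WH = {"PERSON": "Who", "ORG": "What organization", "GPE": "What place"}

def sentence_to_qa(sentence):
    """Turn a declarative summary sentence into a (question, answer) pair by
    substituting a wh-word for a sentence-initial subject entity."""
    doc = nlp(sentence)
    for ent in doc.ents:
        if ent.label_ in WH and ent.start == 0 and ent.root.dep_ == "nsubj":
            rest = sentence[ent.end_char:].rstrip(".")
            return WH[ent.label_] + rest + "?", ent.text
    return None  # no rule applies; real pipelines have many more rules

print(sentence_to_qa("Shakespeare wrote Hamlet around 1600."))
# -> ('Who wrote Hamlet around 1600?', 'Shakespeare')
```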
- OneStop QAMaker: Extract Question-Answer Pairs from Text in a One-Stop Approach [11.057028572260064]
We propose a model named OneStop to generate QA pairs from documents in a one-stop approach.
Specifically, questions and their corresponding answer spans are extracted simultaneously.
OneStop is much more efficient to train and deploy in industrial scenarios, since it involves only one model for the complex QA-pair generation task.
arXiv Detail & Related papers (2021-02-24T08:45:00Z)
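A structural sketch of what a one-stop model like the one above can look like: one shared encoder feeding both an answer-span head and a question decoder, so a single model is trained and deployed. Dimensions, layer counts, and heads are illustrative assumptions; the paper's architecture may differ.

```python
import torch
import torch.nn as nn

class OneStopSketch(nn.Module):
    def __init__(self, vocab_size=30000, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.span_head = nn.Linear(d_model, 2)   # start/end logits per token
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)  # question tokens

    def forward(self, doc_ids, question_ids):
        memory = self.encoder(self.embed(doc_ids))   # shared doc encoding
        span_logits = self.span_head(memory)         # answer-span extraction
        dec = self.decoder(self.embed(question_ids), memory)
        return span_logits, self.lm_head(dec)        # joint outputs

model = OneStopSketch()
doc = torch.randint(0, 30000, (1, 64))       # toy document token ids
question = torch.randint(0, 30000, (1, 12))  # toy question token ids
span_logits, question_logits = model(doc, question)
# Both heads would be trained jointly, e.g. with the sum of a span loss and
# an LM loss, so one model covers the whole QA-pair generation task.
```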
- Summary-Oriented Question Generation for Informational Queries [23.72999724312676]
We aim to produce self-explanatory questions that focus on main document topics and are answerable with variable-length passages as appropriate.
Our model shows SOTA performance on SQ generation on the NQ dataset (20.1 BLEU-4).
We further apply our model to out-of-domain news articles, evaluating with a QA system due to the lack of gold questions, and demonstrate that it produces better SQs for news articles, with further confirmation via a human evaluation.
arXiv Detail & Related papers (2020-10-19T17:30:08Z)
- Unsupervised Evaluation for Question Answering with Transformers [46.16837670041594]
We investigate the hidden representations of questions, answers, and contexts in transformer-based QA architectures.
We observe a consistent pattern in the answer representations, which we show can be used to automatically evaluate whether or not a predicted answer is correct.
We are able to predict whether or not a model's answer is correct with 91.37% accuracy on SQuAD, and 80.7% accuracy on SubjQA.
arXiv Detail & Related papers (2020-10-07T07:03:30Z)
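A hedged sketch of the idea in the entry above: pool the QA model's hidden states over the predicted answer span and fit a small classifier to predict correctness. The layer, pooling, and classifier choices here are assumptions, not the exact pattern the paper identifies.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "distilbert-base-cased-distilled-squad"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

def answer_span_feature(question, context):
    enc = tok(question, context, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc, output_hidden_states=True)
    start = out.start_logits.argmax().item()
    end = max(out.end_logits.argmax().item(), start)
    hidden = out.hidden_states[-1][0]              # (seq_len, hidden_dim)
    return hidden[start:end + 1].mean(0).numpy()   # pooled answer vector

# Toy supervision: was the model's prediction correct on these examples?
examples = [("Who wrote Hamlet?", "Hamlet was written by Shakespeare.", 1),
            ("Who wrote Hamlet?", "The weather in Oslo is cold today.", 0)]
X = np.stack([answer_span_feature(q, c) for q, c, _ in examples])
y = [label for _, _, label in examples]
clf = LogisticRegression(max_iter=1000).fit(X, y)  # correctness predictor
```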
- Selective Question Answering under Domain Shift [90.021577320085]
Abstention policies based solely on the model's softmax probabilities fare poorly, since models are overconfident on out-of-domain inputs.
We train a calibrator to identify inputs on which the QA model errs, and abstain when it predicts an error is likely.
Our method answers 56% of questions while maintaining 80% accuracy; in contrast, directly using the model's probabilities only answers 48% at 80% accuracy.
arXiv Detail & Related papers (2020-06-16T19:13:21Z)
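A minimal sketch of the calibrator idea above, assuming simple per-question features and a random-forest classifier; the actual feature set and calibrator model in the paper may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per question, e.g. [max softmax prob, question len, answer len];
# labels record whether the QA model answered correctly on held-out data.
X = np.array([[0.91, 8, 2], [0.55, 23, 7], [0.97, 6, 1], [0.40, 19, 5]])
y = np.array([1, 0, 1, 0])

calibrator = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def answer_or_abstain(features, tau=0.5):
    # Abstain when the calibrator predicts an error is likely.
    p_correct = calibrator.predict_proba([features])[0][1]
    return "answer" if p_correct >= tau else "abstain"

print(answer_or_abstain([0.93, 7, 2]))
print(answer_or_abstain([0.35, 25, 9]))
```

Sweeping tau trades answer coverage against accuracy, which is how numbers like "56% of questions at 80% accuracy" are obtained.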
- Harvesting and Refining Question-Answer Pairs for Unsupervised QA [95.9105154311491]
We introduce two approaches to improve unsupervised Question Answering (QA).
First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA).
Second, we take advantage of the QA model to extract more appropriate answers, which iteratively refines data over RefQA.
arXiv Detail & Related papers (2020-05-06T15:56:06Z)
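A sketch of the iterative answer-refinement step described above, using a Hugging Face QA pipeline as the answer-extraction model; the checkpoint and the 0.8 confidence cutoff are illustrative assumptions.

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def refine(corpus, min_score=0.8):
    """Replace a harvested answer when the QA model confidently extracts a
    different one; the output can be fed back as training data and iterated."""
    refined = []
    for question, context, answer in corpus:
        pred = qa(question=question, context=context)
        if pred["score"] >= min_score and pred["answer"] != answer:
            answer = pred["answer"]   # adopt the model's better answer
        refined.append((question, context, answer))
    return refined
```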
- Unsupervised Question Decomposition for Question Answering [102.56966847404287]
We propose an algorithm for One-to-N Unsupervised Sequence transduction (ONUS) that learns to map one hard, multi-hop question to many simpler, single-hop sub-questions.
We show large QA improvements on HotpotQA over a strong baseline on the original, out-of-domain, and multi-hop dev sets.
arXiv Detail & Related papers (2020-02-22T19:40:35Z)