Answering Ambiguous Questions via Iterative Prompting
- URL: http://arxiv.org/abs/2307.03897v1
- Date: Sat, 8 Jul 2023 04:32:17 GMT
- Title: Answering Ambiguous Questions via Iterative Prompting
- Authors: Weiwei Sun and Hengyi Cai and Hongshen Chen and Pengjie Ren and Zhumin
Chen and Maarten de Rijke and Zhaochun Ren
- Abstract summary: In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In open-domain question answering, due to the ambiguity of questions,
multiple plausible answers may exist. To provide feasible answers to an
ambiguous question, one approach is to directly predict all valid answers, but
this can struggle with balancing relevance and diversity. An alternative is to
gather candidate answers and aggregate them, but this method can be
computationally costly and may neglect dependencies among answers. In this
paper, we present AmbigPrompt to address the imperfections of existing
approaches to answering ambiguous questions. Specifically, we integrate an
answering model with a prompting model in an iterative manner. The prompting
model adaptively tracks the reading process and progressively triggers the
answering model to compose distinct and relevant answers. Additionally, we
develop a task-specific post-pretraining approach for both the answering model
and the prompting model, which greatly improves the performance of our
framework. Empirical studies on two commonly-used open benchmarks show that
AmbigPrompt achieves state-of-the-art or competitive results while using less
memory and having a lower inference latency than competing approaches.
Additionally, AmbigPrompt performs well in low-resource settings. The code
is available at: https://github.com/sunnweiwei/AmbigPrompt.
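The abstract describes an iterative interplay: a prompting model tracks progress and repeatedly triggers an answering model to produce distinct answers. Below is a minimal sketch of that loop. The function names (`answer_fn`, `prompt_fn`), the stopping criterion, and the toy models are hypothetical stand-ins for illustration, not the authors' actual implementation (see the linked repository for that).

```python
def iterative_answering(question, answer_fn, prompt_fn, max_iters=5):
    """Alternate between a prompting step and an answering step,
    collecting distinct answers until no new answer is produced."""
    answers = []
    prompt = question
    for _ in range(max_iters):
        answer = answer_fn(prompt, answers)    # compose the next answer
        if answer is None or answer in answers:
            break                              # no new distinct answer: stop
        answers.append(answer)
        prompt = prompt_fn(question, answers)  # update prompt with progress so far

    return answers

# Toy stand-ins: an ambiguous question with three plausible answers.
def toy_answer_fn(prompt, found):
    candidates = ["Answer A", "Answer B", "Answer C"]
    remaining = [c for c in candidates if c not in found]
    return remaining[0] if remaining else None

def toy_prompt_fn(question, found):
    # The prompting model conditions on answers produced so far,
    # nudging the answering model toward a distinct next answer.
    return question + " (already answered: " + ", ".join(found) + ")"

print(iterative_answering("Who played the lead role?", toy_answer_fn, toy_prompt_fn))
```

Because each iteration conditions on the answers already emitted, the loop can balance relevance and diversity without generating a large candidate pool up front, which matches the memory and latency claims in the abstract.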
Related papers
- Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break down a complex task into simpler tasks, solve each one, and repeat the process until we reach the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
arXiv Detail & Related papers (2022-12-08T06:03:38Z)
- A Semantic-based Method for Unsupervised Commonsense Question Answering [40.18557352036813]
Unsupervised commonsense question answering is appealing since it does not rely on any labeled task data.
We present a novel SEmantic-based Question Answering method (SEQA) for unsupervised commonsense question answering.
arXiv Detail & Related papers (2021-05-31T08:21:52Z)
- Answering Ambiguous Questions through Generative Evidence Fusion and Round-Trip Prediction [46.38201136570501]
We present a model that aggregates and combines evidence from multiple passages to adaptively predict a single answer or a set of question-answer pairs for ambiguous questions.
Our model, named Refuel, achieves a new state-of-the-art performance on the AmbigQA dataset, and shows competitive performance on NQ-Open and TriviaQA.
arXiv Detail & Related papers (2020-11-26T05:48:55Z)
- A Wrong Answer or a Wrong Question? An Intricate Relationship between Question Reformulation and Answer Selection in Conversational Question Answering [15.355557454305776]
We show that question rewriting (QR) of the conversational context sheds more light on this phenomenon.
We present the results of this analysis on the TREC CAsT and QuAC (CANARD) datasets.
arXiv Detail & Related papers (2020-10-13T06:29:51Z)
- Tradeoffs in Sentence Selection Techniques for Open-Domain Question Answering [54.541952928070344]
We describe two groups of models for sentence selection: QA-based approaches, which run a full-fledged QA system to identify answer candidates, and retrieval-based models, which find parts of each passage specifically related to each question.
We show that very lightweight QA models can do well at this task, but retrieval-based models are faster still.
arXiv Detail & Related papers (2020-09-18T23:39:15Z)
- Robust Question Answering Through Sub-part Alignment [53.94003466761305]
We model question answering as an alignment problem.
We train our model on SQuAD v1.1 and test it on several adversarial and out-of-domain datasets.
arXiv Detail & Related papers (2020-04-30T09:10:57Z)
- ManyModalQA: Modality Disambiguation and QA over Diverse Inputs [73.93607719921945]
We present a new multimodal question answering challenge, ManyModalQA, in which an agent must answer a question by considering three distinct modalities.
We collect our data by scraping Wikipedia and then utilize crowdsourcing to collect question-answer pairs.
arXiv Detail & Related papers (2020-01-22T14:39:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.