AmbigQA: Answering Ambiguous Open-domain Questions
- URL: http://arxiv.org/abs/2004.10645v2
- Date: Mon, 5 Oct 2020 03:28:21 GMT
- Title: AmbigQA: Answering Ambiguous Open-domain Questions
- Authors: Sewon Min, Julian Michael, Hannaneh Hajishirzi, Luke Zettlemoyer
- Abstract summary: We introduce AmbigQA, a new open-domain question answering task which involves finding every plausible answer.
To study this task, we construct AmbigNQ, a dataset covering 14,042 questions from NQ-open.
We find that over half of the questions in NQ-open are ambiguous, with diverse sources of ambiguity such as event and entity references.
- Score: 99.59747941602684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ambiguity is inherent to open-domain question answering; especially when
exploring new topics, it can be difficult to ask questions that have a single,
unambiguous answer. In this paper, we introduce AmbigQA, a new open-domain
question answering task which involves finding every plausible answer, and then
rewriting the question for each one to resolve the ambiguity. To study this
task, we construct AmbigNQ, a dataset covering 14,042 questions from NQ-open,
an existing open-domain QA benchmark. We find that over half of the questions
in NQ-open are ambiguous, with diverse sources of ambiguity such as event and
entity references. We also present strong baseline models for AmbigQA which we
show benefit from weakly supervised learning that incorporates NQ-open,
strongly suggesting our new task and data will support significant future
research effort. Our data and baselines are available at
https://nlp.cs.washington.edu/ambigqa.
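To make the task format concrete, here is a small sketch of what a single AmbigQA-style prediction could look like: one ambiguous question paired with several disambiguated rewrites and their answers. The field names and example values are illustrative assumptions, not the actual AmbigNQ schema.
```python
# Hypothetical illustration of an AmbigQA-style output: an ambiguous question
# paired with disambiguated rewrites and their answers. Field names and the
# example values are assumptions for illustration, not the AmbigNQ schema.
example = {
    "question": "When did the Simpsons first air on television?",
    "qa_pairs": [
        {"disambiguated_question": "When did the Simpsons first air as a short on The Tracey Ullman Show?",
         "answer": "April 19, 1987"},
        {"disambiguated_question": "When did the Simpsons first air as a half-hour prime-time show?",
         "answer": "December 17, 1989"},
    ],
}

# A system is judged on recovering every plausible answer together with a
# rewrite that uniquely identifies it.
for pair in example["qa_pairs"]:
    print(f"{pair['disambiguated_question']} -> {pair['answer']}")
```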
Related papers
- Detecting Temporal Ambiguity in Questions [16.434748534272014]
Temporally ambiguous questions are one of the most common types of ambiguous questions.
Our annotations focus on capturing temporal ambiguity to study the task of detecting temporally ambiguous questions.
We propose a novel approach by using diverse search strategies based on disambiguated versions of the questions.
arXiv Detail & Related papers (2024-09-25T15:59:58Z)
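As a rough illustration of the search-based idea in the entry above, the sketch below marks a question as temporally ambiguous when time-pinned variants of it yield different answers. The helper `answer_question` is a hypothetical placeholder, not the paper's implementation.
```python
from typing import Callable, Iterable

def is_temporally_ambiguous(
    question: str,
    years: Iterable[int],
    answer_question: Callable[[str], str],
) -> bool:
    """Flag a question as temporally ambiguous if pinning it to different
    years produces different answers. `answer_question` is a hypothetical
    QA/search backend supplied by the caller; this is only a sketch."""
    answers = set()
    for year in years:
        variant = f"{question} (in {year})"  # naive temporal disambiguation
        answers.add(answer_question(variant))
    return len(answers) > 1
```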
- AGent: A Novel Pipeline for Automatically Creating Unanswerable Questions [10.272000561545331]
We propose AGent, a novel pipeline that creates new unanswerable questions by re-matching a question with a context that lacks the necessary information for a correct answer.
In this paper, we demonstrate the usefulness of this AGent pipeline by creating two sets of unanswerable questions from answerable questions in SQuAD and HotpotQA.
arXiv Detail & Related papers (2023-09-10T18:13:11Z)
- Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
arXiv Detail & Related papers (2023-07-08T04:32:17Z)
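One plausible reading of iterative prompting for this setting is to generate answers one at a time, re-prompting with the answers found so far until the model signals there are no more. The sketch below follows that reading with a hypothetical `generate` function; it is not the AmbigPrompt implementation.
```python
from typing import Callable, List

def answer_iteratively(
    question: str,
    generate: Callable[[str], str],
    max_answers: int = 5,
) -> List[str]:
    """Hypothetical iterative-prompting loop: condition each new prompt on
    the answers collected so far and stop when the model replies NONE."""
    answers: List[str] = []
    for _ in range(max_answers):
        prompt = (
            f"Question: {question}\n"
            f"Answers so far: {', '.join(answers) or 'none'}\n"
            "Give one more distinct plausible answer, or reply NONE."
        )
        candidate = generate(prompt).strip()
        if candidate.upper() == "NONE" or candidate in answers:
            break
        answers.append(candidate)
    return answers
```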
- IfQA: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions [54.23087908182134]
We introduce the first large-scale counterfactual open-domain question-answering (QA) benchmark, named IfQA.
The IfQA dataset contains over 3,800 questions that were annotated by crowdworkers on relevant Wikipedia passages.
The unique challenges posed by the IfQA benchmark will push open-domain QA research on both retrieval and counterfactual reasoning fronts.
arXiv Detail & Related papers (2023-05-23T12:43:19Z)
- Asking Clarification Questions to Handle Ambiguity in Open-Domain QA [25.80369529145732]
We propose to ask a clarification question, where the user's response will help identify the interpretation that best aligns with the user's intention.
We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions.
We then define a pipeline of tasks and design appropriate evaluation metrics.
arXiv Detail & Related papers (2023-05-23T08:20:01Z)
- Multifaceted Improvements for Conversational Open-Domain Question Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements so that they are selected for the reader, under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z)
- Relation-Guided Pre-Training for Open-Domain Question Answering [67.86958978322188]
We propose a Relation-Guided Pre-Training (RGPT-QA) framework to solve complex open-domain questions.
We show that RGPT-QA achieves 2.2%, 2.4%, and 6.3% absolute improvements in Exact Match accuracy on Natural Questions, TriviaQA, and WebQuestions, respectively.
arXiv Detail & Related papers (2021-09-21T17:59:31Z)
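For reference, Exact Match accuracy in open-domain QA is usually computed by comparing the predicted string to each gold answer after light, SQuAD-style normalization. The sketch below shows that common convention; the exact normalization used by RGPT-QA is not specified in the entry above.
```python
import re
import string

def normalize(text: str) -> str:
    """SQuAD-style normalization commonly applied before Exact Match scoring."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list) -> bool:
    """True if the prediction matches any gold answer after normalization."""
    return any(normalize(prediction) == normalize(gold) for gold in gold_answers)

# Example: exact_match("The Eiffel Tower", ["Eiffel Tower"]) evaluates to True.
```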
- SituatedQA: Incorporating Extra-Linguistic Contexts into QA [7.495151447459443]
We introduce SituatedQA, an open-retrieval QA dataset where systems must produce the correct answer to a question given the temporal or geographical context.
We find that a significant proportion of information seeking questions have context-dependent answers.
Our study shows that existing models struggle with producing answers that are frequently updated or from uncommon locations.
arXiv Detail & Related papers (2021-09-13T17:53:21Z)
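To make "answering given the temporal or geographical context" concrete, here is a small, hypothetical illustration of a context-dependent record; the field names are assumptions, not SituatedQA's actual format.
```python
# Illustrative (unofficial) records for context-dependent QA: the same
# question has different correct answers depending on the supplied context.
records = [
    {"question": "Who is the president of the United States?",
     "context": {"type": "temporal", "value": "2015"},
     "answer": "Barack Obama"},
    {"question": "Who is the president of the United States?",
     "context": {"type": "temporal", "value": "2019"},
     "answer": "Donald Trump"},
]
```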
- Answering Ambiguous Questions through Generative Evidence Fusion and Round-Trip Prediction [46.38201136570501]
We present a model that aggregates and combines evidence from multiple passages to adaptively predict a single answer or a set of question-answer pairs for ambiguous questions.
Our model, named Refuel, achieves a new state-of-the-art performance on the AmbigQA dataset, and shows competitive performance on NQ-Open and TriviaQA.
arXiv Detail & Related papers (2020-11-26T05:48:55Z)
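The entry above describes adaptively returning either a single answer or a set of question-answer pairs. A minimal sketch of that decision, assuming a hypothetical QA-pair generator interface, is shown below.
```python
from typing import Callable, List, Tuple, Union

def adaptive_predict(
    question: str,
    predict_qa_pairs: Callable[[str], List[Tuple[str, str]]],
) -> Union[str, List[Tuple[str, str]]]:
    """Hypothetical wrapper around a QA-pair generator: return one answer
    when a single interpretation is found, otherwise return every
    (disambiguated question, answer) pair."""
    pairs = predict_qa_pairs(question)
    if len(pairs) == 1:
        return pairs[0][1]  # unambiguous: a single answer string
    return pairs            # ambiguous: the full set of QA pairs
```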
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.