Detecting Temporal Ambiguity in Questions
- URL: http://arxiv.org/abs/2409.17046v1
- Date: Wed, 25 Sep 2024 15:59:58 GMT
- Title: Detecting Temporal Ambiguity in Questions
- Authors: Bhawna Piryani, Abdelrahman Abdallah, Jamshid Mozafari, Adam Jatowt
- Abstract summary: Temporally ambiguous questions are one of the most common types of ambiguous questions in open-domain QA.
Our annotations focus on capturing temporal ambiguity to study the task of detecting temporally ambiguous questions.
We propose a novel approach by using diverse search strategies based on disambiguated versions of the questions.
- Score: 16.434748534272014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting and answering ambiguous questions has been a challenging task in open-domain question answering. Ambiguous questions have different answers depending on their interpretation and can take diverse forms. Temporally ambiguous questions are one of the most common types of such questions. In this paper, we introduce TEMPAMBIQA, a manually annotated temporally ambiguous QA dataset consisting of 8,162 open-domain questions derived from existing datasets. Our annotations focus on capturing temporal ambiguity to study the task of detecting temporally ambiguous questions. We propose a novel approach by using diverse search strategies based on disambiguated versions of the questions. We also introduce and test non-search, competitive baselines for detecting temporal ambiguity using zero-shot and few-shot approaches.
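The non-search baseline described in the abstract frames temporal-ambiguity detection as zero-shot classification. A minimal sketch of that framing is below; the prompt wording, the `build_prompt`/`parse_label` helpers, and the one-word label scheme are illustrative assumptions, not the paper's actual prompts or pipeline.

```python
# Sketch of a zero-shot temporal-ambiguity detector as a prompt-based
# binary classifier. The template and parsing logic are assumptions for
# illustration; any chat/completion model client could supply the reply.

PROMPT_TEMPLATE = (
    "Decide whether the following question is temporally ambiguous, "
    "i.e. whether its answer changes depending on the implied time.\n"
    "Question: {question}\n"
    "Answer with exactly one word: 'ambiguous' or 'unambiguous'."
)

def build_prompt(question: str) -> str:
    """Fill the zero-shot template with a single question."""
    return PROMPT_TEMPLATE.format(question=question)

def parse_label(model_output: str) -> bool:
    """Map the model's free-text reply to a boolean (True = ambiguous)."""
    reply = model_output.strip().lower()
    return reply.startswith("ambiguous")

# Usage with a hypothetical model client:
# is_ambiguous = parse_label(client.complete(build_prompt("Who is the champion?")))
```

A few-shot variant would simply prepend labeled example questions to the same template; the search-based approach in the paper instead compares retrieval results for disambiguated versions of the question, which is not sketched here.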
Related papers
- Open Domain Question Answering with Conflicting Contexts [55.739842087655774]
We find that as much as 25% of unambiguous, open-domain questions can lead to conflicting contexts when retrieved using Google Search.
We ask our annotators to provide explanations for their selections of correct answers.
arXiv Detail & Related papers (2024-10-16T07:24:28Z)
- Question Answering in Natural Language: the Special Case of Temporal Expressions [0.0]
Our work aims to leverage a popular approach used for general question answering, answer extraction, in order to find answers to temporal questions within a paragraph.
To train our model, we propose a new dataset, inspired by SQuAD, specifically tailored to provide rich temporal information.
Our evaluation shows that a deep learning model trained to perform pattern matching, often used in general question answering, can be adapted to temporal question answering.
arXiv Detail & Related papers (2023-11-23T16:26:24Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures that evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- Selectively Answering Ambiguous Questions [38.83930394700588]
We find that the most reliable approach to decide when to abstain involves quantifying repetition within sampled model outputs.
Our results suggest that sampling-based confidence scores help calibrate answers to relatively unambiguous questions.
arXiv Detail & Related papers (2023-05-24T01:25:38Z)
- Asking Clarification Questions to Handle Ambiguity in Open-Domain QA [25.80369529145732]
We propose to ask a clarification question, where the user's response will help identify the interpretation that best aligns with the user's intention.
We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions.
We then define a pipeline of tasks and design appropriate evaluation metrics.
arXiv Detail & Related papers (2023-05-23T08:20:01Z)
- CREPE: Open-Domain Question Answering with False Presuppositions [92.20501870319765]
We introduce CREPE, a QA dataset containing a natural distribution of presupposition failures from online information-seeking forums.
We find that 25% of questions contain false presuppositions, and provide annotations for these presuppositions and their corrections.
We show that adaptations of existing open-domain QA models can find presuppositions moderately well, but struggle when predicting whether a presupposition is factually correct.
arXiv Detail & Related papers (2022-11-30T18:54:49Z)
- Answering Ambiguous Questions through Generative Evidence Fusion and Round-Trip Prediction [46.38201136570501]
We present a model that aggregates and combines evidence from multiple passages to adaptively predict a single answer or a set of question-answer pairs for ambiguous questions.
Our model, named Refuel, achieves a new state-of-the-art performance on the AmbigQA dataset, and shows competitive performance on NQ-Open and TriviaQA.
arXiv Detail & Related papers (2020-11-26T05:48:55Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- AmbigQA: Answering Ambiguous Open-domain Questions [99.59747941602684]
We introduce AmbigQA, a new open-domain question answering task which involves finding every plausible answer.
To study this task, we construct AmbigNQ, a dataset covering 14,042 questions from NQ-open.
We find that over half of the questions in NQ-open are ambiguous, with diverse sources of ambiguity such as event and entity references.
arXiv Detail & Related papers (2020-04-22T15:42:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.