Question Answering in Natural Language: the Special Case of Temporal
Expressions
- URL: http://arxiv.org/abs/2311.14087v1
- Date: Thu, 23 Nov 2023 16:26:24 GMT
- Title: Question Answering in Natural Language: the Special Case of Temporal
Expressions
- Authors: Armand Stricker
- Abstract summary: Our work aims to leverage a popular approach used for general question answering, answer extraction, in order to find answers to temporal questions within a paragraph.
To train our model, we propose a new dataset, inspired by SQuAD, specifically tailored to provide rich temporal information.
Our evaluation shows that a deep learning model trained to perform pattern matching, often used in general question answering, can be adapted to temporal question answering.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although general question answering has been well explored in recent years,
temporal question answering is a task which has not received as much focus. Our
work aims to leverage a popular approach used for general question answering,
answer extraction, in order to find answers to temporal questions within a
paragraph. To train our model, we propose a new dataset, inspired by SQuAD,
specifically tailored to provide rich temporal information. We chose to adapt
the WikiWars corpus, which contains several documents on history's greatest
conflicts. Our evaluation shows that a deep learning model trained to perform
pattern matching, often used in general question answering, can be adapted to
temporal question answering, provided we restrict ourselves to questions
whose answers are directly present in the text.
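
To make the approach concrete, here is a minimal sketch of SQuAD-style extractive (span-based) question answering applied to a temporal question. The passage, the example record, and the off-the-shelf model are illustrative assumptions; the paper trains its own model on a WikiWars-derived dataset rather than using a pretrained SQuAD model.

```python
# A minimal sketch of SQuAD-style extractive QA on a temporal question.
# The passage, record, and model choice are illustrative assumptions,
# not artifacts from the paper.
from transformers import pipeline

context = (
    "The Hundred Years' War began in 1337 and ended in 1453, "
    "after the Battle of Castillon."
)

# SQuAD-style training record: the answer is a span that occurs
# verbatim in the context, stored with its character offset.
record = {
    "context": context,
    "question": "When did the Hundred Years' War end?",
    "answers": {"text": ["1453"], "answer_start": [context.index("1453")]},
}

# An off-the-shelf extractive model fine-tuned on SQuAD; it predicts
# a start/end position within the passage rather than generating text.
qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)
print(qa(question=record["question"], context=record["context"]))
# e.g. {'score': 0.98, 'start': 50, 'end': 54, 'answer': '1453'}
```

The sketch mirrors the abstract's caveat: because the model only predicts a start and end position within the passage, the answer must appear verbatim in the text.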
Related papers
- Detecting Temporal Ambiguity in Questions [16.434748534272014]
Temporally ambiguous questions are among the most common types of ambiguous questions.
Our annotations focus on capturing temporal ambiguity to study the task of detecting temporally ambiguous questions.
We propose a novel approach by using diverse search strategies based on disambiguated versions of the questions.
arXiv Detail & Related papers (2024-09-25T15:59:58Z)
- Which questions should I answer? Salience Prediction of Inquisitive Questions [118.097974193544]
We show that highly salient questions are empirically more likely to be answered in the same article.
We further validate our findings by showing that answering salient questions is an indicator of summarization quality in news.
arXiv Detail & Related papers (2024-04-16T21:33:05Z)
- Long-form Question Answering: An Iterative Planning-Retrieval-Generation Approach [28.849548176802262]
Long-form question answering (LFQA) poses a challenge as it involves generating detailed answers in the form of paragraphs.
We propose an LFQA model with iterative Planning, Retrieval, and Generation.
We find that our model outperforms the state-of-the-art models on various textual and factual metrics for the LFQA task.
arXiv Detail & Related papers (2023-11-15T21:22:27Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- Concise Answers to Complex Questions: Summarization of Long-form Answers [27.190319030219285]
We conduct a user study on summarized answers generated from state-of-the-art models and our newly proposed extract-and-decontextualize approach.
We find that a large proportion of long-form answers can be adequately summarized by at least one system, while complex and implicit answers are challenging to compress.
We observe that decontextualization improves the quality of the extractive summary, exemplifying its potential in the summarization task.
arXiv Detail & Related papers (2023-05-30T17:59:33Z)
- CREPE: Open-Domain Question Answering with False Presuppositions [92.20501870319765]
We introduce CREPE, a QA dataset containing a natural distribution of presupposition failures from online information-seeking forums.
We find that 25% of questions contain false presuppositions, and provide annotations for these presuppositions and their corrections.
We show that adaptations of existing open-domain QA models can find presuppositions moderately well, but struggle when predicting whether a presupposition is factually correct.
arXiv Detail & Related papers (2022-11-30T18:54:49Z)
- Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z)
- How Do We Answer Complex Questions: Discourse Structure of Long-form Answers [51.973363804064704]
We study the functional structure of long-form answers collected from three datasets.
Our main goal is to understand how humans organize information to craft complex answers.
Our work can inspire future research on discourse-level modeling and evaluation of long-form QA systems.
arXiv Detail & Related papers (2022-03-21T15:14:10Z)
- A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers [66.11048565324468]
We present a dataset of 5,049 questions over 1,585 Natural Language Processing papers.
Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text.
We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers.
arXiv Detail & Related papers (2021-05-07T00:12:34Z)
- A Graph-guided Multi-round Retrieval Method for Conversational Open-domain Question Answering [52.041815783025186]
We propose a novel graph-guided retrieval method to model the relations among answers across conversation turns.
We also propose to incorporate the multi-round relevance feedback technique to explore the impact of the retrieval context on current question understanding.
arXiv Detail & Related papers (2021-04-17T04:39:41Z)
- Do not let the history haunt you -- Mitigating Compounding Errors in Conversational Question Answering [17.36904526340775]
We find that compounding errors occur when using previously predicted answers at test time.
We propose a sampling strategy that dynamically selects between target answers and model predictions during training (see the sketch after this entry).
arXiv Detail & Related papers (2020-05-12T13:29:38Z)
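
As a rough illustration of the idea in the last entry above, the following is a minimal sketch of sampling between gold answers and model predictions when building the dialogue history during training. The helper name, the uniform sampling schedule, and the example answers are hypothetical; the paper's exact procedure may differ.

```python
# A minimal sketch of mitigating compounding errors in conversational QA:
# when building the dialogue history for training, each past answer is
# sampled between the gold answer and the model's own prediction, so the
# model learns to recover from its mistakes. The helper name, uniform
# schedule, and example answers are hypothetical.
import random

def build_history(gold_answers, predicted_answers, p_gold=0.5):
    """Pick the gold answer with probability p_gold for each past turn,
    otherwise feed back the model's own (possibly wrong) prediction."""
    return [
        gold if random.random() < p_gold else pred
        for gold, pred in zip(gold_answers, predicted_answers)
    ]

# Example: answers from two earlier turns of a conversation.
gold = ["in 1453", "at the Battle of Castillon"]
pred = ["in 1337", "at the Battle of Castillon"]
print(build_history(gold, pred, p_gold=0.5))
```

Exposing the model to its own predictions at training time narrows the gap with test time, where gold answers from earlier turns are unavailable.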