Geographic Question Answering: Challenges, Uniqueness, Classification,
and Future Directions
- URL: http://arxiv.org/abs/2105.09392v1
- Date: Wed, 19 May 2021 20:47:36 GMT
- Title: Geographic Question Answering: Challenges, Uniqueness, Classification,
and Future Directions
- Authors: Gengchen Mai, Krzysztof Janowicz, Rui Zhu, Ling Cai, and Ni Lao
- Abstract summary: Question Answering (QA) aims at generating answers to questions phrased in natural language.
QA systems are still struggling to answer questions which involve geographic entities or concepts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As an important part of Artificial Intelligence (AI), Question Answering (QA)
aims at generating answers to questions phrased in natural language. While
there has been substantial progress in open-domain question answering, QA
systems still struggle to answer questions that involve geographic
entities or concepts and that require spatial operations. In this paper, we
discuss the problem of geographic question answering (GeoQA). We first
investigate the reasons why geographic questions are difficult to answer by
analyzing challenges of geographic questions. We discuss the uniqueness of
geographic questions compared to general QA. Then we review existing work on
GeoQA and classify these systems by the types of questions they can address. Based on
this survey, we provide a generic classification framework for geographic
questions. Finally, we conclude our work by pointing out unique future research
directions for GeoQA.
Related papers
- MapQA: Open-domain Geospatial Question Answering on Map Data [30.998432707821127]
MapQA is a novel dataset that provides question-answer pairs and geometries of geo-entities referenced in the questions.
It consists of 3,154 QA pairs spanning nine question types that require geospatial reasoning, such as neighborhood inference and geo-entity type identification.
arXiv Detail & Related papers (2025-03-10T21:37:22Z)
- Temporal Knowledge Graph Question Answering: A Survey [39.40384139630724]
Temporal Knowledge Graph Question Answering (TKGQA) is an emerging task to answer temporal questions.
This paper provides a thorough survey from two perspectives: the taxonomy of temporal questions and the methodological categorization for TKGQA.
arXiv Detail & Related papers (2024-06-20T10:51:06Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- IfQA: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions [54.23087908182134]
We introduce the first large-scale counterfactual open-domain question-answering (QA) benchmarks, named IfQA.
The IfQA dataset contains over 3,800 questions that were annotated by crowdworkers on relevant Wikipedia passages.
The unique challenges posed by the IfQA benchmark will push open-domain QA research on both retrieval and counterfactual reasoning fronts.
arXiv Detail & Related papers (2023-05-23T12:43:19Z)
- GeoGLUE: A GeoGraphic Language Understanding Evaluation Benchmark [56.08664336835741]
We propose a GeoGraphic Language Understanding Evaluation benchmark, named GeoGLUE.
We collect data from open-released geographic resources and introduce six natural language understanding tasks.
We provide evaluation experiments and analysis of general baselines, demonstrating the effectiveness and significance of the GeoGLUE benchmark.
arXiv Detail & Related papers (2023-05-11T03:21:56Z)
- CREPE: Open-Domain Question Answering with False Presuppositions [92.20501870319765]
We introduce CREPE, a QA dataset containing a natural distribution of presupposition failures from online information-seeking forums.
We find that 25% of questions contain false presuppositions, and provide annotations for these presuppositions and their corrections.
We show that adaptations of existing open-domain QA models can find presuppositions moderately well, but struggle when predicting whether a presupposition is factually correct.
arXiv Detail & Related papers (2022-11-30T18:54:49Z)
- Chart Question Answering: State of the Art and Future Directions [0.0]
Chart Question Answering (CQA) systems typically take a chart and a natural language question as input and automatically generate the answer.
We systematically review the current state-of-the-art research focusing on the problem of chart question answering.
arXiv Detail & Related papers (2022-05-08T22:54:28Z)
- SituatedQA: Incorporating Extra-Linguistic Contexts into QA [7.495151447459443]
We introduce SituatedQA, an open-retrieval QA dataset where systems must produce the correct answer to a question given the temporal or geographical context.
We find that a significant proportion of information seeking questions have context-dependent answers.
Our study shows that existing models struggle with producing answers that are frequently updated or from uncommon locations.
arXiv Detail & Related papers (2021-09-13T17:53:21Z)
- GeoQA: A Geometric Question Answering Benchmark Towards Multimodal Numerical Reasoning [172.36214872466707]
We focus on solving geometric problems, which requires a comprehensive understanding of textual descriptions, visual diagrams, and theorem knowledge.
We propose a Geometric Question Answering dataset GeoQA, containing 5,010 geometric problems with corresponding annotated programs.
arXiv Detail & Related papers (2021-05-30T12:34:17Z)
- Joint Spatio-Textual Reasoning for Answering Tourism Questions [19.214280482194503]
Our goal is to answer real-world questions that seek Points-of-Interest (POIs).
We develop the first joint spatio-textual reasoning model, which combines geo-spatial knowledge with information in textual corpora to answer questions.
We report substantial improvements over existing models without joint spatio-textual reasoning.
arXiv Detail & Related papers (2020-09-28T20:35:00Z)
- AmbigQA: Answering Ambiguous Open-domain Questions [99.59747941602684]
We introduce AmbigQA, a new open-domain question answering task which involves finding every plausible answer.
To study this task, we construct AmbigNQ, a dataset covering 14,042 questions from NQ-open.
We find that over half of the questions in NQ-open are ambiguous, with diverse sources of ambiguity such as event and entity references.
arXiv Detail & Related papers (2020-04-22T15:42:13Z)
- Understanding Knowledge Gaps in Visual Question Answering: Implications for Gap Identification and Testing [20.117014315684287]
We use a taxonomy of Knowledge Gaps (KGs) to tag questions with one or more types of KGs.
We then examine the skew in the distribution of questions for each KG.
These new questions can be added to existing VQA datasets to increase the diversity of questions and reduce the skew.
arXiv Detail & Related papers (2020-04-08T00:27:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.