PeCoQ: A Dataset for Persian Complex Question Answering over Knowledge Graph
- URL: http://arxiv.org/abs/2106.14167v1
- Date: Sun, 27 Jun 2021 08:21:23 GMT
- Title: PeCoQ: A Dataset for Persian Complex Question Answering over Knowledge Graph
- Authors: Romina Etezadi, Mehrnoush Shamsfard
- Abstract summary: This paper introduces PeCoQ, a dataset for Persian question answering.
This dataset contains 10,000 complex questions and answers extracted from the Persian knowledge graph, FarsBase.
There are different types of complexities in the dataset, such as multi-relation, multi-entity, ordinal, and temporal constraints.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Question answering systems may find the answers to users' questions from
either unstructured texts or structured data such as knowledge graphs.
Answering questions with supervised learning approaches, including deep
learning models, requires large training datasets. In recent years, several
datasets have been presented for the task of question answering over knowledge
graphs, which is the focus of this paper. Although many such datasets have been
proposed for English, there are only a few question-answering datasets for
Persian. This
paper introduces \textit{PeCoQ}, a dataset for Persian question answering. This
dataset contains 10,000 complex questions and answers extracted from the
Persian knowledge graph, FarsBase. For each question, the SPARQL query and two
paraphrases that were written by linguists are provided as well. There are
different types of complexities in the dataset, such as multi-relation,
multi-entity, ordinal, and temporal constraints. In this paper, we discuss the
dataset's characteristics and describe our methodology for building it.
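As described above, each PeCoQ entry pairs a complex question with its SPARQL query over FarsBase and two linguist-written paraphrases. A minimal sketch of what one such record might look like follows; the field names, predicate IRIs, and the `fkg:` prefix are assumptions for illustration, not taken from the actual dataset:

```python
from dataclasses import dataclass


@dataclass
class PeCoQRecord:
    """Hypothetical schema for one PeCoQ entry (field names are assumptions)."""
    question: str          # the complex question (English gloss shown here)
    paraphrases: list      # two linguist-written paraphrases, per the paper
    sparql: str            # query intended for the FarsBase knowledge graph
    answer: str


# Illustrative multi-relation example with a temporal constraint;
# the predicate names are placeholders, not actual FarsBase vocabulary.
record = PeCoQRecord(
    question="Who directed the film in which Actor X starred in 2010?",
    paraphrases=[
        "Which director made the 2010 film starring Actor X?",
        "The 2010 movie with Actor X was directed by whom?",
    ],
    sparql=(
        "SELECT ?director WHERE { "
        "?film fkg:starring fkg:Actor_X . "   # first relation hop
        "?film fkg:releaseYear 2010 . "       # temporal constraint
        "?film fkg:director ?director . }"    # second relation hop
    ),
    answer="Director Y",
)

print(len(record.paraphrases))  # → 2, matching the paper's two paraphrases
```

The single query combines multiple relations and a temporal filter, which is the kind of complexity the dataset targets.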
Related papers
- PCoQA: Persian Conversational Question Answering Dataset [12.07607688189035]
The PCoQA dataset is a resource of information-seeking dialogs comprising a total of 9,026 contextually driven questions.
PCoQA is designed to present novel challenges compared to previous question answering datasets.
This paper not only presents the comprehensive PCoQA dataset but also reports the performance of various benchmark models.
arXiv Detail & Related papers (2023-12-07T15:29:34Z)
- IslamicPCQA: A Dataset for Persian Multi-hop Complex Question Answering in Islamic Text Resources [0.0]
This article introduces the IslamicPCQA dataset for answering complex questions based on non-structured information sources.
The prepared dataset covers a wide range of Islamic topics and aims to facilitate answering complex Persian questions within this subject matter.
arXiv Detail & Related papers (2023-04-23T14:20:58Z)
- Semantic Parsing for Conversational Question Answering over Knowledge Graphs [63.939700311269156]
We develop a dataset where user questions are annotated with Sparql parses and system answers correspond to execution results thereof.
We present two different semantic parsing approaches and highlight the challenges of the task.
Our dataset and models are released at https://github.com/Edinburgh/SPICE.
arXiv Detail & Related papers (2023-01-28T14:45:11Z)
- ConditionalQA: A Complex Reading Comprehension Dataset with Conditional Answers [93.55268936974971]
We describe a Question Answering dataset that contains complex questions with conditional answers.
We call this dataset ConditionalQA.
We show that ConditionalQA is challenging for many of the existing QA models, especially in selecting answer conditions.
arXiv Detail & Related papers (2021-10-13T17:16:46Z)
- A Knowledge-based Approach for Answering Complex Questions in Persian [0.0]
We propose a knowledge-based approach for answering complex questions in Persian.
We handle multi-constraint and multi-hop questions by building their set of possible corresponding logical forms.
The answer to the question is built from the answer to the logical form, extracted from the knowledge graph.
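The multi-hop strategy summarized above can be sketched as enumerating candidate relation paths and executing them against the knowledge graph. Everything below (the toy triples, relation names, and helper functions) is an invented illustration of the general idea, not the paper's implementation:

```python
# Toy knowledge graph as (subject, relation, object) triples; names are invented.
TRIPLES = {
    ("film_a", "starring", "actor_x"),
    ("film_a", "director", "director_y"),
}


def objects_of(entity, relation):
    """Objects reachable from `entity` via `relation` (forward hop)."""
    return {o for s, r, o in TRIPLES if s == entity and r == relation}


def subjects_of(relation, obj):
    """Subjects that reach `obj` via `relation` (inverse hop)."""
    return {s for s, r, o in TRIPLES if r == relation and o == obj}


def answer_two_hop(entity, rel_in, rel_out):
    """One candidate logical form: an inverse hop followed by a forward hop.

    E.g. "Who directed the film starring actor_x?":
    hop 1 finds films whose `starring` object is the entity;
    hop 2 follows `director` from each such film.
    """
    return {a for m in subjects_of(rel_in, entity)
              for a in objects_of(m, rel_out)}


print(answer_two_hop("actor_x", "starring", "director"))  # → {'director_y'}
```

A real system would generate many such candidate forms from the question and keep those whose execution yields a non-empty answer.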
arXiv Detail & Related papers (2021-07-05T14:01:43Z)
- A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers [66.11048565324468]
We present a dataset of 5,049 questions over 1,585 Natural Language Processing papers.
Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text.
We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers.
arXiv Detail & Related papers (2021-05-07T00:12:34Z)
- ComQA: Compositional Question Answering via Hierarchical Graph Neural Networks [47.12013005600986]
We present a large-scale compositional question answering dataset containing more than 120k human-labeled questions.
To tackle the ComQA problem, we propose a hierarchical graph neural network that represents the document from the word level up to the sentence level.
Our proposed model achieves a significant improvement over previous machine reading comprehension methods and pre-training methods.
arXiv Detail & Related papers (2021-01-16T08:23:27Z)
- IIRC: A Dataset of Incomplete Information Reading Comprehension Questions [53.3193258414806]
We present a dataset, IIRC, with more than 13K questions over paragraphs from English Wikipedia.
The questions were written by crowd workers who did not have access to any of the linked documents.
We follow recent modeling work on various reading comprehension datasets to construct a baseline model for this dataset.
arXiv Detail & Related papers (2020-11-13T20:59:21Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.