BigText-QA: Question Answering over a Large-Scale Hybrid Knowledge Graph
- URL: http://arxiv.org/abs/2212.05798v2
- Date: Thu, 7 Sep 2023 12:22:43 GMT
- Title: BigText-QA: Question Answering over a Large-Scale Hybrid Knowledge Graph
- Authors: Jingjing Xu, Maria Biryukov, Martin Theobald, Vinu Ellampallil Venugopal
- Abstract summary: BigText-QA answers questions over a large-scale hybrid knowledge graph that unifies structured and unstructured knowledge.
Our results demonstrate that BigText-QA outperforms DrQA, a neural-network-based QA system, and achieves competitive results to QUEST, a graph-based unsupervised QA system.
- Score: 23.739432128095107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Answering complex questions over textual resources remains a challenge,
particularly when dealing with nuanced relationships between multiple entities
expressed within natural-language sentences. To this end, curated knowledge
bases (KBs) like YAGO, DBpedia, Freebase, and Wikidata have been widely used
and gained great acceptance for question-answering (QA) applications in the
past decade. While these KBs offer a structured knowledge representation, they
lack the contextual diversity found in natural-language sources. To address
this limitation, BigText-QA introduces an integrated QA approach, which is able
to answer questions based on a more redundant form of a knowledge graph (KG)
that organizes both structured and unstructured (i.e., "hybrid") knowledge in a
unified graphical representation. BigText-QA thereby combines the best of both
worlds: a canonical set of named entities, mapped
to a structured background KB (such as YAGO or Wikidata), as well as an open
set of textual clauses providing highly diversified relational paraphrases with
rich context information. Our experimental results demonstrate that BigText-QA
outperforms DrQA, a neural-network-based QA system, and achieves competitive
results to QUEST, a graph-based unsupervised QA system.
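To make the hybrid-graph idea more concrete, the following minimal Python sketch shows one way such a graph could be represented: canonical entity nodes carry identifiers from a background KB, while clause nodes keep the verbal predicate and the sentence it was extracted from, so structured facts and diverse textual paraphrases meet at the same entity nodes. All class names, fields, and identifiers here are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: a minimal hybrid knowledge-graph structure combining
# canonical entity nodes (linked to a background KB such as YAGO or Wikidata)
# with open textual clause nodes carrying relational paraphrases.
from dataclasses import dataclass, field


@dataclass
class EntityNode:
    """Canonical named entity, disambiguated against a background KB."""
    mention: str   # surface form as it appears in text
    kb_id: str     # background-KB identifier (placeholder values below)


@dataclass
class ClauseNode:
    """Open textual clause providing a relational paraphrase with context."""
    subject: EntityNode
    predicate_text: str   # verbal phrase, e.g. "was founded by"
    obj: EntityNode
    sentence: str         # sentence the clause was extracted from


@dataclass
class HybridKG:
    entities: dict[str, EntityNode] = field(default_factory=dict)
    clauses: list[ClauseNode] = field(default_factory=list)

    def add_clause(self, clause: ClauseNode) -> None:
        # Entity nodes are shared across clauses, so structured KB facts and
        # diverse textual paraphrases attach to the same canonical node.
        for ent in (clause.subject, clause.obj):
            self.entities.setdefault(ent.kb_id, ent)
        self.clauses.append(clause)


# Usage example: two paraphrases of relations between the same entity pair
# end up anchored at the same canonical entity nodes.
kg = HybridKG()
musk = EntityNode("Elon Musk", "KB:ElonMusk")     # placeholder KB ids
tesla = EntityNode("Tesla", "KB:Tesla_Inc")
kg.add_clause(ClauseNode(musk, "co-founded", tesla,
                         "Elon Musk co-founded Tesla in 2003."))
kg.add_clause(ClauseNode(musk, "is the CEO of", tesla,
                         "Elon Musk is the CEO of Tesla."))
print(len(kg.entities), "entities;", len(kg.clauses), "clause paraphrases")
```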
Related papers
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain
Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z) - ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models [19.85526116658481]
We introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework.
Experimental results show that ChatKBQA achieves new state-of-the-art performance on standard KBQA datasets.
This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs for interpretable and knowledge-required question answering.
arXiv Detail & Related papers (2023-10-13T09:45:14Z) - Semantic Parsing for Conversational Question Answering over Knowledge
Graphs [63.939700311269156]
We develop a dataset where user questions are annotated with SPARQL parses and system answers correspond to their execution results.
We present two different semantic parsing approaches and highlight the challenges of the task.
Our dataset and models are released at https://github.com/Edinburgh/SPICE.
arXiv Detail & Related papers (2023-01-28T14:45:11Z) - Knowledge Base Question Answering: A Semantic Parsing Perspective [15.1388686976988]
Research on question answering over knowledge bases (KBQA) has been progressing comparatively slowly.
We attribute this to two unique challenges of KBQA: schema-level complexity and fact-level complexity.
We argue that we can still take much inspiration from the literature of semantic parsing.
arXiv Detail & Related papers (2022-09-12T02:56:29Z) - QALD-9-plus: A Multilingual Dataset for Question Answering over DBpedia
and Wikidata Translated by Native Speakers [68.9964449363406]
We extend one of the most popular KGQA benchmarks, QALD-9, by introducing high-quality translations of its questions into 8 languages.
Five of the languages - Armenian, Ukrainian, Lithuanian, Bashkir and Belarusian - have, to the best of our knowledge, never been considered in the KGQA research community before.
arXiv Detail & Related papers (2022-01-31T22:19:55Z) - Open Domain Question Answering over Virtual Documents: A Unified
Approach for Data and Text [62.489652395307914]
We use the data-to-text method as a means of encoding structured knowledge for knowledge-intensive applications, i.e., open-domain question answering (QA).
Specifically, we propose a verbalizer-retriever-reader framework for open-domain QA over data and text where verbalized tables from Wikipedia and triples from Wikidata are used as augmented knowledge sources.
We show that our Unified Data and Text QA, UDT-QA, can effectively benefit from the expanded knowledge index, leading to large gains over text-only baselines.
arXiv Detail & Related papers (2021-10-16T00:11:21Z) - SYGMA: System for Generalizable Modular Question Answering Over Knowledge
Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
arXiv Detail & Related papers (2021-09-28T01:57:56Z) - UNIQORN: Unified Question Answering over RDF Knowledge Graphs and Natural Language Text [20.1784368017206]
Question answering over RDF data such as knowledge graphs has advanced greatly.
IR and NLP communities have addressed QA over text, but such systems barely utilize semantic data and knowledge.
This paper presents a method for complex questions that can seamlessly operate over a mixture of RDF datasets and text corpora.
arXiv Detail & Related papers (2021-08-19T10:50:52Z) - Efficient Contextualization using Top-k Operators for Question Answering
over Knowledge Graphs [24.520002698010856]
This work presents ECQA, an efficient method that prunes irrelevant parts of the search space using KB-aware signals.
Experiments with two recent QA benchmarks demonstrate the superiority of ECQA over state-of-the-art baselines with respect to answer presence, size of the search space, and runtimes.
arXiv Detail & Related papers (2021-08-19T10:06:14Z) - Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z)