NLQxform: A Language Model-based Question to SPARQL Transformer
- URL: http://arxiv.org/abs/2311.07588v1
- Date: Wed, 8 Nov 2023 21:41:45 GMT
- Title: NLQxform: A Language Model-based Question to SPARQL Transformer
- Authors: Ruijie Wang, Zhiruo Zhang, Luca Rossetto, Florian Ruosch, Abraham
Bernstein
- Abstract summary: This paper presents a question-answering (QA) system called NLQxform.
NLQxform allows users to express their complex query intentions in natural language questions.
A transformer-based language model, i.e., BART, is employed to translate questions into standard SPARQL queries.
- Score: 8.698533396991554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, scholarly data has grown dramatically in terms of both scale
and complexity. It has become increasingly challenging to retrieve information
from scholarly knowledge graphs that include large-scale heterogeneous
relationships, such as authorship, affiliation, and citation, between various
types of entities, e.g., scholars, papers, and organizations. As part of the
Scholarly QALD Challenge, this paper presents a question-answering (QA) system
called NLQxform, which provides an easy-to-use natural language interface to
facilitate accessing scholarly knowledge graphs. NLQxform allows users to
express their complex query intentions in natural language questions. A
transformer-based language model, i.e., BART, is employed to translate
questions into standard SPARQL queries, which can be evaluated to retrieve the
required information. According to the public leaderboard of the Scholarly QALD
Challenge at ISWC 2023 (Task 1: DBLP-QUAD - Knowledge Graph Question Answering
over DBLP), NLQxform achieved an F1 score of 0.85 and ranked first on the QA
task, demonstrating the competitiveness of the system.
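The translation step at the heart of NLQxform can be pictured with a few lines of Hugging Face code. The sketch below is illustrative only: "facebook/bart-base" is a placeholder checkpoint, not the fine-tuned NLQxform model, so it will not emit valid SPARQL without training on question-SPARQL pairs.

```python
# Minimal sketch of BART-based question-to-SPARQL translation.
# NOTE: the checkpoint below is a placeholder, not the NLQxform model.
from transformers import BartForConditionalGeneration, BartTokenizerFast

MODEL_NAME = "facebook/bart-base"  # assumed stand-in; a fine-tuned model is required
tokenizer = BartTokenizerFast.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

def question_to_sparql(question: str) -> str:
    """Translate a natural language question into a SPARQL query string."""
    inputs = tokenizer(question, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=256, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(question_to_sparql("Which papers did Abraham Bernstein publish in 2023?"))
```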
Related papers
- Integrating SPARQL and LLMs for Question Answering over Scholarly Data Sources [0.0]
This paper describes a methodology that combines SPARQL queries, divide-and-conquer algorithms, and BERT-base-cased-SQuAD2 predictions.
The approach, evaluated with Exact Match and F-score metrics, shows promise for improving QA accuracy and efficiency in scholarly contexts.
arXiv Detail & Related papers (2024-09-11T14:50:28Z)
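Both the approach above and NLQxform ultimately evaluate generated SPARQL against an endpoint. Below is a minimal sketch of that execution step with SPARQLWrapper; the endpoint URL and the example query are assumptions, not details taken from either paper.

```python
# Minimal sketch of evaluating a generated SPARQL query against a scholarly
# endpoint. The endpoint URL and example query are illustrative assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://sparql.dblp.org/sparql"  # assumed public DBLP endpoint

def run_sparql(query: str) -> list[dict]:
    """Send a SPARQL query and return the JSON result bindings."""
    client = SPARQLWrapper(ENDPOINT)
    client.setQuery(query)
    client.setReturnFormat(JSON)
    return client.query().convert()["results"]["bindings"]

# Illustrative query: fetch a few triples to confirm the endpoint responds.
for row in run_sparql("SELECT * WHERE { ?s ?p ?o } LIMIT 5"):
    print(row)
```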
- MST5 -- Multilingual Question Answering over Knowledge Graphs [1.6470999044938401]
Knowledge Graph Question Answering (KGQA) simplifies querying vast amounts of knowledge stored in a graph-based model using natural language.
Existing multilingual KGQA systems face challenges in achieving performance comparable to English systems.
We propose a simplified approach to enhance multilingual KGQA systems by incorporating linguistic context and entity information directly into the processing pipeline of a language model.
arXiv Detail & Related papers (2024-07-08T15:37:51Z)
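One plausible reading of MST5's "linguistic context and entity information" is serializing both into the model's input string before generation. The tag scheme below is a hedged assumption, not MST5's documented format.

```python
# Sketch of packing a question, pre-linked entities, and linguistic context
# into a single seq2seq input, in the spirit of MST5. The "question:/entities:/
# pos:" tag scheme is an assumption, not the paper's actual format.
def build_kgqa_input(question: str, entities: list[str], pos_tags: list[str]) -> str:
    """Serialize question, entity labels, and POS tags into one input string."""
    return (
        f"question: {question} "
        f"entities: {'; '.join(entities)} "
        f"pos: {' '.join(pos_tags)}"
    )

print(build_kgqa_input(
    "Wer ist der Autor von Faust?",              # non-English input question
    ["Faust (Q5994)", "author (P50)"],           # hypothetical linked Wikidata items
    ["PRON", "AUX", "DET", "NOUN", "ADP", "PROPN"],
))
```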
- InfoLossQA: Characterizing and Recovering Information Loss in Text Simplification [60.10193972862099]
This work proposes a framework to characterize and recover simplification-induced information loss in the form of question-and-answer pairs.
QA pairs are designed to help readers deepen their knowledge of a text.
arXiv Detail & Related papers (2024-01-29T19:00:01Z)
- In-Context Learning for Knowledge Base Question Answering for Unmanned Systems based on Large Language Models [43.642717344626355]
We focus on the CCKS2023 Competition of Question Answering with Knowledge Graph Inference for Unmanned Systems.
Inspired by the recent success of large language models (LLMs) like ChatGPT and GPT-3 in many QA tasks, we propose a ChatGPT-based Cypher Query Language (CQL) generation framework.
With our ChatGPT-based CQL generation framework, we achieved second place in the CCKS 2023 Question Answering with Knowledge Graph Inference for Unmanned Systems competition.
arXiv Detail & Related papers (2023-11-06T08:52:11Z) - DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain
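The framework above reduces to prompting a chat model to emit Cypher directly. Here is a minimal sketch of that pattern with the openai client; the prompt wording, model name, and graph schema are assumptions, not the prompts used in the CCKS 2023 system.

```python
# Sketch of LLM-based Cypher (CQL) generation. Prompt, model name, and schema
# are illustrative assumptions, not reproductions of the CCKS 2023 system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def question_to_cypher(question: str, schema_hint: str) -> str:
    """Ask a chat model to translate a question into a single Cypher query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Translate the user's question into one Cypher query "
                        "for this graph schema:\n" + schema_hint},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(question_to_cypher(
    "Which sensors are mounted on drone D-42?",
    "(:Drone {id})-[:HAS_SENSOR]->(:Sensor {name})",  # hypothetical schema
))
```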
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z) - Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm for KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it comprises graph construction, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
arXiv Detail & Related papers (2022-12-12T09:49:02Z)
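GATHER's retrieve-then-rank idea (build a graph, prune it, rank candidate answer paths) can be illustrated with a toy ranker over networkx; the overlap score below is a stand-in for the paper's learned path ranker.

```python
# Toy path-level ranking over a small knowledge graph, loosely in the spirit
# of GATHER. The token-overlap score is a stand-in for a learned ranker.
import networkx as nx

G = nx.DiGraph()
G.add_edge("topic_entity", "painter", relation="created_by")
G.add_edge("painter", "Leonardo da Vinci", relation="is")
G.add_edge("topic_entity", "museum", relation="located_in")
G.add_edge("museum", "Louvre", relation="is")

def score(path: list[str], question: str) -> int:
    """Rank a path by naive token overlap with the question."""
    tokens = set(question.lower().split())
    return sum(any(w in tokens for w in node.lower().split()) for node in path)

question = "Which painter created this artwork?"
candidates = []
for answer in ("Leonardo da Vinci", "Louvre"):
    candidates.extend(nx.all_simple_paths(G, "topic_entity", answer, cutoff=3))

best = max(candidates, key=lambda p: score(p, question))
print(best)  # the top-ranked path doubles as an explanatory inference path
```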
- BigText-QA: Question Answering over a Large-Scale Hybrid Knowledge Graph [23.739432128095107]
BigText-QA is able to answer questions based on a structured knowledge graph.
Our results demonstrate that BigText-QA outperforms DrQA, a neural-network-based QA system, and achieves results competitive with QUEST, a graph-based unsupervised QA system.
arXiv Detail & Related papers (2022-12-12T09:49:02Z)
- A Chinese Multi-type Complex Questions Answering Dataset over Wikidata [45.31495982252219]
Complex Knowledge Base Question Answering has been a popular area of research over the past decade.
Recent public datasets have led to encouraging results in this field, but are mostly limited to English.
Few state-of-the-art KBQA models are trained on Wikidata, one of the most popular real-world knowledge bases.
We propose CLC-QuAD, the first large-scale complex Chinese semantic parsing dataset over Wikidata, to address these challenges.
arXiv Detail & Related papers (2021-11-11T07:39:16Z)
- Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
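The NER part of the transformation above can be pictured with spaCy: replace a sentence-initial named entity in a summary sentence with a wh-word. This is a toy sketch covering one easy case; the actual system combines this with dependency parsing and semantic role labeling.

```python
# Toy sketch of NER-driven question generation from a declarative sentence,
# loosely following the summarization-informed QG recipe. Real rules also use
# dependency parses and semantic roles; this handles one easy case only.
import spacy

WH = {"PERSON": "Who", "ORG": "What organization", "GPE": "Where", "DATE": "When"}

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def sentence_to_question(sentence: str) -> str | None:
    """Turn 'ENTITY did X.' into 'Who did X?' when the entity starts the sentence."""
    doc = nlp(sentence)
    for ent in doc.ents:
        wh = WH.get(ent.label_)
        if wh and ent.start == 0:  # only handle sentence-initial subjects here
            rest = sentence[ent.end_char:].strip().rstrip(".")
            return f"{wh} {rest}?"
    return None

print(sentence_to_question("Angela Merkel visited Paris on Monday."))
# -> Who visited Paris on Monday?
```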
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
- Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
arXiv Detail & Related papers (2020-04-24T17:57:45Z)
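The "simple template" recipe above is essentially string surgery on a retrieved sentence. Below is a minimal sketch with one hypothetical wh-substitution template; the paper's own templates may differ.

```python
# Minimal sketch of template-based question generation: replace the answer
# span in a retrieved sentence with a wh-word. The template is a hypothetical
# example, not necessarily one of the templates used in the paper.
def make_question(sentence: str, answer: str, wh: str = "What") -> str:
    """Replace the answer span with a wh-word to form a cloze-style question."""
    if answer not in sentence:
        raise ValueError("answer span must occur in the retrieved sentence")
    return sentence.replace(answer, wh, 1).rstrip(".") + "?"

# A retrieved sentence plus a chosen answer span yields one pseudo-training pair.
question = make_question("Paris is the capital of France.", "Paris")
print(question)  # -> What is the capital of France?
```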