Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA?
- URL: http://arxiv.org/abs/2106.01561v1
- Date: Thu, 3 Jun 2021 03:04:06 GMT
- Title: Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA?
- Authors: Cunxiang Wang and Pai Liu and Yue Zhang
- Abstract summary: We construct a new dataset of closed-book QA using SQuAD, and investigate the performance of BART.
Experiments show that it is challenging for BART to remember training facts with high precision, and also challenging to answer closed-book questions even when the relevant knowledge is retained.
- Score: 10.301990059068377
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has investigated the interesting question of whether pre-trained language models (PLMs) can serve as knowledge bases for answering open questions. However, existing work is limited to small benchmarks with high test-train overlap. We construct a new dataset of closed-book QA using SQuAD and investigate the performance of BART. Experiments show that it is challenging for BART to remember training facts with high precision, and also challenging to answer closed-book questions even when the relevant knowledge is retained. Some promising directions are found, including decoupling the knowledge memorization process from the QA fine-tuning process, and forcing the model to recall relevant knowledge when answering questions.
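The decoupling direction in the abstract can be pictured as two sequential training loops over the same seq2seq model: first "memorize" the passages, then fine-tune for question answering without passages in the input. Below is a minimal, hypothetical sketch using HuggingFace transformers and facebook/bart-base; the toy data, the title-to-passage reconstruction objective, and all hyperparameters are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of the decoupled two-stage recipe: stage 1 writes facts into the
# parameters via reconstruction; stage 2 fine-tunes the same weights on
# (question -> answer) pairs with no passage in the input.
import torch
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def seq2seq_step(source: str, target: str) -> None:
    """One gradient step mapping source text to target text."""
    batch = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Stage 1: knowledge memorization -- reproduce each passage (here keyed
# by its article title), so the facts live in the weights, not the input.
for title, passage in [("Normandy", "Normandy is a region of France.")]:
    seq2seq_step(title, passage)

# Stage 2: closed-book QA fine-tuning -- the question alone is the input.
for question, answer in [("In what country is Normandy located?", "France")]:
    seq2seq_step(question, answer)
```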
Related papers
- CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering [33.89497991289916]
We propose CoTKR (Chain-of-Thought Enhanced Knowledge Rewriting), a novel rewriting method that generates reasoning traces and the corresponding knowledge in an interleaved manner.
We conduct experiments using various Large Language Models (LLMs) across several Knowledge Graph Question Answering (KGQA) benchmarks.
arXiv Detail & Related papers (2024-09-29T16:08:45Z) - Long-form Question Answering: An Iterative Planning-Retrieval-Generation
Approach [28.849548176802262]
Long-form question answering (LFQA) poses a challenge as it involves generating detailed answers in the form of paragraphs.
We propose an LFQA model with iterative Planning, Retrieval, and Generation.
We find that our model outperforms the state-of-the-art models on various textual and factual metrics for the LFQA task.
arXiv Detail & Related papers (2023-11-15T21:22:27Z) - Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose GATHER (Graph pATH rankER), a new retriever-ranker paradigm for KB-VQA.
Specifically, it comprises graph construction, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
arXiv Detail & Related papers (2023-10-12T09:12:50Z) - RECKONING: Reasoning through Dynamic Knowledge Encoding [51.076603338764706]
We show that language models can answer questions by reasoning over knowledge provided as part of the context.
However, when that context also contains irrelevant information, the model fails to distinguish the knowledge that is necessary to answer the question.
We propose teaching the model to reason more robustly by folding the provided contextual knowledge into the model's parameters; a toy sketch of this inner-loop idea appears after this list.
arXiv Detail & Related papers (2023-05-10T17:54:51Z) - Do I have the Knowledge to Answer? Investigating Answerability of
Knowledge Base Questions [25.13991044303459]
We create GrailQAbility, a new benchmark KBQA dataset with unanswerability.
Experimenting with three state-of-the-art KBQA models, we find that all three models suffer a drop in performance.
This underscores the need for further research in making KBQA systems robust to unanswerability.
arXiv Detail & Related papers (2022-12-20T12:00:26Z) - Context Generation Improves Open Domain Question Answering [102.34183939011352]
We propose a two-stage, closed-book QA framework that employs a coarse-to-fine approach to extract relevant knowledge and answer a question; a toy sketch of this two-stage recipe also appears after this list.
Our method better exploits the knowledge stored in pretrained LMs without adding extra learnable parameters or requiring finetuning.
arXiv Detail & Related papers (2022-10-12T16:00:50Z) - Few-Shot Complex Knowledge Base Question Answering via Meta
Reinforcement Learning [55.08037694027792]
Complex question answering (CQA) involves answering complex natural-language questions over a knowledge base (KB).
The conventional neural program induction (NPI) approach exhibits uneven performance across different question types.
This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions.
arXiv Detail & Related papers (2020-10-29T18:34:55Z) - Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via
Alternate Meta-learning [56.771557756836906]
We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z) - A Survey on Complex Question Answering over Knowledge Base: Recent
Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.