Incremental Knowledge Based Question Answering
- URL: http://arxiv.org/abs/2101.06938v1
- Date: Mon, 18 Jan 2021 09:03:38 GMT
- Title: Incremental Knowledge Based Question Answering
- Authors: Yongqi Li, Wenjie Li, Liqiang Nie
- Abstract summary: We propose a new incremental KBQA learning framework that can progressively expand learning capacity as humans do.
Specifically, it comprises a margin-distilled loss and a collaborative exemplar selection method to overcome the catastrophic forgetting problem.
Comprehensive experiments demonstrate its effectiveness and efficiency when working with an evolving knowledge base.
- Score: 52.041815783025186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the past years, Knowledge-Based Question Answering (KBQA), which aims to
answer natural language questions using facts in a knowledge base, has been
well developed. Existing approaches often assume a static knowledge base.
However, in the real world, knowledge evolves over time. If we directly
apply a fine-tuning strategy to an evolving knowledge base, it suffers from
a serious catastrophic forgetting problem. In this paper, we propose a new
incremental KBQA learning framework that can progressively expand learning
capacity as humans do. Specifically, it comprises a margin-distilled loss and a
collaborative exemplar selection method, to overcome the catastrophic
forgetting problem by taking advantage of knowledge distillation. We reorganize
the SimpleQuestions dataset to evaluate the proposed incremental learning
solution to KBQA. Comprehensive experiments demonstrate its effectiveness
and efficiency when working with an evolving knowledge base.
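The abstract names a margin-distilled loss without defining it. As a hedged illustration only, the sketch below shows one plausible shape for such a loss under standard knowledge-distillation assumptions: a frozen copy of the previous model acts as teacher, a temperature-softened KL term preserves its candidate-score distribution on replayed exemplars, and a hinge term keeps the gold answer ranked above distractors by a margin. The class name MarginDistilledLoss, the hyperparameter values, and the way the two terms are combined are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

class MarginDistilledLoss(torch.nn.Module):
    """Hypothetical sketch of a margin-aware distillation loss for incremental KBQA.

    A frozen copy of the previous model (the teacher) regularizes the updated
    model (the student) on replayed exemplars, while a hinge term keeps the
    gold answer ranked above distractors by a margin.
    """

    def __init__(self, margin: float = 0.5, temperature: float = 2.0, alpha: float = 0.5):
        super().__init__()
        self.margin = margin            # assumed margin; the paper's value is not given here
        self.temperature = temperature  # standard KD temperature
        self.alpha = alpha              # assumed weight between the two terms

    def forward(self, student_scores, teacher_scores, gold_idx):
        # Knowledge distillation: keep the student's candidate-score
        # distribution close to the teacher's on replayed exemplars.
        t = self.temperature
        kd = F.kl_div(
            F.log_softmax(student_scores / t, dim=-1),
            F.softmax(teacher_scores / t, dim=-1),
            reduction="batchmean",
        ) * (t * t)

        # Margin ranking: the gold candidate should beat every distractor by `margin`.
        gold = student_scores.gather(1, gold_idx.unsqueeze(1))  # (B, 1)
        mask = torch.ones_like(student_scores, dtype=torch.bool)
        mask.scatter_(1, gold_idx.unsqueeze(1), False)          # exclude the gold itself
        rank = F.relu(self.margin - gold + student_scores)[mask].mean()
        return self.alpha * kd + (1.0 - self.alpha) * rank
```

In the full framework, a term like this would apply to the exemplars chosen by the collaborative exemplar selection method, alongside the usual task loss on questions drawn from the new knowledge-base snapshot.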
Related papers
- Stable Knowledge Editing in Large Language Models [68.98582618305679]
We introduce StableKE, a knowledge editing method based on knowledge augmentation rather than knowledge localization.
To overcome the expense of human labeling, StableKE integrates two automated knowledge augmentation strategies.
StableKE surpasses other knowledge editing methods, demonstrating stability for both edited knowledge and multi-hop knowledge.
arXiv Detail & Related papers (2024-02-20T14:36:23Z)
- Online Continual Knowledge Learning for Language Models [3.654507524092343]
Large Language Models (LLMs) serve as repositories of extensive world knowledge, enabling them to perform tasks such as question-answering and fact-checking.
Online Continual Knowledge Learning (OCKL) aims to manage the dynamic nature of world knowledge in LMs under real-time constraints.
arXiv Detail & Related papers (2023-11-16T07:31:03Z)
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models [59.771098292611846]
Large language models (LLMs) have shown superior performance without task-specific fine-tuning.
Retrieval-based methods can offer non-parametric world knowledge and improve the performance on tasks such as question answering.
Self-Knowledge guided Retrieval augmentation (SKR) is a simple yet effective method that lets LLMs refer to the questions they have previously encountered and adaptively call for external resources when dealing with new ones.
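The summary leaves the decision mechanism implicit. One natural reading, sketched below purely as an assumption-laden illustration, is a nearest-neighbour rule: embed the incoming question, compare it with questions the model has already encountered, and call the retriever only when the model's track record on similar questions is poor. The function name, threshold, and label encoding are hypothetical.

```python
import numpy as np

def should_retrieve(question_vec, known_q_vecs, known_labels, k=5, threshold=0.5):
    """Hypothetical nearest-neighbour variant of self-knowledge detection.

    known_q_vecs: embeddings of previously encountered questions, shape (N, d).
    known_labels: 1 if the model answered that question correctly without
    retrieval, else 0. Retrieve only when similar past questions went badly.
    """
    # Cosine similarity between the new question and all known questions.
    sims = known_q_vecs @ question_vec / (
        np.linalg.norm(known_q_vecs, axis=1) * np.linalg.norm(question_vec) + 1e-9
    )
    nearest = np.argsort(-sims)[:k]
    return known_labels[nearest].mean() < threshold

# Toy usage with random embeddings standing in for a real encoder.
rng = np.random.default_rng(0)
known = rng.normal(size=(100, 8))
labels = rng.integers(0, 2, size=100)
print(should_retrieve(rng.normal(size=8), known, labels))
```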
arXiv Detail & Related papers (2023-10-08T04:22:33Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on multi-modal implicit knowledge from vision-language pre-training models to mine its potential for knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- A Two-Stage Approach towards Generalization in Knowledge Base Question Answering [4.802205743713997]
We introduce a KBQA framework based on a two-stage architecture that explicitly separates semantic parsing from knowledge base interaction.
Our approach achieves comparable or state-of-the-art performance on LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata) and MetaQA (Wikimovies-KG).
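A minimal sketch of the two-stage idea, with a hard-coded pattern standing in for the learned semantic parser and an in-memory triple store standing in for DBpedia, Freebase, or Wikidata; every name here (parse_question, TripleStore, LogicalForm) is illustrative, not the paper's code.

```python
# Hypothetical two-stage KBQA pipeline: stage 1 parses the question into a
# KB-independent logical form; stage 2 grounds and executes it against a KB.
from dataclasses import dataclass

@dataclass
class LogicalForm:
    subject: str
    relation: str

def parse_question(question: str) -> LogicalForm:
    # Stage 1 (stand-in): a real system would use a trained semantic parser;
    # here we hard-code a single pattern purely for illustration.
    if question.lower().startswith("who directed "):
        return LogicalForm(subject=question[13:].rstrip("?"), relation="directed_by")
    raise ValueError("unsupported question pattern in this toy parser")

class TripleStore:
    # Stage 2 (stand-in): an in-memory KB in place of a real triple store.
    def __init__(self, triples):
        self.triples = triples

    def execute(self, lf: LogicalForm):
        return [o for s, r, o in self.triples
                if s == lf.subject and r == lf.relation]

kb = TripleStore([("Alien", "directed_by", "Ridley Scott")])
print(kb.execute(parse_question("Who directed Alien?")))  # ['Ridley Scott']
```

Keeping stage 1 free of KB-specific identifiers is what buys generalization: the same parser can, in principle, be reused across knowledge bases.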
arXiv Detail & Related papers (2021-11-10T17:45:33Z)
- Coarse-to-Careful: Seeking Semantic-related Knowledge for Open-domain Commonsense Question Answering [12.406729445165857]
It is common to utilize external knowledge to help machines answer questions that require background commonsense.
We propose a semantic-driven knowledge-aware QA framework, which controls the knowledge injection in a coarse-to-careful fashion.
arXiv Detail & Related papers (2021-07-04T10:56:36Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns QA sentence representations by considering a tight interaction between the external knowledge from the KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
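The summary mentions a customized GCN without detail; to make the propagation step concrete, here is the plain textbook GCN layer (symmetric-normalized adjacency with self-loops, one linear transform, ReLU), offered as a generic sketch rather than the paper's customized variant.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One generic GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency of the entity subgraph pulled from the KG.
    H: (n, d_in) entity embeddings; W: (d_in, d_out) learned weights.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy subgraph of 3 entities with 4-dim embeddings projected to 2 dims.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 4)
W = np.random.randn(4, 2)
print(gcn_layer(A, H, W).shape)  # (3, 2)
```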
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representation and reasoning: first, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models; second, symbolic knowledge encoded in knowledge bases.
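To make the two-stream idea concrete, the sketch below shows a generic late-fusion rule in which an implicit (transformer) stream and a symbolic (KB) stream score a shared answer vocabulary and the higher score wins; this is a common fusion pattern offered as illustration, not necessarily KRISP's exact mechanism.

```python
import torch

def fuse_answers(implicit_logits, symbolic_scores):
    """Generic late fusion of two knowledge streams (illustrative only).

    implicit_logits: (B, V) scores from a transformer trained end-to-end.
    symbolic_scores: (B, V) scores from explicit reasoning over a knowledge
    base, aligned to the same answer vocabulary V. The element-wise max lets
    either stream assert an answer it is confident about.
    """
    return torch.maximum(implicit_logits, symbolic_scores)

implicit = torch.randn(2, 10)
symbolic = torch.randn(2, 10)
print(fuse_answers(implicit, symbolic).argmax(dim=1))  # predicted answer ids
```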
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- Question Answering over Knowledge Base using Language Model Embeddings [0.0]
This paper focuses on using pre-trained language model embeddings for the Knowledge Base Question Answering task.
We fine-tuned these embeddings with a two-way attention mechanism from the knowledge base to the asked question.
Our method is based on a simple Convolutional Neural Network architecture with a Multi-Head Attention mechanism to represent the asked question.
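As a rough illustration of the kind of question encoder described, the PyTorch sketch below combines a 1-D convolution over token embeddings (local n-gram features) with multi-head self-attention (global context); the layer sizes, vocabulary size, and mean-pooling choice are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """Hypothetical CNN + multi-head attention question encoder."""

    def __init__(self, vocab_size=30000, d_model=128, kernel=3, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.conv = nn.Conv1d(d_model, d_model, kernel, padding=kernel // 2)
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)

    def forward(self, token_ids):                  # (B, T)
        x = self.embed(token_ids)                  # (B, T, d_model)
        # Convolution over the token dimension captures local n-gram features.
        c = self.conv(x.transpose(1, 2)).transpose(1, 2).relu()
        # Self-attention mixes in global context across the question.
        out, _ = self.attn(c, c, c)
        return out.mean(dim=1)                     # (B, d_model) pooled question vector

enc = QuestionEncoder()
q = torch.randint(0, 30000, (2, 12))               # two toy questions, 12 tokens each
print(enc(q).shape)                                # torch.Size([2, 128])
```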
arXiv Detail & Related papers (2020-10-17T22:59:34Z)