Few-Shot Complex Knowledge Base Question Answering via Meta
Reinforcement Learning
- URL: http://arxiv.org/abs/2010.15877v1
- Date: Thu, 29 Oct 2020 18:34:55 GMT
- Title: Few-Shot Complex Knowledge Base Question Answering via Meta
Reinforcement Learning
- Authors: Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi and Tongtong
Wu
- Abstract summary: Complex question-answering (CQA) involves answering complex natural-language questions on a knowledge base (KB).
The conventional neural program induction (NPI) approach exhibits uneven performance when the questions have different types.
This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions.
- Score: 55.08037694027792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex question-answering (CQA) involves answering complex natural-language
questions on a knowledge base (KB). However, the conventional neural program
induction (NPI) approach exhibits uneven performance when the questions have
different types, harboring inherently different characteristics, e.g.,
difficulty level. This paper proposes a meta-reinforcement learning approach to
program induction in CQA to tackle the potential distributional bias in
questions. Our method quickly and effectively adapts the meta-learned
programmer to new questions based on the most similar questions retrieved from
the training data. The meta-learned policy is then used to learn a good
programming policy, utilizing the trial trajectories and their rewards for
similar questions in the support set. Our method achieves state-of-the-art
performance on the CQA dataset (Saha et al., 2018) while using only five trial
trajectories for the top-5 retrieved questions in each support set, and
meta-training on tasks constructed from only 1% of the training set. We have
released our code at https://github.com/DevinJake/MRL-CQA.
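The adaptation step described above hinges on retrieving the most similar training questions to form a support set. A minimal sketch of such top-k retrieval, using simple bag-of-words cosine similarity (the paper does not specify this particular similarity measure; the question pool and function names below are illustrative):

```python
from collections import Counter
import math

def cosine_sim(a, b):
    """Cosine similarity between the bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def retrieve_support_set(query, training_questions, k=5):
    """Return the top-k training questions most similar to the query."""
    ranked = sorted(training_questions, key=lambda q: cosine_sim(query, q),
                    reverse=True)
    return ranked[:k]

# Hypothetical training pool for illustration.
pool = [
    "which river flows through the most countries",
    "who directed the movie with the highest budget",
    "which country has the most rivers",
    "when was the eiffel tower built",
]
support = retrieve_support_set("which river crosses the most countries", pool, k=2)
```

In the paper's setting, the trial trajectories and rewards recorded for these retrieved questions would then drive the fast adaptation of the meta-learned programmer.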
Related papers
- Generate then Select: Open-ended Visual Question Answering Guided by
World Knowledge [155.81786738036578]
Open-ended Visual Question Answering (VQA) task requires AI models to jointly reason over visual and natural language inputs.
Pre-trained Language Models (PLM) such as GPT-3 have been applied to the task and shown to be powerful world knowledge sources.
We propose RASO: a new VQA pipeline that deploys a generate-then-select strategy guided by world knowledge.
arXiv Detail & Related papers (2023-05-30T08:34:13Z)
- KEPR: Knowledge Enhancement and Plausibility Ranking for Generative
Commonsense Question Answering [11.537283115693432]
We propose a Knowledge Enhancement and Plausibility Ranking approach grounded on the Generate-Then-Rank pipeline architecture.
Specifically, we expand questions in terms of Wiktionary commonsense knowledge of keywords, and reformulate them with normalized patterns.
We develop an ELECTRA-based answer ranking model, where logistic regression is conducted during training, with the aim of approximating different levels of plausibility.
arXiv Detail & Related papers (2023-05-15T04:58:37Z)
- Improving Complex Knowledge Base Question Answering via
Question-to-Action and Question-to-Question Alignment [6.646646618666681]
We introduce an alignment-enhanced complex question answering framework, called ALCQA.
We train a question rewriting model to align the question and each action, and utilize a pretrained language model to implicitly align the question and KG artifacts.
We retrieve top-k similar question-answer pairs at the inference stage through question-to-question alignment and propose a novel reward-guided action sequence selection strategy.
arXiv Detail & Related papers (2022-12-26T08:12:41Z)
- Modern Question Answering Datasets and Benchmarks: A Survey [5.026863544662493]
Question Answering (QA) is one of the most important natural language processing (NLP) tasks.
It aims to use NLP technologies to generate an answer to a given question from a massive unstructured corpus.
In this paper, we investigate influential QA datasets that have been released in the era of deep learning.
arXiv Detail & Related papers (2022-06-30T05:53:56Z)
- Calculating Question Similarity is Enough: A New Method for KBQA Tasks [8.056701645706404]
This paper proposes a Corpus Generation - Retrieve Method (CGRM) with a pre-trained language model (PLM) and knowledge graph (KG).
Firstly, based on the mT5 model, we designed two new pre-training tasks: knowledge masked language modeling and question generation based on the paragraph.
Secondly, after preprocessing triples of knowledge graph with a series of rules, the kT5 model generates natural language QA pairs based on processed triples.
arXiv Detail & Related papers (2021-11-15T10:31:46Z)
- Improving Unsupervised Question Answering via Summarization-Informed
Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
- Learning to Ask Conversational Questions by Optimizing Levenshtein
Distance [83.53855889592734]
We introduce a Reinforcement Iterative Sequence Editing (RISE) framework that optimizes the minimum Levenshtein distance (MLD) through explicit editing actions.
RISE is able to pay attention to tokens that are related to conversational characteristics.
Experimental results on two benchmark datasets show that RISE significantly outperforms state-of-the-art methods.
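The minimum Levenshtein distance that RISE optimizes is the classic edit-distance measure: the fewest insertions, deletions, and substitutions needed to turn one sequence into another. A sketch of the standard dynamic-programming computation at the token level (this is the underlying distance only, not the RISE editing model itself; the example sentences are illustrative):

```python
def levenshtein(src, tgt):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn sequence src into sequence tgt (classic DP)."""
    m, n = len(src), len(tgt)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of src[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of tgt[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

# Token-level distance between a question and its conversational rewrite:
# substitute "him" -> "his", insert "brother" = 2 edits.
dist = levenshtein("what about him".split(),
                   "what about his brother".split())
```

Operating on token lists rather than characters keeps the edit actions aligned with the word-level rewrites a conversational QA model would perform.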
arXiv Detail & Related papers (2021-06-30T08:44:19Z)
- Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via
Alternate Meta-learning [56.771557756836906]
We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z)
- Unsupervised Multiple Choices Question Answering: Start Learning from
Basic Knowledge [75.7135212362517]
We study the possibility of almost unsupervised Multiple Choices Question Answering (MCQA).
The proposed method is shown to outperform the baseline approaches on RACE and even comparable with some supervised learning approaches on MC500.
arXiv Detail & Related papers (2020-10-21T13:44:35Z)