Unsupervised Multiple Choices Question Answering: Start Learning from Basic Knowledge
- URL: http://arxiv.org/abs/2010.11003v2
- Date: Mon, 1 Nov 2021 08:40:08 GMT
- Title: Unsupervised Multiple Choices Question Answering: Start Learning from Basic Knowledge
- Authors: Chi-Liang Liu and Hung-yi Lee
- Abstract summary: We study the possibility of almost unsupervised Multiple Choices Question Answering (MCQA).
The proposed method is shown to outperform the baseline approaches on RACE and even comparable with some supervised learning approaches on MC500.
- Score: 75.7135212362517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the possibility of almost unsupervised Multiple Choices Question Answering (MCQA). Starting from very basic knowledge, the MCQA model knows that some choices have higher probabilities of being correct than others. This information, though very noisy, guides the training of an MCQA model. The proposed method is shown to outperform the baseline approaches on RACE and to be comparable with some supervised learning approaches on MC500.
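The abstract describes a training signal that comes from a noisy prior over the choices rather than from gold labels. The following is a minimal, hedged sketch of that idea only, not the authors' implementation: the random feature vectors stand in for encoded (passage, question, choice) triples, and `heuristic_scores` stands in for whatever "basic knowledge" prior is available.

```python
# Minimal sketch (assumptions, not the authors' code): train a choice scorer
# against a noisy soft distribution over options instead of gold answers.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

NUM_QUESTIONS, NUM_CHOICES, DIM = 256, 4, 32

# Hypothetical encoded (passage, question, choice) triples: [questions, choices, dim].
features = torch.randn(NUM_QUESTIONS, NUM_CHOICES, DIM)

# Noisy prior from "basic knowledge" (e.g. a cheap lexical-overlap heuristic),
# normalised to a soft distribution per question; it is noisy, not one-hot.
heuristic_scores = torch.rand(NUM_QUESTIONS, NUM_CHOICES)
noisy_prior = F.softmax(heuristic_scores / 0.5, dim=-1)

scorer = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)

for epoch in range(5):
    logits = scorer(features).squeeze(-1)      # [questions, choices]
    log_probs = F.log_softmax(logits, dim=-1)
    # Cross-entropy against the noisy prior: the model is nudged toward
    # choices the prior considers more probable, without any gold labels.
    loss = -(noisy_prior * log_probs).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

# At inference time the predicted answer is simply the highest-scoring choice.
prediction = scorer(features[:1]).squeeze(-1).argmax(dim=-1)
```

In practice the feature vectors would come from a pretrained encoder and the prior from the paper's "basic knowledge"; the sketch only shows how such a noisy distribution can serve as the training target.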
Related papers
- Differentiating Choices via Commonality for Multiple-Choice Question Answering [54.04315943420376]
In multiple-choice question answering, the set of candidate choices can provide valuable clues for choosing the right answer.
Existing models often rank each choice separately, overlooking the context provided by other choices.
We propose a novel model, called DCQA, that differentiates choices by identifying and eliminating their commonality.
arXiv Detail & Related papers (2024-08-21T12:05:21Z)
- Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question? [15.308093827770474]
We probe whether large language models (LLMs) can perform multiple-choice question answering (MCQA) with choices-only prompts (see the sketch after this list).
This prompt bests a majority baseline in 11/12 cases, with up to 0.33 accuracy gain.
We conduct an in-depth, black-box analysis on memorization, choice dynamics, and question inference.
arXiv Detail & Related papers (2024-02-19T19:38:58Z)
- Improving Machine Reading Comprehension with Single-choice Decision and Transfer Learning [18.81256990043713]
Multi-choice Machine Reading Comprehension (MMRC) aims to select the correct answer from a set of options based on a given passage and question.
It is non-trivial to transfer knowledge from other MRC tasks such as SQuAD and DREAM.
We recast the multi-choice task as single-choice by training a binary classifier to distinguish whether a certain answer is correct.
arXiv Detail & Related papers (2020-11-06T11:33:29Z)
- Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning [55.08037694027792]
Complex question answering (CQA) involves answering complex natural-language questions on a knowledge base (KB).
The conventional neural program induction (NPI) approach exhibits uneven performance when the questions have different types.
This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions.
arXiv Detail & Related papers (2020-10-29T18:34:55Z)
- Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning [56.771557756836906]
We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z)
- Unsupervised Multi-hop Question Answering by Question Generation [108.61653629883753]
MQA-QG is an unsupervised framework that can generate human-like multi-hop training data.
Using only generated training data, we can train a competent multi-hop QA model that achieves 61% and 83% of the supervised learning performance.
arXiv Detail & Related papers (2020-10-23T19:13:47Z)
- MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for Answer Selection [59.95429407899612]
We propose a novel reinforcement learning based multi-step ranking model, named MS-Ranker.
We explicitly consider the potential correctness of candidates and update the evidence with a gating mechanism.
Our model significantly outperforms existing methods that do not rely on external resources.
arXiv Detail & Related papers (2020-10-10T10:36:58Z)
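For the choices-only probe referenced above ("Artifacts or Abduction"), the sketch below shows the shape of such an experiment. It is an assumption-laden illustration, not the paper's code: the toy items, the prompt template, and the placeholder `query_model` heuristic (pick the longest option) are all hypothetical, and a real probe would swap in an actual LLM call and a full benchmark.

```python
# Minimal sketch (assumptions, not the paper's code) of a choices-only MCQA
# probe: the model sees only the answer options, never the question or
# passage, and its accuracy is compared against a majority-class baseline.
from collections import Counter

# Hypothetical toy items: each has answer choices and a gold answer index.
items = [
    {"choices": ["Paris", "London", "Berlin", "Madrid"], "answer": 0},
    {"choices": ["two", "four", "six", "eight"], "answer": 1},
    {"choices": ["a river", "a mountain range", "a desert", "an ocean"], "answer": 3},
]

def choices_only_prompt(choices):
    """Build a prompt that withholds the question entirely."""
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(choices))
    return f"Pick the most likely correct option:\n{options}\nAnswer:"

def query_model(prompt, num_choices):
    """Placeholder for an LLM call: a trivial longest-option heuristic so the
    sketch runs without any model. Swap in a real model to probe it."""
    option_lines = [line for line in prompt.splitlines() if line.startswith("(")]
    return max(range(num_choices), key=lambda i: len(option_lines[i]))

correct = sum(
    query_model(choices_only_prompt(it["choices"]), len(it["choices"])) == it["answer"]
    for it in items
)
_, majority_count = Counter(it["answer"] for it in items).most_common(1)[0]

print(f"choices-only accuracy: {correct / len(items):.2f}")
print(f"majority baseline:     {majority_count / len(items):.2f}")
```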
This list is automatically generated from the titles and abstracts of the papers in this site.