No Answer is Better Than Wrong Answer: A Reflection Model for Document
Level Machine Reading Comprehension
- URL: http://arxiv.org/abs/2009.12056v2
- Date: Tue, 29 Sep 2020 09:29:57 GMT
- Authors: Xuguang Wang, Linjun Shou, Ming Gong, Nan Duan and Daxin Jiang
- Abstract summary: We propose Reflection Net, a novel approach that handles all answer types systematically, leveraging a two-step training procedure to identify no-answer and wrong-answer cases.
Our approach ranked first on both the long- and short-answer leaderboards, with F1 scores of 77.2 and 64.1, respectively.
- Score: 92.57688872599998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Natural Questions (NQ) benchmark set brings new challenges to Machine
Reading Comprehension: the answers are not only at different levels of
granularity (long and short), but also of richer types (including no-answer,
yes/no, single-span and multi-span). In this paper, we target at this challenge
and handle all answer types systematically. In particular, we propose a novel
approach called Reflection Net which leverages a two-step training procedure to
identify the no-answer and wrong-answer cases. Extensive experiments are
conducted to verify the effectiveness of our approach. At the time of
writing (May 20, 2020), our approach ranked first on both the long- and
short-answer leaderboards, with F1 scores of 77.2 and 64.1, respectively.
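The F1 scores reported above measure overlap between predicted and gold answers. NQ's official metric is defined over byte spans, so the following is only a simplified, SQuAD-style token-level sketch of how such an F1 can be computed:

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-level answer F1: harmonic mean of precision and recall
    over the multiset of whitespace tokens (a simplified illustration,
    not the official NQ byte-span metric)."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection counts how many tokens the two answers share.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("the long answer", "long answer")` gives precision 2/3 and recall 1, hence F1 = 0.8.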
Related papers
- Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering [45.82437926569949]
Multi-Hop Question Answering tasks present a significant challenge for large language models.
We introduce a novel generate-then-ground (GenGround) framework to solve a multi-hop question.
arXiv Detail & Related papers (2024-06-21T06:26:38Z)
- Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
arXiv Detail & Related papers (2023-07-08T04:32:17Z)
- Few-Shot Question Answering by Pretraining Span Selection [58.31911597824848]
We explore the more realistic few-shot setting, where only a few hundred training examples are available.
We show that standard span selection models perform poorly, highlighting the fact that current pretraining objectives are far removed from question answering.
Our findings indicate that careful design of pretraining schemes and model architecture can have a dramatic effect on performance in the few-shot settings.
arXiv Detail & Related papers (2021-01-02T11:58:44Z)
- A Clarifying Question Selection System from NTES_ALONG in ConvAI3 Challenge [8.656503175492375]
This paper presents the participation of NetEase Game AI Lab team for the ClariQ challenge at Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020.
The challenge calls for a complete conversational information retrieval system that can understand and generate clarification questions.
We propose a clarifying question selection system which consists of response understanding, candidate question recalling and clarifying question ranking.
arXiv Detail & Related papers (2020-10-27T11:22:53Z)
- Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension [127.3341842928421]
Natural Questions is a new challenging machine reading comprehension benchmark.
It has answers at two granularities: a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).
Existing methods treat these two sub-tasks individually during training while ignoring their dependencies.
We present a novel multi-grained machine reading comprehension framework that models documents according to their hierarchical nature.
arXiv Detail & Related papers (2020-05-12T14:20:09Z)
- RikiNet: Reading Wikipedia Pages for Natural Question Answering [101.505486822236]
We introduce a new model, called RikiNet, which reads Wikipedia pages for natural question answering.
On the Natural Questions dataset, a single RikiNet achieves 74.3 F1 and 57.9 F1 on long-answer and short-answer tasks.
An ensemble RikiNet obtains 76.1 F1 and 61.3 F1 on long-answer and short-answer tasks, achieving the best performance on the official NQ leaderboard.
arXiv Detail & Related papers (2020-04-30T03:29:21Z)
- Retrospective Reader for Machine Reading Comprehension [90.6069071495214]
Machine reading comprehension (MRC) is an AI challenge that requires a machine to determine the correct answers to questions based on a given passage.
When unanswerable questions are involved in the MRC task, a verification module, called a verifier, is required in addition to the encoder.
This paper explores better verifier designs for the MRC task with unanswerable questions.
arXiv Detail & Related papers (2020-01-27T11:14:34Z)
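Several of the papers above, including Reflection Net and the Retrospective Reader, hinge on deciding when to output no answer rather than a wrong one. A minimal thresholded-verifier sketch of that decision follows; the function name, scores, and threshold are illustrative assumptions, not any paper's actual architecture:

```python
def select_answer(span_scores: dict, na_score: float, threshold: float = 0.0):
    """Pick the best candidate span, or abstain if "no answer" wins.

    span_scores: candidate answer spans mapped to model scores.
    na_score:    the model's score for the "no answer" option.
    threshold:   margin the best span must beat na_score by to be kept.
    Returns the best span, or None to abstain.
    """
    if not span_scores:
        return None
    best_span = max(span_scores, key=span_scores.get)
    # Abstain unless the best span clearly beats the no-answer score:
    # returning nothing is better than returning a wrong span.
    if span_scores[best_span] - na_score <= threshold:
        return None
    return best_span
```

Raising the threshold trades recall for precision: the system answers less often but is wrong less often when it does.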
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.