Extract, Integrate, Compete: Towards Verification Style Reading
Comprehension
- URL: http://arxiv.org/abs/2109.05149v1
- Date: Sat, 11 Sep 2021 01:34:59 GMT
- Title: Extract, Integrate, Compete: Towards Verification Style Reading
Comprehension
- Authors: Chen Zhang, Yuxuan Lai, Yansong Feng and Dongyan Zhao
- Abstract summary: We present a new verification style reading comprehension dataset named VGaokao from Chinese Language tests of Gaokao.
To address the challenges in VGaokao, we propose a novel Extract-Integrate-Compete approach.
- Score: 66.2551168928688
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a new verification style reading comprehension
dataset named VGaokao from Chinese Language tests of Gaokao. Different from
existing efforts, the new dataset is originally designed for native speakers'
evaluation, thus requiring more advanced language understanding skills. To
address the challenges in VGaokao, we propose a novel Extract-Integrate-Compete
approach, which iteratively selects complementary evidence with a novel query
updating mechanism and adaptively distills supportive evidence, followed by a
pairwise competition to push models to learn the subtle difference among
similar text pieces. Experiments show that our methods outperform various
baselines on VGaokao with retrieved complementary evidence, while having the
merits of efficiency and explainability. Our dataset and code are released for
further research.
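The Extract-Integrate-Compete pipeline described above can be sketched in miniature. The snippet below is an illustrative toy, not the authors' released implementation: it uses plain word overlap in place of their learned similarity model, the query-updating step simply removes already-covered words so later picks favor complementary evidence, and the competition step is reduced to comparing two candidates under an arbitrary scoring function.

```python
# Toy sketch of an Extract-Integrate-Compete loop (illustrative only;
# the paper uses learned models rather than word overlap).

def overlap(query_words, sentence):
    """Count query words that appear in the sentence."""
    return len(query_words & set(sentence.lower().split()))

def extract_evidence(query, sentences, k=2):
    """Iteratively pick up to k sentences, updating the query each round."""
    remaining = set(query.lower().split())
    evidence, pool = [], list(sentences)
    for _ in range(k):
        if not remaining or not pool:
            break
        best = max(pool, key=lambda s: overlap(remaining, s))
        evidence.append(best)
        pool.remove(best)
        # Query update: drop covered words so the next pick is complementary.
        remaining -= set(best.lower().split())
    return evidence

def compete(score_fn, candidate_a, candidate_b):
    """Pairwise competition: keep whichever candidate the scorer prefers."""
    return candidate_a if score_fn(candidate_a) >= score_fn(candidate_b) else candidate_b
```

For example, given the query "when was the tower built", the first pick covers "the tower was" and the updated query then steers the second pick toward a sentence mentioning "built", rather than a redundant near-duplicate of the first.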
Related papers
- Persian Homograph Disambiguation: Leveraging ParsBERT for Enhanced Sentence Understanding with a Novel Word Disambiguation Dataset [0.0]
We introduce a novel dataset tailored for Persian homograph disambiguation.
Our work encompasses a thorough exploration of various embeddings, evaluated through the cosine similarity method.
We scrutinize the models' performance in terms of Accuracy, Recall, and F1 Score.
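The cosine similarity method mentioned above is the standard measure of angle between two embedding vectors; the snippet below shows only that formula, not the paper's pipeline or its ParsBERT embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```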
arXiv Detail & Related papers (2024-05-24T14:56:36Z)
- Improving Long Text Understanding with Knowledge Distilled from Summarization Model [17.39913210351487]
We propose our Gist Detector to leverage the gist detection ability of a summarization model.
Gist Detector first learns the gist detection knowledge distilled from a summarization model, and then produces gist-aware representations.
We evaluate our method on three different tasks: long document classification, distantly supervised open-domain question answering, and non-parallel text style transfer.
arXiv Detail & Related papers (2024-05-08T10:49:39Z)
- Retrieval is Accurate Generation [99.24267226311157]
We introduce a novel method that selects context-aware phrases from a collection of supporting documents.
Our model achieves the best performance and the lowest latency among several retrieval-augmented baselines.
arXiv Detail & Related papers (2024-02-27T14:16:19Z)
- Topic-to-essay generation with knowledge-based content selection [1.0625748132006634]
We propose a novel copy mechanism model with a content selection module that integrates rich semantic knowledge from the language model into the decoder.
Experimental results demonstrate that the proposed model can improve the generated text diversity by 35% to 59% compared to the state-of-the-art method.
arXiv Detail & Related papers (2024-02-26T02:14:42Z)
- Ensemble Transfer Learning for Multilingual Coreference Resolution [60.409789753164944]
A problem that frequently occurs when working with a non-English language is the scarcity of annotated training data.
We design a simple but effective ensemble-based framework that combines various transfer learning techniques.
We also propose a low-cost TL method that bootstraps coreference resolution models by utilizing Wikipedia anchor texts.
arXiv Detail & Related papers (2023-01-22T18:22:55Z)
- HanoiT: Enhancing Context-aware Translation via Selective Context [95.93730812799798]
Context-aware neural machine translation aims to use the document-level context to improve translation quality.
The irrelevant or trivial words may bring some noise and distract the model from learning the relationship between the current sentence and the auxiliary context.
We propose a novel end-to-end encoder-decoder model with a layer-wise selection mechanism to sift and refine the long document context.
arXiv Detail & Related papers (2023-01-17T12:07:13Z)
- Learning to Select Bi-Aspect Information for Document-Scale Text Content Manipulation [50.01708049531156]
We focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer.
In detail, the input is a set of structured records and a reference text for describing another recordset.
The output is a summary that accurately describes the partial content in the source recordset with the same writing style of the reference.
arXiv Detail & Related papers (2020-02-24T12:52:10Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format.
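The text-to-text format above means every task is expressed as a plain input string and output string, distinguished by a task prefix. The helper below is a minimal sketch of that idea; the prefixes follow the conventions reported for T5, but the exact strings and the function itself are illustrative.

```python
# Illustrative casting of task instances into a T5-style text-to-text
# input format (prefixes follow the T5 paper's conventions).

def to_text_to_text(task, text, src=None, tgt=None):
    """Return the prefixed input string for a given task instance."""
    if task == "translate":
        return f"translate {src} to {tgt}: {text}"
    if task == "summarize":
        return f"summarize: {text}"
    if task == "cola":  # grammatical-acceptability classification
        return f"cola sentence: {text}"
    raise ValueError(f"unknown task: {task}")
```

Because every task shares this string-in, string-out interface, a single model with a single loss can be trained across translation, summarization, and classification alike.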
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.