Tracing Origins: Coref-aware Machine Reading Comprehension
- URL: http://arxiv.org/abs/2110.07961v1
- Date: Fri, 15 Oct 2021 09:28:35 GMT
- Title: Tracing Origins: Coref-aware Machine Reading Comprehension
- Authors: Baorong Huang, Zhuosheng Zhang, Hai Zhao
- Abstract summary: We imitated the human reading process in connecting anaphoric expressions and leveraged the coreference information to enhance the word embeddings from the pre-trained model.
We demonstrated that explicitly incorporating the coreference information in the fine-tuning stage performed better than incorporating it when training a pre-trained language model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine reading comprehension is a heavily-studied research and test field
for evaluating new pre-trained models and fine-tuning strategies, and recent
studies have enriched the pre-trained models with syntactic, semantic and other
linguistic information to improve the performance of the model. In this paper,
we imitated the human reading process in connecting anaphoric expressions
and explicitly leveraged the coreference information to enhance the word
embeddings from the pre-trained model, in order to highlight the coreference
mentions that must be identified for coreference-intensive question answering
in QUOREF, a relatively new dataset that is specifically designed to evaluate
the coreference-related performance of a model. We used an additional BERT
layer to focus on the coreference mentions, and a Relational Graph
Convolutional Network to model the coreference relations. We demonstrated
that explicitly incorporating the coreference information in the fine-tuning
stage performed better than incorporating it when training a pre-trained
language model.
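The paper's R-GCN component propagates information along coreference edges so that anaphoric mentions absorb features from their antecedents. The sketch below is an illustrative, simplified single R-GCN layer in NumPy (not the authors' implementation); the token embeddings, edge layout, and weight shapes are invented for the toy example:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def rgcn_layer(H, adj_per_relation, W_rel, W_self):
    """One simplified R-GCN layer:
    H' = ReLU(H @ W_self + sum_r D_r^{-1} A_r @ H @ W_r),
    where A_r is the adjacency matrix for relation r (here, coreference
    edges between mention tokens) and D_r normalizes by node degree."""
    out = H @ W_self  # self-loop transformation
    for A, W in zip(adj_per_relation, W_rel):
        deg = A.sum(axis=1, keepdims=True)
        norm = np.divide(A, np.where(deg > 0, deg, 1.0))  # row-normalize
        out += norm @ H @ W  # aggregate neighbors under relation r
    return relu(out)

# Toy example: 5 token embeddings (dim 4); one "coref" relation linking
# token 0 (antecedent) with tokens 2 and 4 (anaphoric mentions).
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))
A_coref = np.zeros((5, 5))
A_coref[0, 2] = A_coref[2, 0] = 1.0
A_coref[0, 4] = A_coref[4, 0] = 1.0
W_rel = [rng.normal(size=(4, 4)) * 0.1]
W_self = np.eye(4)
H_out = rgcn_layer(H, [A_coref], W_rel, W_self)
print(H_out.shape)  # (5, 4)
```

Tokens with no coreference edges only pass through the self-loop transformation, so the layer leaves non-mention tokens largely untouched while mixing features across coreferent mentions.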
Related papers
- Dwell in the Beginning: How Language Models Embed Long Documents for Dense Retrieval [31.9252824152673]
We build on previous research that demonstrated loss of information in the middle of input sequences for causal language models.
We examine positional biases at various stages of training for an encoder-decoder model, including language model pre-training, contrastive pre-training, and contrastive fine-tuning.
arXiv Detail & Related papers (2024-04-05T15:16:16Z)
- RAVEN: In-Context Learning with Retrieval-Augmented Encoder-Decoder Language Models [57.12888828853409]
RAVEN is a model that combines retrieval-augmented masked language modeling and prefix language modeling.
Fusion-in-Context Learning enables the model to leverage more in-context examples without requiring additional training.
Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning.
arXiv Detail & Related papers (2023-08-15T17:59:18Z)
- Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z)
- How much pretraining data do language models need to learn syntax? [12.668478784932878]
Transformers-based pretrained language models achieve outstanding results in many well-known NLU benchmarks.
We study the impact of pretraining data size on the knowledge of the models using RoBERTa.
arXiv Detail & Related papers (2021-09-07T15:51:39Z)
- Layer-wise Analysis of a Self-supervised Speech Representation Model [26.727775920272205]
Self-supervised learning approaches have been successful for pre-training speech representation models.
Not much has been studied about the type or extent of information encoded in the pre-trained representations themselves.
arXiv Detail & Related papers (2021-07-10T02:13:25Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Learning Better Sentence Representation with Syntax Information [0.0]
We propose a novel approach to combining syntax information with a pre-trained language model.
Our model achieves 91.2% accuracy on the sentence completion task, outperforming the baseline model by 37.8%.
arXiv Detail & Related papers (2021-01-09T12:15:08Z)
- Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
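Dynamic Blocking, as described in that paper, prevents verbatim copying during decoding: when the model emits a token that occurs in the source sentence, the token that immediately follows it in the source is blocked at the next step, forcing the model to rephrase. A minimal, simplified sketch of that constraint (token IDs and function name are invented for illustration):

```python
def blocked_next_tokens(source_ids, generated_ids):
    """Simplified Dynamic Blocking: if the most recently generated token
    matches source token S[i], block S[i+1] at the next decoding step so
    the model cannot reproduce a source bigram verbatim."""
    if not generated_ids:
        return set()
    last = generated_ids[-1]
    blocked = set()
    for i, tok in enumerate(source_ids[:-1]):
        if tok == last:
            blocked.add(source_ids[i + 1])
    return blocked

# Source bigrams (2,3) and (2,5) exist, and the last generated token is 2,
# so tokens 3 and 5 are blocked at the next step.
source = [1, 2, 3, 2, 5]
print(blocked_next_tokens(source, [7, 2]))  # {3, 5}
```

In the full algorithm the blocked set is sampled rather than applied deterministically, so the model can still copy occasionally; this sketch shows only the core bigram-blocking rule.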
arXiv Detail & Related papers (2020-10-24T11:55:28Z)
- Enhancing Dialogue Generation via Multi-Level Contrastive Learning [57.005432249952406]
We propose a multi-level contrastive learning paradigm to model the fine-grained quality of the responses with respect to the query.
A Rank-aware (RC) network is designed to construct the multi-level contrastive optimization objectives.
We build a Knowledge Inference (KI) component to capture the keyword knowledge from the reference during training and exploit such information to encourage the generation of informative words.
arXiv Detail & Related papers (2020-09-19T02:41:04Z)