Procedural Reading Comprehension with Attribute-Aware Context Flow
- URL: http://arxiv.org/abs/2003.13878v1
- Date: Tue, 31 Mar 2020 00:06:29 GMT
- Title: Procedural Reading Comprehension with Attribute-Aware Context Flow
- Authors: Aida Amini, Antoine Bosselut, Bhavana Dalvi Mishra, Yejin Choi,
Hannaneh Hajishirzi
- Abstract summary: Procedural texts often describe processes that happen over entities.
We introduce an algorithm for procedural reading comprehension by translating the text into a general formalism.
- Score: 85.34405161075276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Procedural texts often describe processes (e.g., photosynthesis and cooking)
that happen over entities (e.g., light, food). In this paper, we introduce an
algorithm for procedural reading comprehension by translating the text into a
general formalism that represents processes as a sequence of transitions over
entity attributes (e.g., location, temperature). Leveraging pre-trained
language models, our model obtains entity-aware and attribute-aware
representations of the text by joint prediction of entity attributes and their
transitions. Our model dynamically obtains contextual encodings of the
procedural text, exploiting information encoded about previous and current
states to predict the transition of a given attribute, whose value is
identified either as a span of text or from a pre-defined set of classes.
Moreover, our model achieves state-of-the-art results on two procedural
reading comprehension datasets, namely ProPara and npn-cooking.
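To make the attribute-aware transition prediction concrete, here is a minimal sketch in Python. It is not the authors' released code: it assumes a BERT-style encoder from Hugging Face transformers, and the class name AttributeAwareReader, the transition label set, and the entity/attribute input format are illustrative assumptions only.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

TRANSITIONS = ["create", "destroy", "move", "none"]  # assumed transition label set

class AttributeAwareReader(nn.Module):
    """Sketch of joint transition classification and attribute-span prediction."""
    def __init__(self, encoder_name="bert-base-uncased", hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # one head classifies the attribute transition from previous + current state
        self.transition_head = nn.Linear(2 * hidden, len(TRANSITIONS))
        # two heads score span boundaries for attribute values expressed in the text
        self.span_start = nn.Linear(hidden, 1)
        self.span_end = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask, prev_state):
        # contextual token encodings of the current step; the entity and attribute
        # are prepended to the step text so the encoding is attribute-aware
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        current_state = h[:, 0]  # [CLS] vector as the current-state summary
        transition_logits = self.transition_head(
            torch.cat([prev_state, current_state], dim=-1))
        start_logits = self.span_start(h).squeeze(-1)
        end_logits = self.span_end(h).squeeze(-1)
        return transition_logits, start_logits, end_logits, current_state

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AttributeAwareReader()
step = "light [SEP] location [SEP] The light is absorbed by the leaf."
inputs = tokenizer(step, return_tensors="pt")
prev_state = torch.zeros(1, 768)  # placeholder; in practice, the previous step's encoding
logits, start, end, state = model(inputs["input_ids"], inputs["attention_mask"], prev_state)

Feeding the returned state back in as prev_state for the next step gives the dynamic, step-by-step contextual encoding described above; the actual model architecture and training objectives are detailed in the paper itself.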
Related papers
- Neural Sequence-to-Sequence Modeling with Attention by Leveraging Deep Learning Architectures for Enhanced Contextual Understanding in Abstractive Text Summarization [0.0]
This paper presents a novel framework for abstractive TS of single documents.
It integrates three dominant aspects: structural, semantic, and neural-based approaches.
Results indicate significant improvements in handling rare and OOV words.
arXiv Detail & Related papers (2024-04-08T18:33:59Z)
- Contextualized Diffusion Models for Text-Guided Image and Video Generation [67.69171154637172]
Conditional diffusion models have exhibited superior performance in high-fidelity text-guided visual generation and editing.
We propose a novel and general contextualized diffusion model (ContextDiff) by incorporating the cross-modal context encompassing interactions and alignments between text condition and visual sample.
We generalize our model to both DDPMs and DDIMs with theoretical derivations, and demonstrate the effectiveness of our model in evaluations with two challenging tasks: text-to-image generation, and text-to-video editing.
arXiv Detail & Related papers (2024-02-26T15:01:16Z)
- Text Revision by On-the-Fly Representation Optimization [76.11035270753757]
Current state-of-the-art methods formulate these tasks as sequence-to-sequence learning problems.
We present an iterative in-place editing approach for text revision, which requires no parallel data.
It achieves performance that is competitive with, and in some cases better than, state-of-the-art supervised methods on text simplification.
arXiv Detail & Related papers (2022-04-15T07:38:08Z)
- Syntax-Enhanced Pre-trained Model [49.1659635460369]
We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.
Existing methods utilize syntax of text either in the pre-training stage or in the fine-tuning stage, so that they suffer from discrepancy between the two stages.
We present a model that utilizes the syntax of text in both pre-training and fine-tuning stages.
arXiv Detail & Related papers (2020-12-28T06:48:04Z)
- Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze [6.6358421117698665]
We take as our starting point a state-of-the-art image captioning system.
We develop several model variants that exploit information from human gaze patterns recorded during language production.
Our experiments and analyses confirm that better descriptions can be obtained by exploiting gaze-driven attention.
arXiv Detail & Related papers (2020-11-09T17:45:32Z)
- Knowledge-Aware Procedural Text Understanding with Multi-Stage Training [110.93934567725826]
We focus on the task of procedural text understanding, which aims to comprehend such documents and track entities' states and locations during a process.
Two challenges, the difficulty of commonsense reasoning and data insufficiency, still remain unsolved.
We propose a novel KnOwledge-Aware proceduraL text understAnding (KOALA) model, which effectively leverages multiple forms of external knowledge.
arXiv Detail & Related papers (2020-09-28T10:28:40Z)
- Temporal Embeddings and Transformer Models for Narrative Text Understanding [72.88083067388155]
We present two approaches to narrative text understanding for character relationship modelling.
The temporal evolution of these relations is described by dynamic word embeddings, which are designed to learn semantic changes over time.
A supervised learning approach based on the state-of-the-art transformer model BERT is used instead to detect static relations between characters.
arXiv Detail & Related papers (2020-03-19T14:23:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.