Commonsense Evidence Generation and Injection in Reading Comprehension
- URL: http://arxiv.org/abs/2005.05240v1
- Date: Mon, 11 May 2020 16:31:08 GMT
- Title: Commonsense Evidence Generation and Injection in Reading Comprehension
- Authors: Ye Liu, Tao Yang, Zeyu You, Wei Fan and Philip S. Yu
- Abstract summary: We propose a Commonsense Evidence Generation and Injection framework in reading comprehension, named CEGI.
The framework injects two kinds of auxiliary commonsense evidence into reading comprehension to equip the machine with the ability of rational thinking.
Experiments on the CosmosQA dataset demonstrate that the proposed CEGI model outperforms the current state-of-the-art approaches.
- Score: 57.31927095547153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans tackle reading comprehension not only based on the given context
itself but often by relying on commonsense beyond it. To empower the machine with
commonsense reasoning, in this paper we propose a Commonsense Evidence
Generation and Injection framework for reading comprehension, named CEGI. The
framework injects two kinds of auxiliary commonsense evidence into reading
comprehension to equip the machine with the ability of rational thinking.
Specifically, we build two evidence generators: the first generates textual
evidence via a language model; the second extracts factual evidence
(automatically aligned text-triples) from a commonsense knowledge graph after
graph completion. This evidence incorporates contextual commonsense and serves
as additional input to the model. Thereafter, we propose a deep contextual
encoder to extract semantic relationships among the paragraph, question,
option, and evidence. Finally, we employ a capsule network to extract different
linguistic units (words and phrases) from the relations and dynamically predict
the optimal option based on the extracted units. Experiments on the CosmosQA
dataset demonstrate that the proposed CEGI model outperforms current
state-of-the-art approaches and achieves 83.6% accuracy on the leaderboard.
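The abstract describes feeding the paragraph, question, option, and generated evidence jointly to a deep contextual encoder. As a minimal, hypothetical sketch of that injection step (the function name, input template, and special tokens are assumptions, not the paper's actual code), the evidence can simply be appended as an extra segment of the encoder input:

```python
# Hypothetical sketch of CEGI-style evidence injection: concatenate the
# paragraph, question, option, and commonsense evidence into a single
# encoder input sequence. The [CLS]/[SEP] template is an assumption.

def build_encoder_input(paragraph, question, option, evidences,
                        cls="[CLS]", sep="[SEP]"):
    """Join the four text fields into one encoder input string."""
    # Textual evidence (from a language model) and factual evidence
    # (text-triples from a knowledge graph) are merged into one segment.
    evidence_text = " ".join(evidences)
    return f"{cls} {paragraph} {sep} {question} {option} {sep} {evidence_text} {sep}"

example = build_encoder_input(
    "Tom forgot his umbrella and got soaked on the way home.",
    "Why was Tom wet?",
    "Because it rained and he had no umbrella.",
    ["an umbrella is used for staying dry", "rain causes people to get wet"],
)
print(example)
```

In practice the concatenated string would be tokenized and encoded by a pre-trained language model; the sketch only illustrates how evidence becomes an additional input segment alongside the original question-answering fields.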
Related papers
- Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE)
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z) - Lexically-constrained Text Generation through Commonsense Knowledge
Extraction and Injection [62.071938098215085]
We focus on the Commongen benchmark, wherein the aim is to generate a plausible sentence for a given set of input concepts.
We propose strategies for enhancing the semantic correctness of the generated text.
arXiv Detail & Related papers (2020-12-19T23:23:40Z) - Entity and Evidence Guided Relation Extraction for DocRED [33.69481141963074]
We propose a joint training framework, E2GRE (Entity and Evidence Guided Relation Extraction), for this task.
We introduce entity-guided sequences as inputs to a pre-trained language model (e.g. BERT, RoBERTa)
These entity-guided sequences help a pre-trained language model (LM) to focus on areas of the document related to the entity.
We evaluate our E2GRE approach on DocRED, a recently released large-scale dataset for relation extraction.
arXiv Detail & Related papers (2020-08-27T17:41:23Z) - Evidence-Aware Inferential Text Generation with Vector Quantised
Variational AutoEncoder [104.25716317141321]
We propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts.
Our approach provides state-of-the-art performance on both Event2Mind and ATOMIC datasets.
arXiv Detail & Related papers (2020-06-15T02:59:52Z) - Leveraging Declarative Knowledge in Text and First-Order Logic for
Fine-Grained Propaganda Detection [139.3415751957195]
We study the detection of propagandistic text fragments in news articles.
We introduce an approach to inject declarative knowledge of fine-grained propaganda techniques.
arXiv Detail & Related papers (2020-04-29T13:46:15Z) - Exploring Explainable Selection to Control Abstractive Summarization [51.74889133688111]
We develop a novel framework that focuses on explainability.
A novel pair-wise matrix captures the sentence interactions, centrality, and attribute scores.
A sentence-deployed attention mechanism in the abstractor ensures the final summary emphasizes the desired content.
arXiv Detail & Related papers (2020-04-24T14:39:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.