Fusing Context Into Knowledge Graph for Commonsense Reasoning
- URL: http://arxiv.org/abs/2012.04808v1
- Date: Wed, 9 Dec 2020 00:57:49 GMT
- Title: Fusing Context Into Knowledge Graph for Commonsense Reasoning
- Authors: Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, Xuedong
Huang
- Abstract summary: We propose to utilize external entity descriptions to provide contextual information for graph entities.
For the CommonsenseQA task, our model first extracts concepts from the question and choice, and then finds a related triple between these concepts.
We achieve state-of-the-art results on the CommonsenseQA dataset with an accuracy of 80.7% (single model) and 83.3% (ensemble model) on the official leaderboard.
- Score: 21.33294077354958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Commonsense reasoning requires a model to make presumptions about world
events via language understanding. Many methods couple pre-trained language
models with knowledge graphs in order to combine the merits in language
modeling and entity-based relational learning. However, although a knowledge
graph contains rich structural information, it lacks the context to provide a
more precise understanding of the concepts and relations. This creates a gap
when fusing knowledge graphs into language modeling, especially in the scenario
of insufficient paired text-knowledge data. In this paper, we propose to
utilize external entity description to provide contextual information for graph
entities. For the CommonsenseQA task, our model first extracts concepts from
the question and choice, and then finds a related triple between these
concepts. Next, it retrieves the descriptions of these concepts from Wiktionary
and feeds them as additional input to a pre-trained language model, together
with the triple. The resulting model attains a much more effective commonsense
reasoning capability, achieving state-of-the-art results on the CommonsenseQA
dataset with an accuracy of 80.7% (single model) and 83.3% (ensemble model) on
the official leaderboard.
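The abstract describes a concrete pipeline: extract a concept from the question and from each choice, retrieve a connecting triple, pull the concepts' Wiktionary definitions, and feed everything to a pre-trained language model that scores each choice. A minimal sketch of how such an input could be assembled is shown below; the ALBERT backbone, the helper functions, and the exact ordering of the extra context are assumptions for illustration, not the paper's released code.
```python
# Minimal sketch of the description-augmented input described above; NOT the
# authors' released implementation. The backbone and helper functions are
# illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("albert-xxlarge-v2")   # backbone choice is an assumption
model = AutoModelForMultipleChoice.from_pretrained("albert-xxlarge-v2")

# Hypothetical placeholders for the paper's concept extraction, ConceptNet
# triple retrieval, and Wiktionary lookup steps.
def extract_concept(text: str) -> str:
    return text.strip(" ?.").split()[-1]              # crude stand-in for entity matching

def find_triple(head: str, tail: str) -> tuple:
    return (head, "RelatedTo", tail)                  # stand-in for a ConceptNet query

def lookup_description(concept: str) -> str:
    return f"{concept}: definition text from Wiktionary"   # stand-in for the dictionary lookup

def score_choices(question: str, choices: list[str]):
    q_concept = extract_concept(question)
    encodings = []
    for choice in choices:
        c_concept = extract_concept(choice)
        triple = find_triple(q_concept, c_concept)
        # Extra context: the triple plus the two concept descriptions.
        context = " ".join([" ".join(triple),
                            lookup_description(q_concept),
                            lookup_description(c_concept)])
        encodings.append(tokenizer(question + " " + choice, context, truncation=True))
    batch = tokenizer.pad(encodings, return_tensors="pt")
    batch = {k: v.unsqueeze(0) for k, v in batch.items()}   # shape (1, num_choices, seq_len)
    return model(**batch).logits.softmax(-1)          # one probability per answer choice
```
For example, `score_choices("Where would you find a revolving door?", ["bank", "garden", "lake"])` returns a distribution over the three choices; in a real system the placeholder helpers would query ConceptNet and Wiktionary instead of returning canned strings.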
Related papers
- Compositional Generalization with Grounded Language Models [9.96679221246835]
Grounded language models use external sources of information, such as knowledge graphs, to meet some of the general challenges associated with pre-training.
We develop a procedure for generating natural language questions paired with knowledge graphs that targets different aspects of compositionality.
arXiv Detail & Related papers (2024-06-07T14:56:51Z)
- Context versus Prior Knowledge in Language Models [49.17879668110546]
Language models often need to integrate prior knowledge learned during pretraining and new information presented in context.
We propose two mutual information-based metrics to measure a model's dependency on a context and on its prior about an entity.
arXiv Detail & Related papers (2024-04-06T13:46:53Z)
- KGLM: Integrating Knowledge Graph Structure in Language Models for Link Prediction [0.0]
We introduce a new entity/relation embedding layer that learns to differentiate distinctive entity and relation types.
We show that further pre-training the language model with this additional embedding layer, using triples extracted from the knowledge graph, followed by the standard fine-tuning phase, sets a new state-of-the-art performance for the link prediction task on the benchmark datasets.
arXiv Detail & Related papers (2022-11-04T20:38:12Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [159.9645181522436]
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
arXiv Detail & Related papers (2022-01-21T19:00:05Z)
- Zero-shot Commonsense Question Answering with Cloze Translation and Consistency Optimization [20.14487209460865]
We investigate four translation methods that can translate natural questions into cloze-style sentences.
We show that our methods are complementary to a knowledge-base-improved model, and that combining them can lead to state-of-the-art zero-shot performance.
arXiv Detail & Related papers (2022-01-01T07:12:49Z)
- JAKET: Joint Pre-training of Knowledge Graph and Language Understanding [73.43768772121985]
We propose a novel joint pre-training framework, JAKET, to model both the knowledge graph and language.
The knowledge module and language module provide essential information to mutually assist each other.
Our design enables the pre-trained model to easily adapt to unseen knowledge graphs in new domains.
arXiv Detail & Related papers (2020-10-02T05:53:36Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)