Deep Bidirectional Language-Knowledge Graph Pretraining
- URL: http://arxiv.org/abs/2210.09338v2
- Date: Wed, 19 Oct 2022 01:56:31 GMT
- Title: Deep Bidirectional Language-Knowledge Graph Pretraining
- Authors: Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang,
Christopher D Manning, Percy Liang, Jure Leskovec
- Abstract summary: DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
- Score: 159.9645181522436
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pretraining a language model (LM) on text has been shown to help various
downstream NLP tasks. Recent works show that a knowledge graph (KG) can
complement text data, offering structured background knowledge that provides a
useful scaffold for reasoning. However, these works are not pretrained to learn
a deep fusion of the two modalities at scale, limiting the potential to acquire
fully joint representations of text and KG. Here we propose DRAGON (Deep
Bidirectional Language-Knowledge Graph Pretraining), a self-supervised approach
to pretraining a deeply joint language-knowledge foundation model from text and
KG at scale. Specifically, our model takes pairs of text segments and relevant
KG subgraphs as input and bidirectionally fuses information from both
modalities. We pretrain this model by unifying two self-supervised reasoning
tasks, masked language modeling and KG link prediction. DRAGON outperforms
existing LM and LM+KG models on diverse downstream tasks including question
answering across general and biomedical domains, with +5% absolute gain on
average. In particular, DRAGON achieves notable performance on complex
reasoning about language and knowledge (+10% on questions involving long
contexts or multi-step reasoning) and low-resource QA (+8% on OBQA and
RiddleSense), and new state-of-the-art results on various BioNLP tasks. Our
code and trained models are available at
https://github.com/michiyasunaga/dragon.
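The abstract describes a unified objective that combines masked language modeling over the text segment with link prediction over the paired KG subgraph. Below is a minimal illustrative sketch of such a joint loss in PyTorch; the module names, the DistMult-style triple scoring, and the equal weighting of the two terms are assumptions made for illustration, not the released DRAGON implementation (see the GitHub repository above for the actual code).

```python
# Minimal sketch (not the official DRAGON code): unify masked language modeling
# (MLM) on the text segment with KG link prediction on the paired subgraph.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointPretrainingLoss(nn.Module):
    def __init__(self, vocab_size: int, hidden_dim: int, num_relations: int):
        super().__init__()
        self.mlm_head = nn.Linear(hidden_dim, vocab_size)       # predicts masked tokens
        self.rel_emb = nn.Embedding(num_relations, hidden_dim)  # relation embeddings for triple scoring

    def forward(self, token_states, mlm_labels, head_states, tail_states, rel_ids, link_labels):
        # MLM term: cross-entropy over masked positions (label -100 marks unmasked tokens and is ignored).
        mlm_logits = self.mlm_head(token_states)                 # (batch, seq_len, vocab)
        mlm_loss = F.cross_entropy(
            mlm_logits.view(-1, mlm_logits.size(-1)),
            mlm_labels.view(-1),
            ignore_index=-100,
        )
        # Link-prediction term: score (head, relation, tail) triples built from the
        # fused node states with a DistMult-style bilinear product (an assumption here),
        # trained against positive/negative edge labels.
        rel = self.rel_emb(rel_ids)                                # (num_triples, hidden)
        triple_scores = (head_states * rel * tail_states).sum(-1)  # (num_triples,)
        link_loss = F.binary_cross_entropy_with_logits(triple_scores, link_labels.float())
        # Equal weighting of the two self-supervised terms (also an assumption).
        return mlm_loss + link_loss
```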
Related papers
- Can LLMs be Good Graph Judger for Knowledge Graph Construction? [33.958327252291]
In this paper, we propose GraphJudger, a knowledge graph construction framework to address the aforementioned challenges.
We introduce three innovative modules in our method: entity-centric iterative text denoising, knowledge-aware instruction tuning, and graph judgement.
Experiments conducted on two general text-graph pair datasets and one domain-specific text-graph pair dataset show superior performance compared to baseline methods.
arXiv Detail & Related papers (2024-11-26T12:46:57Z)
- Retrieval-Augmented Language Model for Extreme Multi-Label Knowledge Graph Link Prediction [2.6749568255705656]
Extrapolation in large language models (LLMs) for open-ended inquiry encounters two pivotal issues.
Existing works attempt to tackle the problem by augmenting the input of a smaller language model with information from a knowledge graph.
We propose a new task, the extreme multi-label KG link prediction task, to enable a model to perform extrapolation with multiple responses.
arXiv Detail & Related papers (2024-05-21T10:10:56Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism that imitates a GNN to perform structured reasoning (a rough illustrative sketch of such an attention mask appears after this list).
We also adopt an adaptation tuning strategy that adapts the model parameters with 20,000 subgraphs paired with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- Using Large Language Models for Zero-Shot Natural Language Generation from Knowledge Graphs [4.56877715768796]
We show that ChatGPT achieves near state-of-the-art performance on some measures of the WebNLG 2020 challenge.
We also show that there is a significant connection between what the LLM already knows about the data it is parsing and the quality of the output text.
arXiv Detail & Related papers (2023-07-14T12:45:03Z)
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is our use of explanations as features, which can be used to boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity-pair space to extract diverse, accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs [26.557447199727758]
We propose a novel knowledge-aware language model framework based on the fine-tuning process.
Our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT.
arXiv Detail & Related papers (2021-09-09T12:39:17Z)
- Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models [42.38563175680914]
This paper studies how to automatically generate natural language text that describes the facts in a knowledge graph (KG).
Considering the few-shot setting, we leverage the excellent capacities of pretrained language models (PLMs) in language understanding and generation.
arXiv Detail & Related papers (2021-06-03T06:48:00Z)
- Language Models are Open Knowledge Graphs [75.48081086368606]
Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training.
In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs.
We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora.
arXiv Detail & Related papers (2020-10-22T18:01:56Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge can be more than is needed, since the output description may cover only the most significant facts.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
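For the ReasoningLM entry above, the following is a rough, hypothetical sketch of what a subgraph-aware self-attention mask could look like: node positions attend only along edges of the retrieved subgraph (imitating GNN message passing inside the Transformer), while text positions attend freely. The sequence layout ([text tokens | node tokens]) and the function name are assumptions for illustration, not the ReasoningLM implementation.

```python
# Hypothetical sketch: build a boolean attention mask so that KG-node positions
# only attend to their subgraph neighbours, while text positions attend freely.
# Not the ReasoningLM code; the layout and names are assumed for illustration.
import torch


def subgraph_attention_mask(num_text_tokens: int, num_nodes: int,
                            edges: list[tuple[int, int]]) -> torch.Tensor:
    """Return an (L, L) boolean mask where True means 'may attend'.

    The sequence is assumed to be [text tokens | node tokens]; `edges` lists
    node-index pairs (i, j) from the retrieved subgraph.
    """
    L = num_text_tokens + num_nodes
    mask = torch.zeros(L, L, dtype=torch.bool)
    mask[:num_text_tokens, :] = True              # text tokens attend everywhere
    mask[:, :num_text_tokens] = True              # every position may attend to text
    node_off = num_text_tokens
    idx = torch.arange(num_nodes) + node_off
    mask[idx, idx] = True                         # each node attends to itself
    for i, j in edges:                            # nodes attend along subgraph edges
        mask[node_off + i, node_off + j] = True
        mask[node_off + j, node_off + i] = True
    return mask
```

For example, `subgraph_attention_mask(5, 3, [(0, 1), (1, 2)])` returns an 8x8 mask that could be passed as the boolean `attn_mask` of `torch.nn.functional.scaled_dot_product_attention` (where True marks allowed attention).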