Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models
- URL: http://arxiv.org/abs/2106.01623v1
- Date: Thu, 3 Jun 2021 06:48:00 GMT
- Title: Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models
- Authors: Junyi Li, Tianyi Tang, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan and Ji-Rong Wen
- Abstract summary: This paper studies how to automatically generate a natural language text that describes the facts in a knowledge graph (KG).
Considering the few-shot setting, we leverage the excellent capacities of pretrained language models (PLMs) in language understanding and generation.
- Score: 42.38563175680914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies how to automatically generate a natural language text
that describes the facts in a knowledge graph (KG). Considering the few-shot
setting, we leverage the excellent capacities of pretrained language models
(PLMs) in language understanding and generation. We make three major technical
contributions, namely representation alignment for bridging the semantic gap
between KG encodings and PLMs, relation-biased KG linearization for deriving
better input representations, and multi-task learning for learning the
correspondence between KG and text. Extensive experiments on three benchmark
datasets have demonstrated the effectiveness of our model on the KG-to-text
generation task. In particular, our model outperforms all comparison methods in
both the fully-supervised and few-shot settings. Our code and datasets are
available at https://github.com/RUCAIBox/Few-Shot-KG2Text.
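As a rough illustration of the KG linearization idea mentioned in the abstract, the following Python sketch flattens a small set of triples into a single text sequence that a PLM could consume. The marker tokens ([REL], [HEAD], [TAIL]) and the relation-first ordering are assumptions for illustration only, not the exact relation-biased scheme proposed in the paper.

```python
# Minimal sketch of KG linearization for PLM input. The marker tokens and
# relation-first ordering below are illustrative assumptions, not the exact
# relation-biased linearization described in the paper.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

def linearize_kg(triples: List[Triple]) -> str:
    """Flatten a KG subgraph into a single text sequence."""
    parts = []
    for head, relation, tail in triples:
        # Putting the relation first is one simple way to bias the
        # sequence toward relation information.
        parts.append(f"[REL] {relation} [HEAD] {head} [TAIL] {tail}")
    return " ".join(parts)

if __name__ == "__main__":
    subgraph = [
        ("Alan Turing", "field", "computer science"),
        ("Alan Turing", "birth place", "London"),
    ]
    print(linearize_kg(subgraph))
    # Output: "[REL] field [HEAD] Alan Turing [TAIL] computer science [REL] birth place [HEAD] Alan Turing [TAIL] London"
```

The resulting string could then be tokenized and fed to an encoder-decoder PLM that is trained to generate the describing text.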
Related papers
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- FedMKGC: Privacy-Preserving Federated Multilingual Knowledge Graph Completion [21.4302940596294]
Knowledge graph completion (KGC) aims to predict missing facts in knowledge graphs (KGs).
Previous methods that rely on transferring raw data among KGs raise privacy concerns.
We propose a new federated learning framework that implicitly aggregates knowledge from multiple KGs without demanding raw data exchange and entity alignment.
arXiv Detail & Related papers (2023-12-17T08:09:27Z)
- Using Large Language Models for Zero-Shot Natural Language Generation from Knowledge Graphs [4.56877715768796]
We show that ChatGPT achieves near state-of-the-art performance on some measures of the WebNLG 2020 challenge.
We also show that there is a significant connection between what the LLM already knows about the data it is parsing and the quality of the output text.
arXiv Detail & Related papers (2023-07-14T12:45:03Z)
- Deep Bidirectional Language-Knowledge Graph Pretraining [159.9645181522436]
DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
arXiv Detail & Related papers (2022-10-17T18:02:52Z)
- From Discrimination to Generation: Knowledge Graph Completion with Generative Transformer [41.69537736842654]
We provide an approach, GenKGC, which converts knowledge graph completion into a sequence-to-sequence generation task with a pre-trained language model (see the illustrative sketch after this list).
We introduce relation-guided demonstration and entity-aware hierarchical decoding for better representation learning and fast inference.
We also release a new large-scale Chinese knowledge graph dataset, AliopenKG500, for research purposes.
arXiv Detail & Related papers (2022-02-04T12:52:32Z)
- KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs [26.557447199727758]
We propose a novel knowledge-aware language model framework based on the fine-tuning process.
Our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT.
arXiv Detail & Related papers (2021-09-09T12:39:17Z)
- Language Models are Open Knowledge Graphs [75.48081086368606]
Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training.
In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs.
We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora.
arXiv Detail & Related papers (2020-10-22T18:01:56Z)
- KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation [100.79870384880333]
We propose a knowledge-grounded pre-training (KGPT) approach to generate knowledge-enriched text.
We adopt three settings, namely fully-supervised, zero-shot, and few-shot, to evaluate its effectiveness.
Under the zero-shot setting, our model achieves over 30 ROUGE-L on WebNLG while all other baselines fail.
arXiv Detail & Related papers (2020-10-05T19:59:05Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
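As referenced in the GenKGC entry above, the sketch below illustrates, under stated assumptions, how knowledge graph completion can be cast as sequence-to-sequence generation with a pretrained encoder-decoder model. The prompt format, the t5-small checkpoint, and the absence of fine-tuning are illustrative choices, not GenKGC's actual setup, which additionally uses relation-guided demonstrations and entity-aware hierarchical decoding.

```python
# Hypothetical sketch: frame KG completion (predicting a missing tail entity)
# as seq2seq generation. The prompt format and the t5-small checkpoint are
# illustrative; a real system would be fine-tuned on pairs of verbalized
# incomplete triples and their tail entities before its outputs are meaningful.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # any encoder-decoder PLM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Verbalize an incomplete triple (head, relation, ?) as input text.
head, relation = "Alan Turing", "field of work"
prompt = f"predict tail: {head} | {relation} | ?"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, such a model is first fine-tuned on verbalized training triples so that generating the missing entity becomes a learned task rather than free-form text continuation.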
This list is automatically generated from the titles and abstracts of the papers on this site.