Using Large Language Models for Zero-Shot Natural Language Generation
from Knowledge Graphs
- URL: http://arxiv.org/abs/2307.07312v2
- Date: Tue, 22 Aug 2023 14:10:19 GMT
- Title: Using Large Language Models for Zero-Shot Natural Language Generation
from Knowledge Graphs
- Authors: Agnes Axelsson and Gabriel Skantze
- Abstract summary: We show that ChatGPT achieves near state-of-the-art performance on some measures of the WebNLG 2020 challenge.
We also show that there is a significant connection between what the LLM already knows about the data it is parsing and the quality of the output text.
- Score: 4.56877715768796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In any system that uses structured knowledge graph (KG) data as its
underlying knowledge representation, KG-to-text generation is a useful tool for
turning parts of the graph data into text that can be understood by humans.
Recent work has shown that models that make use of pretraining on large amounts
of text data can perform well on the KG-to-text task even with relatively small
sets of training data on the specific graph-to-text task. In this paper, we
build on this concept by using large language models to perform zero-shot
generation based on nothing but the model's understanding of the triple
structure from what it can read. We show that ChatGPT achieves near
state-of-the-art performance on some measures of the WebNLG 2020 challenge, but
falls behind on others. Additionally, we compare factual, counter-factual and
fictional statements, and show that there is a significant connection between
what the LLM already knows about the data it is parsing and the quality of the
output text.
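The setup the abstract describes, handing an LLM nothing but serialised triples and asking it to verbalise them, can be illustrated with a short sketch. The triple serialisation, prompt wording, and model name below are illustrative assumptions rather than the authors' exact configuration; the sketch assumes the `openai` Python client with an API key in the environment.

```python
# Minimal zero-shot KG-to-text sketch. The prompt wording is an
# illustrative assumption, not the paper's exact prompt.
from openai import OpenAI

def triples_to_prompt(triples):
    """Serialise (subject, predicate, object) triples into a single prompt."""
    facts = "\n".join(f"({s} | {p} | {o})" for s, p, o in triples)
    return (
        "Express the following knowledge-graph triples as fluent English "
        "text, without adding any information:\n" + facts
    )

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
triples = [
    ("Alan_Bean", "occupation", "Test_pilot"),
    ("Alan_Bean", "mission", "Apollo_12"),
]
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # stand-in model name; the paper evaluated ChatGPT
    messages=[{"role": "user", "content": triples_to_prompt(triples)}],
)
print(response.choices[0].message.content)
```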
Related papers
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
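Roughly, the idea is that a GNN summarises a node's neighbourhood into a short "graph prompt" that is prepended to the frozen LLM's token embeddings. The sketch below is a toy PyTorch rendering of that idea only; the mean-pool aggregator, dimensions, and prompt length are assumptions, not the GPEFT architecture.

```python
# Toy "graph prompt" encoder: aggregate a node's neighbours, then project
# the result into a few virtual token embeddings for the LLM.
import torch
import torch.nn as nn

class GraphPromptEncoder(nn.Module):
    def __init__(self, node_dim: int, llm_dim: int, prompt_len: int = 4):
        super().__init__()
        self.prompt_len = prompt_len
        self.llm_dim = llm_dim
        # One message-passing step (mean aggregation is an assumption).
        self.aggregate = nn.Linear(2 * node_dim, node_dim)
        self.to_prompt = nn.Linear(node_dim, prompt_len * llm_dim)

    def forward(self, center: torch.Tensor, neighbors: torch.Tensor):
        # center: (node_dim,), neighbors: (num_neighbors, node_dim)
        pooled = neighbors.mean(dim=0)
        h = torch.relu(self.aggregate(torch.cat([center, pooled])))
        return self.to_prompt(h).view(self.prompt_len, self.llm_dim)

enc = GraphPromptEncoder(node_dim=128, llm_dim=768)
prompt = enc(torch.randn(128), torch.randn(5, 128))
print(prompt.shape)  # torch.Size([4, 768]) -- prepended to token embeddings
```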
arXiv Detail & Related papers (2024-04-28T18:36:59Z) - Narrating Causal Graphs with Large Language Models [1.437446768735628]
This work explores the capability of large pretrained language models to generate text from causal graphs.
The causal reasoning encoded in these graphs can support applications as diverse as healthcare or marketing.
Results suggest users of generative AI can deploy future applications faster since similar performances are obtained when training a model with only a few examples.
arXiv Detail & Related papers (2024-03-11T19:19:59Z) - Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z) - Generating Faithful Text From a Knowledge Graph with Noisy Reference
Text [26.6775578332187]
We develop a KG-to-text generation model that can generate faithful natural-language text from a given graph.
Our framework incorporates two core ideas: Firstly, we utilize contrastive learning to enhance the model's ability to differentiate between faithful and hallucinated information in the text.
Secondly, we empower the decoder to control the level of hallucination in the generated text by employing a controllable text generation technique.
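The first idea can be sketched as a standard contrastive objective: score a graph encoding against a faithful verbalisation (positive) and a hallucinated one (negative). The encodings and the InfoNCE-style loss below are assumptions for illustration, not the paper's exact formulation.

```python
# Toy contrastive loss: pull the graph representation toward the faithful
# text and push it away from the hallucinated text.
import torch
import torch.nn.functional as F

def contrastive_loss(graph_emb, faithful_emb, hallucinated_emb, tau=0.1):
    # Temperature-scaled cosine similarities between graph and candidates.
    pos = F.cosine_similarity(graph_emb, faithful_emb, dim=-1) / tau
    neg = F.cosine_similarity(graph_emb, hallucinated_emb, dim=-1) / tau
    # Cross-entropy over [positive, negative]: maximise the positive slot.
    logits = torch.stack([pos, neg], dim=-1)
    target = torch.zeros(logits.shape[:-1], dtype=torch.long)
    return F.cross_entropy(logits, target)

g = torch.randn(8, 256)      # batch of graph encodings
t_pos = torch.randn(8, 256)  # faithful reference texts
t_neg = torch.randn(8, 256)  # perturbed / hallucinated texts
print(contrastive_loss(g, t_pos, t_neg).item())
```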
arXiv Detail & Related papers (2023-08-12T07:12:45Z) - Deep Bidirectional Language-Knowledge Graph Pretraining [159.9645181522436]
DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
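A minimal way to picture this bidirectional fusion is one cross-attention block in each direction, text-to-KG and KG-to-text. The sketch below is only that picture in PyTorch; the dimensions, head count, and single fusion step are assumptions, not DRAGON's actual architecture.

```python
# Bidirectional text<->KG fusion sketch: each modality attends to the other.
import torch
import torch.nn as nn

text = torch.randn(1, 12, 256)  # (batch, text tokens, dim)
kg = torch.randn(1, 6, 256)     # (batch, KG subgraph nodes, dim)

t2k = nn.MultiheadAttention(256, num_heads=4, batch_first=True)
k2t = nn.MultiheadAttention(256, num_heads=4, batch_first=True)

# Text attends to the KG subgraph, and the KG attends back to the text,
# so information flows in both directions before further layers.
text_fused, _ = t2k(query=text, key=kg, value=kg)
kg_fused, _ = k2t(query=kg, key=text, value=text)
print(text_fused.shape, kg_fused.shape)
```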
arXiv Detail & Related papers (2022-10-17T18:02:52Z) - GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text
Generation [3.593955557310285]
Recent improvements in KG-to-text generation are due to auxiliary pre-training tasks designed to boost performance on the downstream fine-tuning task.
Here, we demonstrate that by fusing graph-aware elements into existing pre-trained language models, we are able to outperform state-of-the-art models and close the gap imposed by additional pre-training tasks.
arXiv Detail & Related papers (2022-04-13T23:53:37Z) - EventNarrative: A large-scale Event-centric Dataset for Knowledge
Graph-to-Text Generation [8.216976747904726]
EventNarrative consists of approximately 230,000 graphs and their corresponding natural language text, making it six times larger than the current largest parallel dataset.
Our aim is two-fold: to help break new ground in event-centric research where data is lacking, and to give researchers a well-defined, large-scale dataset.
arXiv Detail & Related papers (2021-10-30T15:39:20Z) - Few-shot Knowledge Graph-to-Text Generation with Pretrained Language
Models [42.38563175680914]
This paper studies how to automatically generate natural language text that describes the facts in a knowledge graph (KG).
Considering the few-shot setting, we leverage the excellent capacities of pretrained language models (PLMs) in language understanding and generation.
arXiv Detail & Related papers (2021-06-03T06:48:00Z) - KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation [100.79870384880333]
We propose a knowledge-grounded pre-training (KGPT) to generate knowledge-enriched text.
We adopt three settings, namely fully-supervised, zero-shot, and few-shot, to evaluate its effectiveness.
Under the zero-shot setting, our model achieves over 30 ROUGE-L on WebNLG while all other baselines fail.
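For context, a ROUGE-L figure like the one quoted measures longest-common-subsequence overlap between generated and reference text, usually reported scaled by 100. A minimal computation with Google's `rouge-score` package (the example sentences are made up):

```python
# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
reference = "Alan Bean, a test pilot, flew on the Apollo 12 mission."
generated = "Test pilot Alan Bean was a crew member of Apollo 12."
scores = scorer.score(reference, generated)
print(scores["rougeL"].fmeasure)  # F-measure in [0, 1]; papers report x100
```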
arXiv Detail & Related papers (2020-10-05T19:59:05Z) - ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge is often more than sufficient, since the output description may need to cover only the most significant facts.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
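The entity masking scheme can be pictured simply: rather than masking random subword tokens, mask every token inside a KG-linked entity span, as in the toy sketch below. The helper and the hard-coded spans are illustrative assumptions; in practice, spans come from entity linking against the KG.

```python
# Toy entity-level masking: mask whole entity spans instead of random tokens.
def mask_entities(tokens, entity_spans, mask_token="[MASK]"):
    """Replace every token inside each half-open (start, end) entity span."""
    masked = list(tokens)
    for start, end in entity_spans:
        for i in range(start, end):
            masked[i] = mask_token
    return masked

tokens = ["Alan", "Bean", "flew", "on", "Apollo", "12", "."]
spans = [(0, 2), (4, 6)]  # "Alan Bean", "Apollo 12" -- linked entities
print(mask_entities(tokens, spans))
# ['[MASK]', '[MASK]', 'flew', 'on', '[MASK]', '[MASK]', '.']
```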
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.