LambdaKG: A Library for Pre-trained Language Model-Based Knowledge Graph Embeddings
- URL: http://arxiv.org/abs/2210.00305v3
- Date: Thu, 14 Sep 2023 07:06:03 GMT
- Title: LambdaKG: A Library for Pre-trained Language Model-Based Knowledge Graph Embeddings
- Authors: Xin Xie, Zhoubo Li, Xiaohan Wang, Zekun Xi, Ningyu Zhang
- Abstract summary: We present LambdaKG, a library for knowledge graph completion, question answering, recommendation, and knowledge probing.
LambdaKG is publicly open-sourced at https://github.com/zjunlp/PromptKG/tree/main/lambdaKG, with a demo video at http://deepke.zjukg.cn/lambdakg.mp4.
- Score: 32.371086902570205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graphs (KGs) often have two characteristics: heterogeneous graph
structure and text-rich entity/relation information. Text-based KG embeddings
can represent entities by encoding descriptions with pre-trained language
models, but at present no open-source library is specifically designed for KGs
with PLMs. In this paper, we present LambdaKG, a library for KGE that is
equipped with many pre-trained language models (e.g., BERT, BART, T5, GPT-3) and
supports various tasks (e.g., knowledge graph completion, question answering,
recommendation, and knowledge probing). LambdaKG is publicly open-sourced at
https://github.com/zjunlp/PromptKG/tree/main/lambdaKG, with a demo video at
http://deepke.zjukg.cn/lambdakg.mp4 and long-term maintenance.
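To make the abstract's core idea concrete: text-based KG embedding represents each entity or relation by encoding its textual description with a PLM, and scores candidate triples by comparing the resulting vectors. Below is a minimal sketch of that technique using Hugging Face `transformers` with BERT; the function names (`embed`, `score_triple`) and the mean-pooling/cosine-similarity choices are illustrative assumptions, not LambdaKG's actual API.

```python
# Minimal sketch of text-based KG triple scoring with a PLM.
# Assumptions: `torch` and `transformers` are installed; mean pooling and
# cosine similarity are illustrative choices, not LambdaKG's implementation.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Encode a textual description into one vector via mean pooling."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # (768,)

def score_triple(head_desc: str, rel_desc: str, tail_desc: str) -> float:
    """Score (h, r, t) by similarity between the 'h, r' text and the tail text."""
    query = embed(head_desc + " " + rel_desc)
    tail = embed(tail_desc)
    return torch.nn.functional.cosine_similarity(query, tail, dim=0).item()

# A plausible tail description should score higher than an implausible one.
print(score_triple("Berlin is the capital city of Germany.", "located in",
                   "Germany is a country in central Europe."))
```

For link prediction, such a scorer would be applied to all candidate tails and the results ranked; in practice the encoder is fine-tuned so that true triples score above corrupted ones.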
Related papers
- A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning [17.676185326247946]
We propose a prompt-based KG foundation model via in-context learning, namely KG-ICL, to achieve a universal reasoning ability.
To encode prompt graphs so that they generalize to unseen entities and relations in queries, we first propose a unified tokenizer.
Then, we propose two message passing neural networks to perform prompt encoding and KG reasoning, respectively.
arXiv Detail & Related papers (2024-10-16T06:47:18Z)
- Less is More: Making Smaller Language Models Competent Subgraph Retrievers for Multi-hop KGQA [51.3033125256716]
We model the subgraph retrieval task as a conditional generation task handled by small language models.
Our base generative subgraph retrieval model, consisting of only 220M parameters, achieves retrieval performance competitive with state-of-the-art models.
Our largest 3B model, when plugged with an LLM reader, sets new SOTA end-to-end performance on both the WebQSP and CWQ benchmarks.
arXiv Detail & Related papers (2024-10-08T15:22:36Z)
- Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs [72.89652710634051]
Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge.
We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs.
arXiv Detail & Related papers (2024-07-31T06:01:24Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KGQA (IKGQA).
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- PyGraft: Configurable Generation of Synthetic Schemas and Knowledge Graphs at Your Fingertips [3.5923669681271257]
PyGraft is a Python-based tool that generates customized, domain-agnostic schemas and KGs.
We aim to enable the generation of a more diverse array of KGs for benchmarking novel approaches in areas such as graph-based machine learning (ML).
In ML, this should foster a more holistic evaluation of model performance and generalization capability, thereby going beyond the limited collection of available benchmarks.
arXiv Detail & Related papers (2023-09-07T13:00:09Z)
- Deep Bidirectional Language-Knowledge Graph Pretraining [159.9645181522436]
DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
arXiv Detail & Related papers (2022-10-17T18:02:52Z)
- NeuralKG: An Open Source Library for Diverse Representation Learning of Knowledge Graphs [28.21229825389071]
NeuralKG is an open-source library for diverse representation learning of knowledge graphs.
It implements three different series of Knowledge Graph Embedding (KGE) methods, including conventional KGEs (e.g., TransE; a minimal scoring sketch follows this list), GNN-based KGEs, and rule-based KGEs.
arXiv Detail & Related papers (2022-02-25T09:13:13Z)
- Language Models are Open Knowledge Graphs [75.48081086368606]
Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training.
In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs.
We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora.
arXiv Detail & Related papers (2020-10-22T18:01:56Z)
- Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
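As a structural counterpoint to the PLM-based scoring sketch above, the "conventional KGEs" mentioned in the NeuralKG entry can be illustrated with the standard TransE formulation, which scores a triple as -||h + r - t||. This is a toy sketch with arbitrary vocabulary sizes and dimensions, not NeuralKG's actual code.

```python
# Toy TransE scorer (conventional, structure-based KGE).
# Assumptions: entity/relation counts and dimension are arbitrary; a real
# library would train these embeddings, e.g. with a margin ranking loss.
import torch

num_entities, num_relations, dim = 1000, 50, 100
entity_emb = torch.nn.Embedding(num_entities, dim)
relation_emb = torch.nn.Embedding(num_relations, dim)

def transe_score(h: int, r: int, t: int) -> torch.Tensor:
    """TransE plausibility: -||h + r - t||_2 (higher = more plausible)."""
    h_vec = entity_emb(torch.tensor(h))
    r_vec = relation_emb(torch.tensor(r))
    t_vec = entity_emb(torch.tensor(t))
    return -torch.norm(h_vec + r_vec - t_vec, p=2)

# Untrained embeddings give arbitrary scores; training pushes observed
# triples above corrupted ones, i.e. score(h, r, t) > score(h, r, t').
print(transe_score(0, 0, 1).item())
```

Unlike the text-based approach, TransE ignores entity descriptions entirely and learns purely from graph structure, which is the tooling gap that a PLM-based library like LambdaKG targets.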
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.