AutoKG: Efficient Automated Knowledge Graph Generation for Language Models
- URL: http://arxiv.org/abs/2311.14740v1
- Date: Wed, 22 Nov 2023 08:58:25 GMT
- Title: AutoKG: Efficient Automated Knowledge Graph Generation for Language Models
- Authors: Bohan Chen and Andrea L. Bertozzi
- Abstract summary: AutoKG is a lightweight and efficient approach for automated knowledge graph construction.
Preliminary experiments demonstrate that AutoKG offers a more comprehensive and interconnected knowledge retrieval mechanism.
- Score: 9.665916299598338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional methods of linking large language models (LLMs) to knowledge
bases via semantic similarity search often fall short of capturing complex
relational dynamics. To address these limitations, we introduce AutoKG, a
lightweight and efficient approach for automated knowledge graph (KG)
construction. For a given knowledge base consisting of text blocks, AutoKG
first extracts keywords using an LLM and then evaluates the relationship weight
between each pair of keywords using graph Laplace learning. We employ a hybrid
search scheme combining vector similarity and graph-based associations to
enrich LLM responses. Preliminary experiments demonstrate that AutoKG offers a
more comprehensive and interconnected knowledge retrieval mechanism compared to
semantic similarity search, thereby enhancing the capabilities of LLMs in
generating more insightful and relevant outputs.
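The abstract above names two concrete mechanisms: graph Laplace learning to score the relationship weight between keyword pairs, and a hybrid retrieval score that mixes vector similarity with graph-based association. The NumPy sketch below illustrates both under stated assumptions: keyword extraction by the LLM is not shown (the inputs blocks_with_a / blocks_with_b are simply the indices of text blocks mentioning each extracted keyword), and the Gaussian-kernel graph, the labeling scheme, the averaging used to turn propagated scores into an edge weight, and the mixing parameter alpha are illustrative choices rather than the paper's exact recipe.

```python
import numpy as np

def gaussian_weight_matrix(embeddings, sigma=1.0):
    """Dense similarity graph over text-block embeddings (assumed Gaussian kernel)."""
    sq_dists = np.sum((embeddings[:, None, :] - embeddings[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def laplace_learning(W, labeled_idx, labels):
    """Standard graph Laplace learning: solve L_uu u = -L_ul y_l for the unlabeled nodes."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian L = D - W
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    u = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                        -L[np.ix_(unlabeled, labeled_idx)] @ labels)
    scores = np.zeros(n)
    scores[labeled_idx] = labels
    scores[unlabeled] = u
    return scores

def keyword_edge_weight(W, blocks_with_a, blocks_with_b):
    """Assumed aggregation: label keyword A's blocks 1 and an equal number of other
    blocks 0, propagate with Laplace learning, then average the propagated score
    over keyword B's blocks to obtain the A-B relationship weight."""
    n = W.shape[0]
    negatives = np.setdiff1d(np.arange(n), blocks_with_a)[:len(blocks_with_a)]
    labeled = np.concatenate([blocks_with_a, negatives])
    labels = np.concatenate([np.ones(len(blocks_with_a)), np.zeros(len(negatives))])
    scores = laplace_learning(W, labeled, labels)
    return float(scores[blocks_with_b].mean())

def hybrid_scores(query_emb, block_embs, graph_assoc, alpha=0.5):
    """Hybrid retrieval: cosine similarity blended with a per-block graph-association
    score (e.g. derived from edges of keywords found in the query); alpha is assumed."""
    cos = block_embs @ query_emb / (np.linalg.norm(block_embs, axis=1)
                                    * np.linalg.norm(query_emb) + 1e-12)
    return alpha * cos + (1.0 - alpha) * graph_assoc
```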
Related papers
- LLM-assisted Vector Similarity Search [0.0]
This paper explores a hybrid approach combining vector similarity search with Large Language Models (LLMs) to enhance search accuracy and relevance.
Experiments on structured datasets demonstrate that while vector similarity search alone performs well for straightforward queries, the LLM-assisted approach excels in processing complex queries involving constraints, negations, or conceptual requirements; a minimal sketch of this two-stage pattern follows this entry.
arXiv Detail & Related papers (2024-12-25T08:17:37Z)
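A hedged sketch of the two-stage idea summarized in the entry above: vector similarity produces a candidate shortlist, and an LLM then filters the shortlist against the harder parts of the query (constraints, negations, conceptual requirements). The helpers embed and ask_llm are hypothetical callables supplied by the caller, and the prompt wording, top_k, and the yes/no filtering protocol are assumptions rather than the paper's procedure.

```python
from typing import Callable, List
import numpy as np

def llm_assisted_search(query: str,
                        docs: List[str],
                        doc_embs: np.ndarray,
                        embed: Callable[[str], np.ndarray],
                        ask_llm: Callable[[str], str],
                        top_k: int = 10) -> List[str]:
    # Stage 1: plain vector similarity search over precomputed document embeddings.
    q = embed(query)
    sims = doc_embs @ q / (np.linalg.norm(doc_embs, axis=1) * np.linalg.norm(q) + 1e-12)
    candidates = [docs[i] for i in np.argsort(-sims)[:top_k]]

    # Stage 2: LLM filtering of the shortlist against the full query semantics.
    kept = []
    for doc in candidates:
        prompt = (f"Query: {query}\nDocument: {doc}\n"
                  "Does the document satisfy every constraint in the query, "
                  "including negations? Answer yes or no.")
        if ask_llm(prompt).strip().lower().startswith("yes"):
            kept.append(doc)
    return kept or candidates   # fall back to the vector shortlist if everything is rejected
```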
- Harnessing Large Language Models for Knowledge Graph Question Answering via Adaptive Multi-Aspect Retrieval-Augmentation [81.18701211912779]
We introduce Amar, an Adaptive Multi-Aspect Retrieval-augmentation framework over KGs.
This method retrieves knowledge including entities, relations, and subgraphs, and converts each piece of retrieved text into prompt embeddings.
Our method has achieved state-of-the-art performance on two common datasets.
arXiv Detail & Related papers (2024-12-24T16:38:04Z)
- Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs [72.89652710634051]
Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge.
We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs.
arXiv Detail & Related papers (2024-07-31T06:01:24Z)
- Leveraging Large Language Models for Semantic Query Processing in a Scholarly Knowledge Graph [1.7418328181959968]
The proposed research aims to develop an innovative semantic query processing system.
It enables users to obtain comprehensive information about research works produced by Computer Science (CS) researchers at the Australian National University.
arXiv Detail & Related papers (2024-05-24T09:19:45Z)
- Relation-aware Ensemble Learning for Knowledge Graph Embedding [68.94900786314666]
We propose to learn an ensemble by leveraging existing methods in a relation-aware manner.
However, exploring these semantics with a relation-aware ensemble leads to a much larger search space than general ensemble methods.
We propose RelEns-DSC, a divide-search-combine algorithm that searches the relation-wise ensemble weights independently; an illustrative sketch of this per-relation weight search follows this entry.
arXiv Detail & Related papers (2023-10-13T07:40:12Z)
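An illustrative sketch of relation-wise ensemble weighting in the spirit of the entry above: for each relation, the mixing weights over the base KG-embedding models are searched independently on held-out queries. The coarse grid search, the MRR objective, and all function names are assumptions; RelEns-DSC's actual search procedure may differ.

```python
import numpy as np
from itertools import product

def mrr(scores: np.ndarray, true_idx: np.ndarray) -> float:
    """Mean reciprocal rank of the correct candidate under the given scores."""
    true_scores = scores[np.arange(len(true_idx)), true_idx]
    ranks = (scores > true_scores[:, None]).sum(axis=1) + 1
    return float(np.mean(1.0 / ranks))

def relation_wise_weights(model_scores: np.ndarray,   # (M, Q, C): M base models, Q queries, C candidates
                          true_idx: np.ndarray,       # (Q,) index of the correct candidate per query
                          relations: np.ndarray,      # (Q,) relation id of each query
                          grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Search ensemble weights independently for each relation on validation queries."""
    M = model_scores.shape[0]
    weights = {}
    for r in np.unique(relations):
        mask = relations == r
        best_w, best_mrr = None, -1.0
        # Coarse exhaustive search over len(grid)**M combinations; fine for a handful of models.
        for w in product(grid, repeat=M):
            if sum(w) == 0:
                continue
            combined = np.tensordot(np.array(w), model_scores[:, mask, :], axes=1)
            score = mrr(combined, true_idx[mask])
            if score > best_mrr:
                best_w, best_mrr = np.array(w), score
        weights[r] = best_w / best_w.sum()   # normalized per-relation mixing weights
    return weights
```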
- Comparative Analysis of Contextual Relation Extraction based on Deep Learning Models [0.0]
An efficient and accurate contextual relation extraction (CRE) system is essential for creating domain knowledge in the biomedical industry.
Deep learning techniques have been used to identify the appropriate semantic relation based on the context from multiple sentences.
This paper explores the analysis of various deep learning models that are used for relation extraction.
arXiv Detail & Related papers (2023-09-13T09:05:09Z)
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach for harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches in the vast entity pair space to extract diverse accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)