Exploring In-Context Learning Capabilities of Foundation Models for
Generating Knowledge Graphs from Text
- URL: http://arxiv.org/abs/2305.08804v1
- Date: Mon, 15 May 2023 17:10:19 GMT
- Title: Exploring In-Context Learning Capabilities of Foundation Models for
Generating Knowledge Graphs from Text
- Authors: Hanieh Khorashadizadeh, Nandana Mihindukulasooriya, Sanju Tiwari,
Jinghua Groppe and Sven Groppe
- Abstract summary: This paper aims to improve the state of the art of automatic construction and completion of knowledge graphs from text.
In this context, one emerging paradigm is in-context learning, where a language model is used as-is with a prompt that provides instructions and examples.
- Score: 3.114960935006655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs can represent information about the real world
using entities and their relations in a structured and semantically rich
manner, and they enable a variety of downstream applications such as
question-answering, recommendation systems, semantic search, and advanced
analytics. However, building a knowledge graph currently involves a lot of
manual effort, which hinders adoption in some situations; automating this
process would especially benefit small organizations. Automatically generating
structured knowledge graphs from large volumes of natural language text is
still a challenging task, and research on sub-tasks such as named entity
extraction, relation extraction, entity and relation linking, and knowledge
graph construction aims to improve the state of the art of automatic
construction and completion of knowledge graphs from text. Foundation models
with billions of parameters, trained in a self-supervised manner on large
volumes of data and adaptable to a variety of downstream tasks, have recently
demonstrated high performance on a wide range of Natural Language Processing
(NLP) tasks. In this context, one emerging paradigm is in-context learning, in
which a language model is used as-is: a prompt provides instructions and a few
examples for a task, and the model's parameters are never updated through
traditional approaches such as fine-tuning. This way, no computing resources
are needed for re-training or fine-tuning the models, and the engineering
effort is minimal. It would therefore be beneficial to utilize such
capabilities for generating knowledge graphs from text.
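To make the in-context learning setup concrete, here is a minimal sketch of few-shot triple extraction with a frozen language model. It is an illustration, not the paper's actual prompt: the instruction wording, the example sentences and triples, and the helper names (build_prompt, extract_triples) are assumptions, and the model call is replaced by a hard-coded completion so the sketch runs without a model endpoint.

```python
# Minimal sketch of few-shot in-context learning for triple extraction.
# The model's weights stay frozen; "adaptation" consists entirely of an
# instruction plus a few worked examples placed in the prompt.
# All example texts/triples below are illustrative assumptions.

FEW_SHOT_PROMPT = """\
Extract knowledge-graph triples as (subject, relation, object).

Text: Marie Curie was born in Warsaw.
Triples: (Marie Curie, birthPlace, Warsaw)

Text: Berlin is the capital of Germany.
Triples: (Berlin, capitalOf, Germany)

Text: {text}
Triples:"""


def build_prompt(text: str) -> str:
    """Fill the frozen few-shot template with a new input sentence."""
    return FEW_SHOT_PROMPT.format(text=text)


def extract_triples(completion: str) -> list[tuple[str, str, str]]:
    """Parse '(subject, relation, object)' lines from a model completion."""
    triples = []
    for line in completion.splitlines():
        parts = [p.strip() for p in line.strip().strip("()").split(",")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples


if __name__ == "__main__":
    prompt = build_prompt("Ada Lovelace worked with Charles Babbage.")
    print(prompt)
    # The completion would come from any foundation model endpoint; it is
    # hard-coded here so the sketch runs without network access.
    completion = "(Ada Lovelace, collaboratedWith, Charles Babbage)"
    print(extract_triples(completion))
```

The point of the sketch is that adapting the model to a new extraction task means editing the prompt text, not updating any weights.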
Related papers
- Generative retrieval-augmented ontologic graph and multi-agent strategies for interpretive large language model-based materials design [0.0]
Transformer neural networks show promising capabilities, in particular for use in materials analysis, design, and manufacturing.
Here we explore the use of large language models (LLMs) as a tool that can support engineering analysis of materials.
arXiv Detail & Related papers (2023-10-30T20:31:50Z)
- SINC: Self-Supervised In-Context Learning for Vision-Language Tasks [64.44336003123102]
We propose a framework to enable in-context learning in large language models.
A meta-model can learn from self-supervised prompts consisting of tailored demonstrations.
Experiments show that SINC outperforms gradient-based methods in various vision-language tasks.
arXiv Detail & Related papers (2023-07-15T08:33:08Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction [104.29108668347727]
This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models.
The approach is implemented as a pipeline comprising novel iterative zero-shot and external-knowledge-agnostic strategies.
We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.
arXiv Detail & Related papers (2023-07-03T16:01:45Z)
- Enhancing Knowledge Graph Construction Using Large Language Models [0.0]
This paper analyzes how current advances in foundation LLMs, such as ChatGPT, compare with specialized pretrained models, such as REBEL, for joint entity and relation extraction.
We created pipelines for the automatic creation of Knowledge Graphs from raw text, and our findings indicate that using advanced LLMs can improve the accuracy of creating these graphs from unstructured text.
arXiv Detail & Related papers (2023-05-08T12:53:06Z)
- KGLM: Integrating Knowledge Graph Structure in Language Models for Link Prediction [0.0]
We introduce a new entity/relation embedding layer that learns to differentiate distinctive entity and relation types.
We show that further pre-training the language models with this additional embedding layer, using triples extracted from the knowledge graph, followed by the standard fine-tuning phase, sets new state-of-the-art performance for the link prediction task on the benchmark datasets.
arXiv Detail & Related papers (2022-11-04T20:38:12Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.