SKG-LLM: Developing a Mathematical Model for Stroke Knowledge Graph Construction Using Large Language Models
- URL: http://arxiv.org/abs/2503.06475v1
- Date: Sun, 09 Mar 2025 06:25:37 GMT
- Title: SKG-LLM: Developing a Mathematical Model for Stroke Knowledge Graph Construction Using Large Language Models
- Authors: Ali Sarabadani, Kheirolah Rahsepar Fard, Hamid Dalvand
- Abstract summary: A knowledge graph (KG) is constructed from stroke-related articles using mathematical and large language models (LLMs). SKG-LLM extracts and organizes complex relationships from the biomedical literature to increase the accuracy and depth of the KG in stroke research.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The purpose of this study is to introduce SKG-LLM. A knowledge graph (KG) is constructed from stroke-related articles using mathematical and large language models (LLMs). SKG-LLM extracts and organizes complex relationships from the biomedical literature to increase the accuracy and depth of the KG in stroke research. In the proposed method, GPT-4 was used for data pre-processing, and the extraction of embeddings throughout the KG construction process was also done by GPT-4. The performance of the proposed model was tested with two evaluation criteria: precision and recall. GPT-4 was also used for further validation of the proposed model. Compared with Wikidata and WN18RR, the proposed SKG-LLM approach performs better, especially in precision and recall. By including GPT-4 in the preprocessing step, SKG-LLM achieved a precision of 0.906 and a recall of 0.923. Expert reviews further improved the results, raising precision to 0.923 and recall to 0.918. The knowledge graph constructed by SKG-LLM contains 2692 nodes and 5012 edges, spanning 13 distinct node types and 24 edge types.
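The pipeline the abstract describes (extract typed triples, assemble them into a KG, score against a reference with precision and recall) can be sketched in a few lines of Python. The triples, relation names, and gold-standard set below are hypothetical placeholders for illustration, not data from the paper:

```python
from collections import defaultdict

# Hypothetical extracted triples (head, relation, tail); in SKG-LLM these
# would be produced by GPT-4 from stroke-related articles.
extracted = [
    ("Ischemic stroke", "treated_with", "Alteplase"),
    ("Hypertension", "risk_factor_for", "Ischemic stroke"),
    ("Alteplase", "contraindicated_in", "Hemorrhagic stroke"),
]

# Hypothetical gold-standard triples (e.g. from expert review).
gold = {
    ("Ischemic stroke", "treated_with", "Alteplase"),
    ("Hypertension", "risk_factor_for", "Ischemic stroke"),
    ("Atrial fibrillation", "risk_factor_for", "Ischemic stroke"),
}

# Minimal KG: adjacency list keyed by head node, values are (relation, tail).
kg = defaultdict(list)
for head, rel, tail in extracted:
    kg[head].append((rel, tail))

nodes = {h for h, _, _ in extracted} | {t for _, _, t in extracted}
num_edges = len(extracted)

# Triple-level precision and recall, as in the paper's evaluation criteria.
tp = sum(1 for t in extracted if t in gold)
precision = tp / len(extracted)
recall = tp / len(gold)
print(f"nodes={len(nodes)} edges={num_edges}")
print(f"precision={precision:.3f} recall={recall:.3f}")
```

With these toy triples, two of three extracted triples match the gold set, so precision and recall are both 0.667; the paper's reported 0.906/0.923 come from the same style of triple-level comparison at scale.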
Related papers
- Knowledge Graph-based Retrieval-Augmented Generation for Schema Matching [3.7548609506798485]
We propose a Knowledge Graph-based Retrieval-Augmented Generation model for schema matching with large language models (LLMs). In particular, KG-RAG4SM introduces novel vector-based, graph-based, and query-based graph retrievals. We show that KG-RAG4SM outperforms the state-of-the-art (SOTA) methods by 35.89% and 30.50% in terms of precision and F1 score on the MIMIC dataset.
arXiv Detail & Related papers (2025-01-15T09:32:37Z) - Generating Knowledge Graphs from Large Language Models: A Comparative Study of GPT-4, LLaMA 2, and BERT [0.0]
This paper introduces a novel approach leveraging large language models (LLMs) to generate Knowledge Graphs (KGs) for GraphRAGs. We evaluate the models' ability to generate high-quality KGs using metrics such as Precision, Recall, F1-Score, Graph Edit Distance, and Semantic Similarity. Results demonstrate that GPT-4 achieves superior semantic fidelity and structural accuracy, LLaMA 2 excels in lightweight, domain-specific graphs, and BERT provides insights into challenges in entity-relationship modeling.
arXiv Detail & Related papers (2024-12-10T11:05:26Z) - Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency [59.6772484292295]
Knowledge graphs (KGs) generated by large language models (LLMs) are increasingly valuable for Retrieval-Augmented Generation (RAG) applications.
Existing KG extraction methods rely on prompt-based approaches, which are inefficient for processing large-scale corpora.
We propose SynthKG, a multi-step, document-level synthesis KG workflow based on LLMs.
We also design a novel graph-based retrieval framework for RAG.
arXiv Detail & Related papers (2024-10-22T00:47:54Z) - GRIN: GRadient-INformed MoE [132.87651078514122]
Mixture-of-Experts (MoE) models scale more effectively than dense models due to sparse computation through expert routing.
We introduce GRIN (GRadient-INformed MoE training), which incorporates sparse gradient estimation for expert routing.
Our model, with only 6.6B activated parameters, outperforms a 7B dense model and matches the performance of a 14B dense model trained on the same data.
arXiv Detail & Related papers (2024-09-18T17:00:20Z) - KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge [63.19837262782962]
Knowledge Graph Embedding (KGE) techniques are crucial in learning compact representations of entities and relations within a knowledge graph.
This study introduces KG-FIT, which builds a semantically coherent hierarchical structure of entity clusters.
Experiments on the benchmark datasets FB15K-237, YAGO3-10, and PrimeKG demonstrate the superiority of KG-FIT over state-of-the-art pre-trained language model-based methods.
arXiv Detail & Related papers (2024-05-26T03:04:26Z) - Aligning GPTRec with Beyond-Accuracy Goals with Reinforcement Learning [67.71952251641545]
GPTRec is an alternative to the Top-K model for item-by-item recommendations.
Our experiments on two datasets show that GPTRec's Next-K generation approach offers a better tradeoff between accuracy and secondary metrics than classic greedy re-ranking techniques.
arXiv Detail & Related papers (2024-03-07T19:47:48Z) - Image and Data Mining in Reticular Chemistry Using GPT-4V [5.440238820637818]
GPT-4V is a large language model featuring enhanced vision capabilities, accessible through ChatGPT or an API.
This study demonstrates the remarkable ability of GPT-4V to navigate and obtain complex data for metal-organic frameworks.
arXiv Detail & Related papers (2023-12-09T05:05:25Z) - Disconnected Emerging Knowledge Graph Oriented Inductive Link Prediction [0.0]
We propose a novel model named DEKG-ILP (Disconnected Emerging Knowledge Graph Oriented Inductive Link Prediction).
The module CLRM is developed to extract global relation-based semantic features that are shared between original KGs and DEKGs.
The module GSM is proposed to extract the local subgraph topological information around each link in KGs.
arXiv Detail & Related papers (2022-09-03T10:58:24Z) - KGxBoard: Explainable and Interactive Leaderboard for Evaluation of Knowledge Graph Completion Models [76.01814380927507]
KGxBoard is an interactive framework for performing fine-grained evaluation on meaningful subsets of the data.
In our experiments, we use KGxBoard to highlight findings that would have been impossible to detect with standard averaged single-score metrics.
arXiv Detail & Related papers (2022-08-23T15:11:45Z) - DSKReG: Differentiable Sampling on Knowledge Graph for Recommendation with Relational GNN [59.160401038969795]
We propose differentiable sampling on Knowledge Graph for Recommendation with GNN (DSKReG).
We devise a differentiable sampling strategy, which enables the selection of relevant items to be jointly optimized with the model training procedure.
The experimental results demonstrate that our model outperforms state-of-the-art KG-based recommender systems.
arXiv Detail & Related papers (2021-08-26T16:19:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.