Knowledge Graphs as Context Sources for LLM-Based Explanations of
Learning Recommendations
- URL: http://arxiv.org/abs/2403.03008v1
- Date: Tue, 5 Mar 2024 14:41:12 GMT
- Title: Knowledge Graphs as Context Sources for LLM-Based Explanations of
Learning Recommendations
- Authors: Hasan Abu-Rasheed, Christian Weber, Madjid Fathi
- Abstract summary: Large language models (LLMs) and generative AI have recently opened new doors for generating human-like explanations.
This paper proposes an approach to utilize knowledge graphs (KG) as a source of factual context.
We utilize the semantic relations in the knowledge graph to offer curated knowledge about learning recommendations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the era of personalized education, the provision of comprehensible
explanations for learning recommendations is of great value to enhance the
learner's understanding and engagement with the recommended learning content.
Large language models (LLMs) and generative AI in general have recently opened
new doors for generating human-like explanations, for and along learning
recommendations. However, their precision is still far from acceptable in
a sensitive field like education. To harness the abilities of LLMs, while still
ensuring a high level of precision towards the intent of the learners, this
paper proposes an approach to utilize knowledge graphs (KG) as a source of
factual context for LLM prompts, reducing the risk of model hallucinations
and safeguarding against wrong or imprecise information, while maintaining an
application-intended learning context. We utilize the semantic relations in the
knowledge graph to offer curated knowledge about learning recommendations. With
domain experts in the loop, we design the explanation as a textual template,
which is filled and completed by the LLM. Domain experts were integrated into
the prompt engineering phase as part of a study, to ensure that explanations
include information that is relevant to the learner. We evaluate our approach
quantitatively using Rouge-N and Rouge-L measures, as well as qualitatively
with experts and learners. Our results show an enhanced recall and precision of
the generated explanations compared to those generated solely by the GPT model,
with a greatly reduced risk of generating imprecise information in the final
learning explanation.
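The abstract describes two mechanical steps: assembling an LLM prompt from curated KG relations around a domain-expert template, and scoring generated explanations with ROUGE-N. The Python sketch below illustrates both steps; the triple schema, template wording, and function names are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

# Hypothetical sketch: the graph schema, template wording, and function
# names are illustrative assumptions, not the authors' implementation.

def kg_context(triples, entity):
    """Collect KG facts (subject, predicate, object) mentioning the entity."""
    return [f"{s} {p} {o}" for (s, p, o) in triples if entity in (s, o)]

def build_prompt(template, entity, triples):
    """Fill a textual explanation template with curated KG facts."""
    facts = "\n".join(f"- {fact}" for fact in kg_context(triples, entity))
    return template.format(entity=entity, facts=facts)

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: fraction of reference n-grams found in the candidate."""
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / max(sum(ref.values()), 1)

# Toy knowledge graph around a recommended learning item.
triples = [
    ("Linear Algebra", "is_prerequisite_of", "Machine Learning"),
    ("Machine Learning", "covers", "Gradient Descent"),
    ("Machine Learning", "supports_goal", "Data Scientist"),
]
template = (
    "Using only the facts below, explain why '{entity}' is recommended "
    "to the learner.\nFacts:\n{facts}\nExplanation:"
)
prompt = build_prompt(template, "Machine Learning", triples)
print(prompt)
print(rouge_n_recall("the course covers gradient descent",
                     "this course covers gradient descent"))
```

Constraining the prompt to enumerated KG facts is what bounds hallucination risk: the model completes the template rather than retrieving facts from its parametric memory.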
Related papers
- GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework that integrates the parametric and non-parametric memories.
Our method facilitates a more logical and step-wise reasoning approach akin to experts' problem-solving, rather than gold answer retrieval.
arXiv Detail & Related papers (2024-10-11T03:05:06Z)
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with KnowReason.
arXiv Detail & Related papers (2024-06-30T10:49:32Z)
- Infusing Knowledge into Large Language Models with Contextual Prompts [5.865016596356753]
We propose a simple yet generalisable approach for knowledge infusion by generating prompts from the context in the input text.
Our experiments show the effectiveness of our approach which we evaluate by probing the fine-tuned LLMs.
arXiv Detail & Related papers (2024-03-03T11:19:26Z)
- A Comprehensive Study of Knowledge Editing for Large Language Models [82.65729336401027]
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication.
This paper defines the knowledge editing problem and provides a comprehensive review of cutting-edge approaches.
We introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches.
arXiv Detail & Related papers (2024-01-02T16:54:58Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Leveraging Knowledge and Reinforcement Learning for Enhanced Reliability of Language Models [10.10140327060947]
We explore a knowledge-guided LM ensembling approach that leverages reinforcement learning to integrate knowledge from ConceptNet and Wikipedia as knowledge graph embeddings.
This approach mimics human annotators resorting to external knowledge to compensate for information deficits in the datasets.
Across nine GLUE datasets, our research shows that ensembling strengthens reliability and accuracy scores, outperforming the state of the art.
arXiv Detail & Related papers (2023-08-25T16:11:08Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches in the vast entity pair space to extract diverse accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to free this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.