Enhancing Multi-Hop Knowledge Graph Reasoning through Reward Shaping
Techniques
- URL: http://arxiv.org/abs/2403.05801v1
- Date: Sat, 9 Mar 2024 05:34:07 GMT
- Title: Enhancing Multi-Hop Knowledge Graph Reasoning through Reward Shaping
Techniques
- Authors: Chen Li, Haotian Zheng, Yiping Sun, Cangqing Wang, Liqiang Yu, Che
Chang, Xinyu Tian, Bo Liu
- Abstract summary: This research elucidates the employment of reinforcement learning strategies, notably the REINFORCE algorithm, to navigate the intricacies inherent in multi-hop Knowledge Graph Reasoning (KG-R).
By partitioning the Unified Medical Language System (UMLS) benchmark dataset into rich and sparse subsets, we investigate the efficacy of pre-trained BERT embeddings and Prompt Learning methodologies to refine the reward shaping process.
- Score: 5.561202401558972
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the realm of computational knowledge representation, Knowledge Graph
Reasoning (KG-R) stands at the forefront of facilitating sophisticated
inferential capabilities across multifarious domains. The quintessence of this
research elucidates the employment of reinforcement learning (RL) strategies,
notably the REINFORCE algorithm, to navigate the intricacies inherent in
multi-hop KG-R. This investigation critically addresses the prevalent
challenges introduced by the inherent incompleteness of Knowledge Graphs (KGs),
which frequently results in erroneous inferential outcomes, manifesting as both
false negatives and misleading positives. By partitioning the Unified Medical
Language System (UMLS) benchmark dataset into rich and sparse subsets, we
investigate the efficacy of pre-trained BERT embeddings and Prompt Learning
methodologies to refine the reward shaping process. This approach not only
enhances the precision of multi-hop KG-R but also sets a new precedent for
future research in the field, aiming to improve the robustness and accuracy of
knowledge inference within complex KG frameworks. Our work contributes a novel
perspective to the discourse on KG reasoning, offering a methodological
advancement that aligns with the academic rigor and scholarly aspirations of
the Natural journal, promising to invigorate further advancements in the realm
of computational knowledge representation.
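The core idea in the abstract — a REINFORCE policy-gradient update whose terminal reward is shaped by pre-trained embedding similarity when the agent misses the target — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the entity names, the 8-dimensional random vectors standing in for BERT embeddings, and the `shaped_reward`/`reinforce_update` helpers are all hypothetical.

```python
import math
import random

random.seed(0)

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical stand-in for pre-trained BERT entity embeddings:
# in the paper these would come from a language model, not random draws.
EMB = {e: [random.gauss(0, 1) for _ in range(8)] for e in ["e0", "e1", "e2", "goal"]}

def shaped_reward(reached, target):
    # Hard reward of 1 when the multi-hop walk ends on the target entity;
    # otherwise a soft, embedding-similarity reward instead of a flat 0.
    # This is the reward-shaping step that mitigates false negatives
    # caused by KG incompleteness.
    if reached == target:
        return 1.0
    return max(0.0, cosine(EMB[reached], EMB[target]))

def reinforce_update(theta, log_grad, reward, lr=0.1):
    # REINFORCE: theta <- theta + lr * reward * grad(log pi(a|s)).
    # `log_grad` is the gradient of the log-probability of the sampled path.
    return [t + lr * reward * g for t, g in zip(theta, log_grad)]
```

A run that reaches `goal` receives the full reward of 1.0, while a run ending at a related-but-wrong entity still receives a positive learning signal proportional to its embedding similarity to the target, which is the mechanism the abstract credits for refining the reward shaping process.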
Related papers
- Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models [83.28737898989694]
Large language models (LLMs) struggle with faithful reasoning due to knowledge gaps and hallucinations.
We introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs.
GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
arXiv Detail & Related papers (2024-10-16T22:55:17Z) - GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework that integrates the parametric and non-parametric memories.
Our method facilitates a more logical and step-wise reasoning approach akin to experts' problem-solving, rather than gold answer retrieval.
arXiv Detail & Related papers (2024-10-11T03:05:06Z) - A review of feature selection strategies utilizing graph data structures and knowledge graphs [1.9570926122713395]
Feature selection in Knowledge Graphs (KGs) is increasingly utilized in diverse domains, including biomedical research, Natural Language Processing (NLP), and personalized recommendation systems.
This paper delves into the methodologies for feature selection within KGs, emphasizing their roles in enhancing machine learning (ML) model efficacy, hypothesis generation, and interpretability.
The paper concludes by charting future directions, including the development of scalable, dynamic feature selection algorithms and the integration of explainable AI principles to foster transparency and trust in KG-driven models.
arXiv Detail & Related papers (2024-06-21T04:50:02Z) - KG-RAG: Bridging the Gap Between Knowledge and Creativity [0.0]
Large Language Model Agents (LMAs) face issues such as information hallucinations, catastrophic forgetting, and limitations in processing long contexts.
This paper introduces a KG-RAG (Knowledge Graph-Retrieval Augmented Generation) pipeline to enhance the knowledge capabilities of LMAs.
Preliminary experiments on the ComplexWebQuestions dataset demonstrate notable improvements in the reduction of hallucinated content.
arXiv Detail & Related papers (2024-05-20T14:03:05Z) - Empowering Small-Scale Knowledge Graphs: A Strategy of Leveraging General-Purpose Knowledge Graphs for Enriched Embeddings [3.7759315989669058]
We introduce a framework for enriching embeddings of small-scale domain-specific Knowledge Graphs with well-established general-purpose KGs.
Experimental evaluations demonstrate a notable enhancement, with up to a 44% increase observed in the Hits@10 metric.
This relatively unexplored research direction can catalyze more frequent incorporation of KGs in knowledge-intensive tasks.
arXiv Detail & Related papers (2024-05-17T12:46:23Z) - FecTek: Enhancing Term Weight in Lexicon-Based Retrieval with Feature Context and Term-level Knowledge [54.61068946420894]
We introduce an innovative method built on FEature Context and TErm-level Knowledge modules.
To effectively enrich the feature context representations of term weight, the Feature Context Module (FCM) is introduced.
We also develop a term-level knowledge guidance module (TKGM) for effectively utilizing term-level knowledge to intelligently guide the modeling process of term weight.
arXiv Detail & Related papers (2024-04-18T12:58:36Z) - ODA: Observation-Driven Agent for integrating LLMs and Knowledge Graphs [4.3508051546373]
We introduce Observation-Driven Agent (ODA), a novel AI framework tailored for tasks involving knowledge graphs (KGs).
ODA incorporates KG reasoning abilities via global observation, which enhances reasoning capabilities through a cyclical paradigm of observation, action, and reflection.
ODA demonstrates state-of-the-art performance on several datasets, notably achieving accuracy improvements of 12.87% and 8.9%.
arXiv Detail & Related papers (2024-04-11T12:16:16Z) - An Enhanced Prompt-Based LLM Reasoning Scheme via Knowledge Graph-Integrated Collaboration [7.3636034708923255]
This study proposes a collaborative training-free reasoning scheme involving tight cooperation between Knowledge Graphs (KGs) and Large Language Models (LLMs).
Through such a cooperative approach, our scheme achieves more reliable knowledge-based reasoning and facilitates the tracing of the reasoning results.
arXiv Detail & Related papers (2024-02-07T15:56:17Z) - Knowledge Graph Context-Enhanced Diversified Recommendation [53.3142545812349]
This research explores the realm of diversified RecSys within the intricate context of knowledge graphs (KGs).
Our contributions include introducing two innovative metrics, Entity Coverage and Relation Coverage, which effectively quantify diversity within the KG domain.
In tandem with this, we introduce a novel technique named Conditional Alignment and Uniformity (CAU) which encodes KG item embeddings while preserving contextual integrity.
arXiv Detail & Related papers (2023-10-20T03:18:57Z) - BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from
Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches in the vast entity pair space to extract diverse accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
arXiv Detail & Related papers (2022-06-28T19:46:29Z) - Knowledge Graph Augmented Network Towards Multiview Representation
Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.