KELM: Knowledge Enhanced Pre-Trained Language Representations with
Message Passing on Hierarchical Relational Graphs
- URL: http://arxiv.org/abs/2109.04223v1
- Date: Thu, 9 Sep 2021 12:39:17 GMT
- Title: KELM: Knowledge Enhanced Pre-Trained Language Representations with
Message Passing on Hierarchical Relational Graphs
- Authors: Yinquan Lu, Haonan Lu, Guirong Fu, Qun Liu
- Abstract summary: We propose a novel knowledge-aware language model framework based on the fine-tuning process.
Our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT.
- Score: 26.557447199727758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incorporating factual knowledge into pre-trained language models (PLM) such
as BERT is an emerging trend in recent NLP studies. However, most of the
existing methods combine the external knowledge integration module with a
modified pre-training loss and re-implement the pre-training process on a
large-scale corpus. Re-pretraining these models is usually resource-consuming
and makes it difficult to adapt them to another domain with a different knowledge graph (KG).
Besides, those works either cannot embed knowledge context dynamically
according to the textual context or struggle with the knowledge ambiguity issue. In
this paper, we propose a novel knowledge-aware language model framework based
on the fine-tuning process, which equips a PLM with a unified knowledge-enhanced text
graph that contains both the text and multi-relational sub-graphs extracted from
the KG. We design a hierarchical relational-graph-based message passing mechanism,
which allows the representations of the injected KG and the text to mutually update
each other and can dynamically select among ambiguous mentioned entities that share
the same text. Our empirical results show that our model can efficiently
incorporate world knowledge from KGs into existing language models such as
BERT, and achieves a significant improvement on the machine reading comprehension
(MRC) task compared with other knowledge-enhanced models.
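The abstract does not spell out the update equations; purely as a rough sketch of the mutual-update idea (assuming PyTorch, made-up dimensions, and a plain cross-attention rule, and ignoring the hierarchical, relation-aware structure the paper actually uses), one message-passing step between BERT token states and injected KG entity states might look like this:

```python
# Illustrative sketch only: one mutual-update step between text-token states
# and injected KG-entity states. Names, shapes, and the attention-based update
# rule are assumptions; the real model uses hierarchical, relation-aware
# message passing.
import torch
import torch.nn as nn


class TextKGMessagePassing(nn.Module):
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.text_to_kg = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.kg_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.kg_norm = nn.LayerNorm(dim)
        self.text_norm = nn.LayerNorm(dim)

    def forward(self, text_states, entity_states):
        # Entity nodes read from the textual context; this is where ambiguous
        # mentions sharing the same surface string can be told apart.
        kg_msg, _ = self.text_to_kg(entity_states, text_states, text_states)
        entity_states = self.kg_norm(entity_states + kg_msg)
        # Text tokens read back from the contextualized entity nodes.
        text_msg, _ = self.kg_to_text(text_states, entity_states, entity_states)
        text_states = self.text_norm(text_states + text_msg)
        return text_states, entity_states


# Toy usage: 128 BERT token states and 16 candidate entity nodes, dim 768.
tokens, entities = torch.randn(1, 128, 768), torch.randn(1, 16, 768)
tokens, entities = TextKGMessagePassing()(tokens, entities)
```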
Related papers
- Comprehending Knowledge Graphs with Large Language Models for Recommender Systems [13.270018897057293]
We propose a novel method called CoLaKG, which leverages large language models for knowledge-aware recommendation.
We first extract subgraphs centered on each item from the KG and convert them into textual inputs for the LLM.
The LLM then outputs its comprehension of these item-centered subgraphs, which are subsequently transformed into semantic embeddings.
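Purely as a loose illustration of that pipeline (the one-hop neighborhood, the toy triples, and the llm_summarize / embed_text calls are hypothetical stand-ins, not CoLaKG's implementation):

```python
# Rough sketch of the described pipeline, not CoLaKG's code: extract an
# item-centered subgraph, verbalize it, then hand the text to an LLM.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

KG: List[Triple] = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "genre", "science fiction"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
]

def item_subgraph(item: str, kg: List[Triple]) -> List[Triple]:
    # One-hop neighborhood centered on the item (the depth is an assumption).
    return [t for t in kg if item in (t[0], t[2])]

def verbalize(triples: List[Triple]) -> str:
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

text = verbalize(item_subgraph("Inception", KG))
# comprehension = llm_summarize(text)         # hypothetical LLM call
# item_embedding = embed_text(comprehension)  # hypothetical embedding model
print(text)
```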
arXiv Detail & Related papers (2024-10-16T04:44:34Z) - Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework to improve the semantic understanding of entities in conversational recommender systems.
KERL uses a knowledge graph together with a pre-trained language model to learn these entity representations.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z) - Knowledge Graph-Augmented Language Models for Knowledge-Grounded
Dialogue Generation [58.65698688443091]
We propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating context-relevant and knowledge-grounded dialogues with Knowledge Graphs (KGs).
Our framework first retrieves the relevant subgraph from the KG and then enforces consistency across facts by perturbing their word embeddings conditioned on the retrieved subgraph.
We validate our SURGE framework on OpendialKG and KOMODIS datasets, showing that it generates high-quality dialogues that faithfully reflect the knowledge from KG.
arXiv Detail & Related papers (2023-05-30T08:36:45Z) - Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph
Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
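A minimal sketch of the retrieval-as-prompt idea (the word-overlap retriever, the reference bank, and the prompt format are assumptions, not RAP's actual method):

```python
# Loose sketch, not RAP's implementation: retrieve schema-aware references
# most similar to the input and prepend them as a prompt for extraction.
def retrieve_references(sample, reference_bank, k=1):
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    return sorted(reference_bank, key=lambda r: overlap(sample, r["text"]), reverse=True)[:k]

reference_bank = [
    {"text": "Marie Curie was born in Warsaw.", "schema": "(person, born_in, city)"},
    {"text": "Apple acquired Beats in 2014.", "schema": "(company, acquired, company)"},
]

sample = "Alan Turing was born in London."
refs = retrieve_references(sample, reference_bank)
prompt = "\n".join(f"Reference: {r['text']} -> {r['schema']}" for r in refs) + f"\nInput: {sample}"
print(prompt)
```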
arXiv Detail & Related papers (2022-10-19T16:40:28Z) - KI-BERT: Infusing Knowledge Context for Better Language and Domain
Understanding [0.0]
We propose a technique to infuse knowledge context from knowledge graphs for conceptual and ambiguous entities into models based on transformer architecture.
Our novel technique projects knowledge graph embeddings into a homogeneous vector space, introduces new token types for entities, aligns entity position ids, and adds a selective attention mechanism.
We take BERT as a baseline model and implement "Knowledge Infused BERT" by infusing knowledge context from ConceptNet and WordNet.
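A minimal, self-contained sketch of that injection (assumed shapes, a simple linear projection, and a toy selective-attention mask; not KI-BERT's released code):

```python
# Sketch of the described injection: KG entity embeddings are projected into
# the token embedding space and appended with their own token-type id, while
# a mask limits which token positions may attend to them.
import torch
import torch.nn as nn

hidden, kg_dim = 768, 100
project = nn.Linear(kg_dim, hidden)        # KG space -> BERT token space
token_type_emb = nn.Embedding(3, hidden)   # 0/1 as in BERT, 2 for entities (assumption)

tokens = torch.randn(1, 6, hidden)         # subword embeddings (toy)
kg_entities = torch.randn(1, 2, kg_dim)    # ConceptNet/WordNet entity vectors (toy)

entity_states = project(kg_entities) + token_type_emb(torch.full((1, 2), 2))
sequence = torch.cat([tokens, entity_states], dim=1)    # length 8

# Selective attention: only the mention positions (here 1 and 4) may see the
# appended entities; everything else keeps standard token-to-token attention.
attn_mask = torch.zeros(8, 8, dtype=torch.bool)         # False = attention allowed
attn_mask[:6, 6:] = True                                # block token -> entity by default
attn_mask[[1, 4], 6:] = False                           # re-enable for mention positions
```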
arXiv Detail & Related papers (2021-04-09T16:15:31Z) - Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced
Language Model Pre-training [22.534866015730664]
We verbalize the entire English Wikidata KG.
We show that verbalizing a comprehensive, encyclopedic KG like Wikidata makes it possible to integrate structured KGs and natural language corpora.
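As a toy illustration only (the paper trains a verbalizer that generates fluent sentences; the fixed template below is just a stand-in for the idea):

```python
# Toy template verbalization: turn Wikidata-style triples into text that can
# be mixed into a pre-training corpus. The template is an assumption, not the
# paper's trained model.
triples = [
    ("Douglas Adams", "occupation", "writer"),
    ("Douglas Adams", "notable work", "The Hitchhiker's Guide to the Galaxy"),
]

def verbalize(subject, facts):
    return f"{subject}: " + "; ".join(f"{rel} {obj}" for _, rel, obj in facts) + "."

print(verbalize("Douglas Adams", triples))
```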
arXiv Detail & Related papers (2020-10-23T22:14:50Z) - JAKET: Joint Pre-training of Knowledge Graph and Language Understanding [73.43768772121985]
We propose a novel joint pre-training framework, JAKET, to model both the knowledge graph and language.
The knowledge module and language module provide essential information to mutually assist each other.
Our design enables the pre-trained model to easily adapt to unseen knowledge graphs in new domains.
arXiv Detail & Related papers (2020-10-02T05:53:36Z) - CokeBERT: Contextual Knowledge Selection and Embedding towards Enhanced
Pre-Trained Language Models [103.18329049830152]
We propose a novel framework named Coke to dynamically select contextual knowledge and embed knowledge context according to textual context.
Our experimental results show that Coke outperforms various baselines on typical knowledge-driven NLP tasks.
Coke can describe the semantics of text-related knowledge in a more interpretable form than conventional PLMs.
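A rough sketch of dynamic knowledge selection as soft attention over candidate facts (the dot-product scoring and toy vectors are assumptions, not Coke's actual selection module):

```python
# Illustration only: score candidate KG facts against the textual context and
# embed the knowledge context as a relevance-weighted sum.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
context = torch.randn(32)              # pooled textual-context vector (toy)
candidate_facts = torch.randn(5, 32)   # embeddings of 5 candidate KG triples (toy)

scores = candidate_facts @ context               # relevance of each fact to the text
weights = F.softmax(scores, dim=0)               # irrelevant facts get ~0 weight
knowledge_context = weights @ candidate_facts    # embedded knowledge for this text
print(weights)
```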
arXiv Detail & Related papers (2020-09-29T12:29:04Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
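A toy illustration of entity-level masking (the span-selection policy and masking probability are assumptions, not the paper's exact scheme):

```python
# Toy entity-level masking for MLM pre-training: mask whole entity mentions
# rather than random subwords, so they must be recovered from context.
import random

tokens = ["the", "eiffel", "tower", "is", "in", "paris"]
entity_spans = [(1, 3), (5, 6)]  # [start, end) entity mentions (toy linking)

def mask_entities(tokens, spans, prob=0.5, mask_token="[MASK]"):
    out = list(tokens)
    for start, end in spans:
        if random.random() < prob:
            out[start:end] = [mask_token] * (end - start)
    return out

print(mask_entities(tokens, entity_spans, prob=1.0))
# ['the', '[MASK]', '[MASK]', 'is', 'in', '[MASK]']
```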
arXiv Detail & Related papers (2020-04-29T14:22:42Z)