DictBERT: Dictionary Description Knowledge Enhanced Language Model Pre-training via Contrastive Learning
- URL: http://arxiv.org/abs/2208.00635v1
- Date: Mon, 1 Aug 2022 06:43:19 GMT
- Title: DictBERT: Dictionary Description Knowledge Enhanced Language Model Pre-training via Contrastive Learning
- Authors: Qianglong Chen, Feng-Lin Li, Guohai Xu, Ming Yan, Ji Zhang, Yin Zhang
- Abstract summary: Pre-trained language models (PLMs) are shown to lack knowledge when dealing with knowledge-driven tasks.
We propose DictBERT, a novel approach that enhances PLMs with dictionary knowledge.
We evaluate our approach on a variety of knowledge-driven and language understanding tasks, including NER, relation extraction, CommonsenseQA, OpenBookQA and GLUE.
- Score: 18.838291575019504
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Although pre-trained language models (PLMs) have achieved state-of-the-art
performance on various natural language processing (NLP) tasks, they have been
shown to lack knowledge when dealing with knowledge-driven tasks. Despite the
many efforts made to inject knowledge into PLMs, this problem remains open. To
address the challenge, we propose DictBERT, a novel approach that enhances PLMs
with dictionary knowledge, which is easier to acquire than a knowledge graph
(KG). During pre-training, we present two novel pre-training tasks that inject
dictionary knowledge into PLMs via contrastive learning: dictionary entry
prediction and entry description discrimination. During fine-tuning, we use the
pre-trained DictBERT as a plug-in knowledge base (KB) to retrieve implicit
knowledge for entries identified in an input sequence, and infuse the retrieved
knowledge into the input to enhance its representation via a novel extra-hop
attention mechanism. We evaluate our approach on a variety of knowledge-driven
and language understanding tasks, including NER, relation extraction,
CommonsenseQA, OpenBookQA and GLUE. Experimental results demonstrate that our
model can significantly improve typical PLMs: it gains 0.5%, 2.9%, 9.0%, 7.1%
and 3.3% over BERT-large on these tasks, respectively, and is also effective on
RoBERTa-large.
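The abstract does not spell out how the contrastive objectives are implemented. The following is a minimal sketch of what the entry description discrimination task could look like, assuming an InfoNCE-style loss with in-batch negatives over (entry, description) pairs; the encoder choice, [CLS] pooling, and temperature value are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch (not the authors' released code): an InfoNCE-style
# "entry description discrimination" objective. Each dictionary entry should
# match its own description, with the other descriptions in the batch serving
# as negatives. Encoder, [CLS] pooling, in-batch negatives and the 0.05
# temperature are assumptions made here for illustration.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
encoder = AutoModel.from_pretrained("bert-large-uncased")

def embed(texts):
    """Encode a list of strings and return their [CLS] vectors."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]           # (B, H)

def entry_description_discrimination_loss(entries, descriptions, temperature=0.05):
    e = F.normalize(embed(entries), dim=-1)                   # (B, H)
    d = F.normalize(embed(descriptions), dim=-1)              # (B, H)
    logits = e @ d.t() / temperature                          # (B, B) similarities
    targets = torch.arange(len(entries))                      # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with two (entry, description) pairs from a dictionary.
loss = entry_description_discrimination_loss(
    ["photosynthesis", "gravity"],
    ["the process by which green plants use sunlight to synthesize food",
     "the force that attracts a body toward the center of the earth"],
)
loss.backward()
```

Likewise, the fine-tuning stage mentions an "extra-hop attention" that infuses retrieved entry descriptions into the input representation, but gives no equations. One plausible reading, shown below purely as an assumption-laden illustration, is a single extra cross-attention hop from the input tokens over the encoded descriptions, merged back through a residual connection; the module and parameter names are hypothetical.

```python
# Hypothetical illustration of one reading of "extra-hop attention": input token
# states take one extra cross-attention hop over the states of the retrieved
# dictionary descriptions, then are merged back via a residual connection.
import torch.nn as nn

class ExtraHopAttention(nn.Module):
    def __init__(self, hidden_size=1024, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, token_states, entry_states):
        # token_states: (B, L, H) contextual representations of the input sequence
        # entry_states: (B, K, H) representations of retrieved entry descriptions
        infused, _ = self.cross_attn(token_states, entry_states, entry_states)
        return self.norm(token_states + infused)
```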
Related papers
- TRELM: Towards Robust and Efficient Pre-training for Knowledge-Enhanced Language Models [31.209774088374374]
This paper introduces TRELM, a Robust and Efficient Pre-training framework for Knowledge-Enhanced Language Models.
We employ a robust approach to inject knowledge triples and a knowledge-augmented memory bank to capture valuable information.
We show that TRELM reduces pre-training time by at least 50% and outperforms other KEPLMs in knowledge probing tasks and multiple knowledge-aware language understanding tasks.
arXiv Detail & Related papers (2024-03-17T13:04:35Z) - Knowledge Rumination for Pre-trained Language Models [77.55888291165462]
We propose a new paradigm dubbed Knowledge Rumination to help the pre-trained language model utilize related latent knowledge without retrieving it from the external corpus.
We apply the proposed knowledge rumination to various language models, including RoBERTa, DeBERTa, and GPT-3.
arXiv Detail & Related papers (2023-05-15T15:47:09Z) - UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language
Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z) - A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs).
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG) and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
arXiv Detail & Related papers (2022-11-11T04:29:02Z) - Knowledge Prompting in Pre-trained Language Model for Natural Language
Understanding [24.315130086787374]
We propose a knowledge-prompting-based PLM framework KP-PLM.
This framework can be flexibly combined with existing mainstream PLMs.
To further leverage the factual knowledge from these prompts, we propose two novel knowledge-aware self-supervised tasks.
arXiv Detail & Related papers (2022-10-16T13:36:57Z) - Knowledgeable Salient Span Mask for Enhancing Language Models as
Knowledge Base [51.55027623439027]
We develop two solutions to help the model learn more knowledge from unstructured text in a fully self-supervised manner.
To the best of our knowledge, we are the first to explore fully self-supervised learning of knowledge in continual pre-training.
arXiv Detail & Related papers (2022-04-17T12:33:34Z) - TegTok: Augmenting Text Generation via Task-specific and Open-world
Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively.
arXiv Detail & Related papers (2022-03-16T10:37:59Z) - CoLAKE: Contextualized Language and Knowledge Embedding [81.90416952762803]
We propose the Contextualized Language and Knowledge Embedding (CoLAKE).
CoLAKE jointly learns contextualized representation for both language and knowledge with the extended objective.
We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks.
arXiv Detail & Related papers (2020-10-01T11:39:32Z) - REALM: Retrieval-Augmented Language Model Pre-Training [37.3178586179607]
We augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia.
For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner.
We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA).
arXiv Detail & Related papers (2020-02-10T18:40:59Z)