The Life Cycle of Knowledge in Big Language Models: A Survey
- URL: http://arxiv.org/abs/2303.07616v1
- Date: Tue, 14 Mar 2023 03:49:22 GMT
- Title: The Life Cycle of Knowledge in Big Language Models: A Survey
- Authors: Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun
- Abstract summary: The success of pre-trained language models (PLMs) has drawn significant attention to how knowledge is acquired, maintained, updated, and used by language models.
Despite the enormous number of related studies, a unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes is still lacking.
We revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods and investigating how knowledge circulates as it is built, maintained, and used.
- Score: 39.955688635216056
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge plays a critical role in artificial intelligence. Recently, the extensive success of pre-trained language models (PLMs) has drawn significant attention to how knowledge is acquired, maintained, updated, and used by language models. Despite the enormous number of related studies, a unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes is still lacking, which may prevent us from understanding the connections between current lines of progress or recognizing existing limitations. In this survey, we revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods and investigating how knowledge circulates as it is built, maintained, and used. To this end, we systematically review existing studies of each period of the knowledge life cycle, summarize the main challenges and current limitations, and discuss future directions.
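The survey itself is conceptual and contains no code, but a concrete way to see the "use" stage of this life cycle is cloze-style knowledge probing, in which factual knowledge stored in a PLM's parameters is queried with fill-in-the-blank prompts. The snippet below is a minimal, purely illustrative sketch under assumed tooling (the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which is prescribed by the survey):

```python
# Illustrative cloze-style probe of factual knowledge held in a PLM's parameters.
# Assumes the Hugging Face `transformers` library is installed; the model choice
# is an arbitrary example, not something prescribed by the survey.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Phrase a relational fact as a cloze statement and inspect the top predictions.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']}\t{prediction['score']:.3f}")
```

Completions that assign high probability to the ground-truth object are commonly taken as evidence that the corresponding fact was acquired during pre-training.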
Related papers
- Knowledge Mechanisms in Large Language Models: A Survey and Perspective [88.51320482620679]
This paper reviews knowledge mechanism analysis through a novel taxonomy covering knowledge utilization and evolution.
We discuss what knowledge LLMs have learned, the reasons for the fragility of parametric knowledge, and the potential dark knowledge (a hypothesis) that will be challenging to address.
arXiv Detail & Related papers (2024-07-22T06:15:59Z)
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with KnowReason.
arXiv Detail & Related papers (2024-06-30T10:49:32Z) - Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which requires combining multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- GrowOVER: How Can LLMs Adapt to Growing Real-World Knowledge? [36.987716816134984]
We propose GrowOVER-QA and GrowOVER-Dialogue, dynamic open-domain QA and dialogue benchmarks that undergo a continuous cycle of updates.
Our research indicates that retrieval-augmented language models (RaLMs) struggle with knowledge they have not been trained on or that has recently been updated.
We introduce a novel retrieval-interactive language model framework, in which the language model evaluates and reflects on its answers to trigger further re-retrieval (a minimal illustrative sketch of such a loop appears after this list).
arXiv Detail & Related papers (2024-06-09T01:16:04Z)
- Towards Incremental Learning in Large Language Models: A Critical Review [0.0]
This review provides a comprehensive analysis of incremental learning in Large Language Models.
It synthesizes the state-of-the-art incremental learning paradigms, including continual learning, meta-learning, parameter-efficient learning, and mixture-of-experts learning.
An important finding is that many of these approaches do not update the core model, and none of them updates incrementally in real time.
arXiv Detail & Related papers (2024-04-28T20:44:53Z)
- Online Continual Knowledge Learning for Language Models [3.654507524092343]
Large Language Models (LLMs) serve as repositories of extensive world knowledge, enabling them to perform tasks such as question-answering and fact-checking.
Online Continual Knowledge Learning (OCKL) aims to manage the dynamic nature of world knowledge in LMs under real-time constraints.
arXiv Detail & Related papers (2023-11-16T07:31:03Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z)
- LM-CORE: Language Models with Contextually Relevant External Knowledge [13.451001884972033]
We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing amounts of knowledge and resource requirements.
We present LM-CORE, a general framework to achieve this, which allows decoupling of language model training from the external knowledge source.
Experimental results show that LM-CORE, having access to external knowledge, achieves significant and robust outperformance over state-of-the-art knowledge-enhanced language models on knowledge probing tasks.
arXiv Detail & Related papers (2022-08-12T18:59:37Z)
- Towards Continual Knowledge Learning of Language Models [11.000501711652829]
Language models (LMs) are known to encode world knowledge in their parameters as they pretrain on vast web corpora.
In real-world scenarios, the world knowledge stored in the LMs can quickly become outdated as the world changes.
We formulate a new continual learning (CL) problem called Continual Knowledge Learning (CKL).
arXiv Detail & Related papers (2021-10-07T07:00:57Z)
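Several entries above (notably GrowOVER's retrieval-interactive framework and LM-CORE's decoupled external knowledge source) revolve around the same basic loop: retrieve evidence, generate an answer, and, if the model is not confident, reflect and re-retrieve. The sketch below is a minimal, hypothetical illustration of that loop; retrieve, generate, and self_evaluate are placeholder callables standing in for a real retriever and language model, not APIs from any of the cited papers.

```python
# Hypothetical sketch of a retrieve -> generate -> reflect -> re-retrieve loop,
# loosely inspired by retrieval-interactive frameworks such as GrowOVER.
# All callables passed in are placeholders, not APIs from the cited papers.
from typing import Callable, List

def answer_with_reflection(
    question: str,
    retrieve: Callable[[str], List[str]],                   # query -> evidence passages
    generate: Callable[[str, List[str]], str],               # question + evidence -> answer
    self_evaluate: Callable[[str, str, List[str]], float],   # confidence score in [0, 1]
    max_rounds: int = 3,
    threshold: float = 0.7,
) -> str:
    query = question
    answer = ""
    for _ in range(max_rounds):
        evidence = retrieve(query)
        answer = generate(question, evidence)
        confidence = self_evaluate(question, answer, evidence)
        if confidence >= threshold:
            break  # the model judges its own answer sufficiently supported
        # Otherwise reformulate the query using the tentative answer and retry.
        query = f"{question} {answer}"
    return answer
```

The design point shared by these papers is that the language model's parameters are not the only knowledge store: an external, updatable source is consulted at inference time, which is what lets the system track knowledge that changes after pre-training.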