Know More, Know Clearer: A Meta-Cognitive Framework for Knowledge Augmentation in Large Language Models
- URL: http://arxiv.org/abs/2602.12996v1
- Date: Fri, 13 Feb 2026 15:07:35 GMT
- Title: Know More, Know Clearer: A Meta-Cognitive Framework for Knowledge Augmentation in Large Language Models
- Authors: Hao Chen, Ye He, Yuchun Fan, Yukun Yan, Zhenghao Liu, Qingfu Zhu, Maosong Sun, Wanxiang Che
- Abstract summary: We propose a novel meta-cognitive framework for reliable knowledge augmentation via differentiated intervention and alignment. Our approach leverages internal cognitive signals to partition the knowledge space into mastered, confused, and missing regions, guiding targeted knowledge expansion. Our framework consistently outperforms strong baselines, validating its rationality in not only enhancing knowledge capabilities but also fostering cognitive behaviors that better distinguish knowns from unknowns.
- Score: 80.21037538996553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge augmentation has significantly enhanced the performance of Large Language Models (LLMs) in knowledge-intensive tasks. However, existing methods typically operate on the simplistic premise that model performance equates with internal knowledge, overlooking the knowledge-confidence gaps that lead to overconfident errors or uncertain truths. To bridge this gap, we propose a novel meta-cognitive framework for reliable knowledge augmentation via differentiated intervention and alignment. Our approach leverages internal cognitive signals to partition the knowledge space into mastered, confused, and missing regions, guiding targeted knowledge expansion. Furthermore, we introduce a cognitive consistency mechanism to synchronize subjective certainty with objective accuracy, ensuring calibrated knowledge boundaries. Extensive experiments demonstrate that our framework consistently outperforms strong baselines, validating its rationality in not only enhancing knowledge capabilities but also fostering cognitive behaviors that better distinguish knowns from unknowns.
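To make the partition concrete, here is a minimal Python sketch (our illustration, not the authors' released code; the `Probe` type, the confidence threshold, and the region-assignment rule are all assumptions) of how an internal confidence signal can be crossed with objective accuracy to split probed queries into mastered, confused, and missing regions. The confused region captures exactly the overconfident errors and uncertain truths the abstract highlights.

```python
# Illustrative sketch only (assumed names and thresholds, not the paper's code):
# cross a model's internal confidence signal with objective accuracy to split
# probed questions into mastered / confused / missing knowledge regions.
from dataclasses import dataclass

@dataclass
class Probe:
    question: str
    correct: bool      # objective accuracy of the model's answer
    confidence: float  # internal cognitive signal in [0, 1], e.g. mean token probability

def partition(probes, conf_threshold: float = 0.7):
    """mastered: confident and correct; confused: confidence and accuracy
    disagree (overconfident error or uncertain truth); missing: neither."""
    regions = {"mastered": [], "confused": [], "missing": []}
    for p in probes:
        confident = p.confidence >= conf_threshold
        if confident and p.correct:
            regions["mastered"].append(p)
        elif confident != p.correct:  # overconfident error or uncertain truth
            regions["confused"].append(p)
        else:
            regions["missing"].append(p)
    return regions

if __name__ == "__main__":
    probes = [
        Probe("Capital of France?", correct=True, confidence=0.95),              # mastered
        Probe("Year the Eiffel Tower opened?", correct=False, confidence=0.90),  # overconfident error
        Probe("Author of 'Middlemarch'?", correct=True, confidence=0.40),        # uncertain truth
        Probe("GDP of Tuvalu in 1990?", correct=False, confidence=0.20),         # missing
    ]
    for name, items in partition(probes).items():
        print(name, [p.question for p in items])
```

Under this reading, targeted knowledge expansion would inject new facts only for the missing region, while the cognitive consistency mechanism would realign confidence for the confused region rather than add knowledge.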
Related papers
- Probing the Knowledge Boundary: An Interactive Agentic Framework for Deep Knowledge Extraction [29.717986496967978]
We propose an interactive agentic framework to systematically extract and quantify the knowledge of Large Language Models. Our method includes four adaptive exploration policies to probe knowledge at different granularities. We observe a clear knowledge scaling law, where larger models consistently extract more knowledge.
arXiv Detail & Related papers (2026-02-01T01:43:44Z) - Knowledge-Level Consistency Reinforcement Learning: Dual-Fact Alignment for Long-Form Factuality [27.687276551678583]
Hallucination and factuality deficits remain key obstacles to the reliability of large language models. We propose a novel framework that focuses on the knowledge consistency between the policy model's expressed knowledge and the base model's parametric knowledge.
arXiv Detail & Related papers (2025-09-28T09:23:06Z) - Towards Meta-Cognitive Knowledge Editing for Multimodal LLMs [71.8547241246169]
We introduce CogEdit, a novel benchmark designed to evaluate MLLMs' meta-cognitive knowledge editing abilities. We propose MIND, a framework that constructs a meta-knowledge memory for self-awareness, employs game-theoretic interactions to monitor knowledge activation, and incorporates label refinement for noise-robust updates.
arXiv Detail & Related papers (2025-09-06T13:26:04Z) - Two Experts Are All You Need for Steering Thinking: Reinforcing Cognitive Effort in MoE Reasoning Models Without Additional Training [86.70255651945602]
We introduce a novel inference-time steering methodology called Reinforcing Cognitive Experts (RICE). RICE aims to improve reasoning performance without additional training or complex heuristics. Empirical evaluations with leading MoE-based LRMs demonstrate noticeable and consistent improvements in reasoning accuracy, cognitive efficiency, and cross-domain generalization.
arXiv Detail & Related papers (2025-05-20T17:59:16Z) - Unveiling Knowledge Utilization Mechanisms in LLM-based Retrieval-Augmented Generation [77.10390725623125]
Retrieval-augmented generation (RAG) is widely employed to expand the knowledge scope of LLMs. Since RAG has shown promise in knowledge-intensive tasks like open-domain question answering, its broader application to complex tasks and intelligent assistants has further advanced its utility. We present a systematic investigation of the intrinsic mechanisms by which RAG models integrate internal (parametric) and external (retrieved) knowledge.
arXiv Detail & Related papers (2025-05-17T13:13:13Z) - UniKnow: A Unified Framework for Reliable Language Model Behavior across Parametric and External Knowledge [14.81530569173485]
We introduce UniKnow, a Unified framework for reliable LM behavior across parametric and external knowledge. UniKnow enables controlled evaluation across knowledge scenarios such as knowledge conflict, distraction, and absence conditions.
arXiv Detail & Related papers (2025-02-19T11:49:23Z) - Decoding Knowledge in Large Language Models: A Framework for Categorization and Comprehension [14.039653386385519]
Large language models (LLMs) acquire, retain, and apply knowledge. This paper introduces a novel framework, K-(CSA)2, which categorizes LLM knowledge along two dimensions: correctness and confidence.
arXiv Detail & Related papers (2025-01-02T16:34:10Z) - InfuserKI: Enhancing Large Language Models with Knowledge Graphs via Infuser-Guided Knowledge Integration [58.61492157691623]
Methods for integrating knowledge have been developed, which augment LLMs with domain-specific knowledge graphs through external modules. Our research focuses on a novel problem: efficiently integrating unknown knowledge into LLMs without unnecessary overlap of known knowledge. A risk of introducing new knowledge is the potential forgetting of existing knowledge.
arXiv Detail & Related papers (2024-02-18T03:36:26Z) - UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z) - Lexical Knowledge Internalization for Neural Dialog Generation [36.27946635687281]
We propose knowledge internalization (KI), which aims to complement neural dialog models with lexical knowledge.
To tackle the challenge posed by the large scale of lexical knowledge, we adopt a contrastive learning approach and create an effective token-level lexical knowledge retriever.
arXiv Detail & Related papers (2022-05-04T08:23:44Z)