Beyond Single-Value Metrics: Evaluating and Enhancing LLM Unlearning with Cognitive Diagnosis
- URL: http://arxiv.org/abs/2502.13996v1
- Date: Wed, 19 Feb 2025 06:56:59 GMT
- Title: Beyond Single-Value Metrics: Evaluating and Enhancing LLM Unlearning with Cognitive Diagnosis
- Authors: Yicheng Lang, Kehan Guo, Yue Huang, Yujun Zhou, Haomin Zhuang, Tianyu Yang, Yao Su, Xiangliang Zhang,
- Abstract summary: UNCD (UNlearning evaluation via Cognitive Diagnosis) is a novel framework for fine-grained evaluation of LLM unlearning. Our dedicated benchmark, UNCD-Cyber, provides a detailed assessment of the removal of dangerous capabilities.
- Score: 34.62178125699054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the widespread use of LLMs and rising ethical and safety concerns, LLM unlearning methods have been developed to remove harmful knowledge and undesirable capabilities. In this context, evaluations are mostly based on single-value metrics such as QA accuracy. However, these metrics often fail to capture the nuanced retention of harmful knowledge components, making it difficult to assess the true effectiveness of unlearning. To address this issue, we propose UNCD (UNlearning evaluation via Cognitive Diagnosis), a novel framework that leverages Cognitive Diagnosis Modeling for fine-grained evaluation of LLM unlearning. Our dedicated benchmark, UNCD-Cyber, provides a detailed assessment of the removal of dangerous capabilities. Moreover, we introduce UNCD-Agent, which refines unlearning by diagnosing knowledge remnants and generating targeted unlearning data. Extensive experiments across eight unlearning methods and two base models demonstrate that UNCD not only enhances evaluation but also effectively facilitates the removal of harmful LLM abilities.
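To make the idea of cognitive-diagnosis-style evaluation concrete, the sketch below fits a minimal DINA-style model: given a Q-matrix mapping benchmark questions to knowledge components and the unlearned model's per-question correctness, it recovers which components still appear mastered. This is a generic illustration of the cognitive-diagnosis modeling family under assumed slip/guess parameters and toy data, not the UNCD implementation.

```python
import numpy as np

def dina_mastery(responses, Q, slip=0.1, guess=0.1):
    """Return the 0/1 mastery profile over knowledge components that best
    explains the model's answers under a simple DINA-style likelihood.

    responses : (n_questions,) 0/1 array -- unlearned LLM's correctness per item
    Q         : (n_questions, n_skills) 0/1 array -- components each item requires
    """
    n_skills = Q.shape[1]
    best_profile, best_ll = None, -np.inf
    for mask in range(2 ** n_skills):                       # enumerate all profiles
        alpha = np.array([(mask >> k) & 1 for k in range(n_skills)])
        eta = np.all(alpha >= Q, axis=1)                    # has every required component?
        p_correct = np.where(eta, 1 - slip, guess)          # slip/guess response model
        ll = np.sum(responses * np.log(p_correct)
                    + (1 - responses) * np.log(1 - p_correct))
        if ll > best_ll:
            best_profile, best_ll = alpha, ll
    return best_profile

# Toy usage: 4 questions over 2 harmful-knowledge components.
Q = np.array([[1, 0], [1, 0], [0, 1], [1, 1]])   # question -> required components
responses = np.array([1, 1, 0, 0])               # unlearned model's correctness
print(dina_mastery(responses, Q))                # -> [1 0]
```

In the toy run, the recovered profile [1, 0] flags that the first harmful-knowledge component survived unlearning even though half of the questions are now answered incorrectly, which is exactly the kind of remnant a single accuracy number can hide.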
Related papers
- Do LLMs estimate uncertainty well in instruction-following? [9.081508933326644]
Large language models (LLMs) could be valuable personal AI agents across various domains, provided they can precisely follow user instructions.
We present the first systematic evaluation of the uncertainty estimation abilities of LLMs in the context of instruction-following.
Our findings show that existing uncertainty methods struggle, particularly when models make subtle errors in instruction following.
arXiv Detail & Related papers (2024-10-18T16:32:10Z)
- Position: LLM Unlearning Benchmarks are Weak Measures of Progress [31.957968729934745]
We find that existing benchmarks provide an overly optimistic and potentially misleading view on the effectiveness of candidate unlearning methods.
We identify that existing benchmarks are particularly vulnerable to modifications that introduce even loose dependencies between the forget and retain information.
arXiv Detail & Related papers (2024-10-03T18:07:25Z)
- Diagnosing and Remedying Knowledge Deficiencies in LLMs via Label-free Curricular Meaningful Learning [42.38865072597821]
Large Language Models (LLMs) are versatile and demonstrate impressive generalization ability.
They still exhibit reasoning mistakes, often stemming from knowledge deficiencies.
We propose a label-free curricular meaningful learning framework (LaMer) to diagnose and remedy the knowledge deficiencies of LLMs.
arXiv Detail & Related papers (2024-08-21T08:39:49Z)
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [52.03511469562013]
We introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components.
A Knowledge Unlearning Induction module targets specific knowledge for removal using an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal.
An Iterative Unlearning Refinement module dynamically adjusts the unlearning process through ongoing evaluation and updates.
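As a rough illustration of how such components could interact, the sketch below combines an unlearning term on forget data, a preservation term on retain data (standing in for the contrastive enhancement module), and a periodic evaluation that re-balances the two. The model interface (a Hugging Face-style forward pass returning `.loss`), the data loaders, the `evaluate_forgetting` callback, and the re-balancing rule are all assumptions; this is not the authors' ICU implementation.

```python
import torch

def iterative_unlearn(model, forget_loader, retain_loader, evaluate_forgetting,
                      steps=100, lr=1e-5, beta=1.0):
    """Toy iterative unlearning loop: unlearning loss + preservation loss,
    with an evaluation step that adjusts the trade-off weight `beta`."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for step, (forget_batch, retain_batch) in enumerate(
            zip(forget_loader, retain_loader)):
        if step >= steps:
            break
        opt.zero_grad()
        forget_loss = model(**forget_batch).loss      # push this loss *up* (negated below)
        retain_loss = model(**retain_batch).loss      # keep ordinary LM loss *down*
        (-forget_loss + beta * retain_loss).backward()
        opt.step()
        if step % 10 == 0:                            # refinement via ongoing evaluation
            remaining = evaluate_forgetting(model)    # e.g., forget-set accuracy
            beta = 0.5 if remaining > 0.2 else 2.0    # unlearn harder vs. preserve more
    return model
```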
arXiv Detail & Related papers (2024-07-25T07:09:35Z)
- How Reliable are LLMs as Knowledge Bases? Re-thinking Facutality and Consistency [60.25969380388974]
Large Language Models (LLMs) are increasingly explored as knowledge bases (KBs).
Current evaluation methods focus too narrowly on knowledge retention, overlooking other crucial criteria for reliable performance.
We propose new criteria and metrics to quantify factuality and consistency, leading to a final reliability score.
arXiv Detail & Related papers (2024-07-18T15:20:18Z)
- Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach unlearning in large language models (LLMs) via gradient ascent (GA).
Despite their simplicity and efficiency, we suggest that GA-based methods are prone to excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
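For reference, a gradient-ascent unlearning step in its simplest form looks like the sketch below, with one naive control (a loss ceiling that halts further ascent) standing in for the controlling methods proposed in the paper, which are not reproduced here; `model` and `forget_batch` are assumed Hugging Face-style placeholders.

```python
import torch

def ga_step(model, forget_batch, optimizer, loss_ceiling=8.0):
    """One gradient-ascent unlearning step on a forget-set batch, skipped once
    the forget loss already exceeds a ceiling (a crude guard against
    excessive unlearning)."""
    optimizer.zero_grad()
    loss = model(**forget_batch).loss        # ordinary LM loss on forget data
    if loss.item() >= loss_ceiling:          # already "forgotten enough"
        return loss.item(), False            # skip the update entirely
    (-loss).backward()                       # gradient *ascent* on the forget loss
    optimizer.step()
    return loss.item(), True
```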
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
- An Information Theoretic Evaluation Metric For Strong Unlearning [20.143627174765985]
We introduce the Information Difference Index (IDI), a novel white-box metric inspired by information theory.
IDI quantifies retained information in intermediate features by measuring mutual information between those features and the labels to be forgotten.
Our experiments demonstrate that IDI effectively measures the degree of unlearning across various datasets and architectures.
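The sketch below is an assumed simplification of an IDI-style measurement: estimate mutual information between intermediate features and the forget-set labels with a generic k-NN estimator from scikit-learn, before and after unlearning, and report the normalized drop. The paper's exact estimator and normalization are not reproduced; the function names and the per-dimension averaging are illustrative choices.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def retained_information(features, forget_labels):
    """features: (n_samples, d) intermediate activations;
    forget_labels: (n_samples,) labels that should have been forgotten.
    The mean per-dimension MI is a crude stand-in for the joint mutual information."""
    return mutual_info_classif(features, forget_labels).mean()

def idi_like_score(feats_before, feats_after, forget_labels):
    """Normalized drop in retained information: 1.0 means all forget-label
    information was removed from the features, 0.0 means none was."""
    before = retained_information(feats_before, forget_labels)
    after = retained_information(feats_after, forget_labels)
    return (before - after) / max(before, 1e-12)
```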
arXiv Detail & Related papers (2024-05-28T06:57:01Z)
- Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress; base models are commonly presumed too limited at following instructions to pose a misuse risk.
Our investigation exposes a critical oversight in this belief.
By deploying carefully designed demonstrations, our research demonstrates that base LLMs could effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z)
- KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models [53.84677081899392]
KIEval is a Knowledge-grounded Interactive Evaluation framework for large language models.
It incorporates an LLM-powered "interactor" role for the first time to accomplish a dynamic contamination-resilient evaluation.
Extensive experiments on seven leading LLMs across five datasets validate KIEval's effectiveness and generalization.
arXiv Detail & Related papers (2024-02-23T01:30:39Z)
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs). This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)