LUNE: Efficient LLM Unlearning via LoRA Fine-Tuning with Negative Examples
- URL: http://arxiv.org/abs/2512.07375v1
- Date: Mon, 08 Dec 2025 10:10:29 GMT
- Title: LUNE: Efficient LLM Unlearning via LoRA Fine-Tuning with Negative Examples
- Authors: Yezi Liu, Hanning Chen, Wenjun Huang, Yang Ni, Mohsen Imani
- Abstract summary: Large language models (LLMs) possess vast knowledge acquired from extensive training corpora. Traditional model unlearning approaches require computationally expensive fine-tuning or direct weight editing. LoRA-based Unlearning with Negative Examples (LUNE) performs negative-only unlearning by updating only low-rank adapters.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) possess vast knowledge acquired from extensive training corpora, but they often cannot remove specific pieces of information on demand, which complicates privacy protection, bias mitigation, and knowledge correction. Traditional model unlearning approaches require computationally expensive fine-tuning or direct weight editing, making them impractical for real-world deployment. In this work, we introduce LoRA-based Unlearning with Negative Examples (LUNE), a lightweight framework that performs negative-only unlearning by updating only low-rank adapters while freezing the backbone, thereby localizing edits and avoiding disruptive global changes. Leveraging Low-Rank Adaptation (LoRA), LUNE targets intermediate representations to suppress (or replace) requested knowledge with an order-of-magnitude lower compute and memory than full fine-tuning or direct weight editing. Extensive experiments on multiple factual unlearning tasks show that LUNE: (I) achieves effectiveness comparable to full fine-tuning and memory-editing methods, and (II) reduces computational cost by about an order of magnitude.
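The core mechanism the abstract describes, a frozen backbone with trainable low-rank adapters and a negative-only objective on forget examples, can be sketched in plain PyTorch. This is a minimal illustration, not the authors' implementation: the toy network, rank `r`, scaling `alpha`, learning rate, and step count below are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze the base weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

torch.manual_seed(0)
# Toy "backbone" stands in for the frozen LLM; only the LoRA adapter is trainable.
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False
model = nn.Sequential(backbone, LoRALinear(nn.Linear(32, 8)))

# "Negative examples": inputs whose current predictions we want to suppress.
forget_x = torch.randn(4, 16)
forget_y = torch.randint(0, 8, (4,))
loss_fn = nn.CrossEntropyLoss()

trainable = [p for p in model.parameters() if p.requires_grad]  # only A and B
opt = torch.optim.Adam(trainable, lr=1e-2)

before = loss_fn(model(forget_x), forget_y).item()
for _ in range(50):
    opt.zero_grad()
    loss = -loss_fn(model(forget_x), forget_y)   # negative-only: ascend the loss on the forget set
    loss.backward()
    opt.step()
after = loss_fn(model(forget_x), forget_y).item()
# after > before: the forget examples are unlearned while every backbone weight stays untouched
```

Because `B` is zero-initialized, the adapter is a no-op before unlearning begins, and all edits stay localized to the low-rank factors, mirroring the paper's claim of avoiding disruptive global weight changes.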
Related papers
- NanoNet: Parameter-Efficient Learning with Label-Scarce Supervision for Lightweight Text Mining Model [51.055122269052696]
NanoNet is a novel framework for lightweight text mining that implements parameter-efficient learning with limited supervision. The entire process leverages parameter-efficient learning, reducing training costs and minimizing supervision requirements, ultimately yielding a lightweight model for downstream inference.
arXiv Detail & Related papers (2026-02-05T08:31:57Z) - Decomposing and Composing: Towards Efficient Vision-Language Continual Learning via Rank-1 Expert Pool in a Single LoRA [50.97792275353563]
We introduce a novel framework that restructures a single Low-Rank Adaptation (LoRA) module as a decomposable Rank-1 Expert Pool. Our method learns to dynamically compose a sparse, task-specific update by selecting from this expert pool, guided by the semantics of the [Guided] token.
arXiv Detail & Related papers (2026-01-30T10:54:51Z) - RapidUn: Influence-Driven Parameter Reweighting for Efficient Large Language Model Unlearning [5.265976319881303]
We introduce RapidUn, an influence-driven and parameter-efficient unlearning framework. It first estimates per-sample influence through a fast estimation module, then maps these scores into adaptive update weights. On Mistral-7B and Llama-3-8B across Dolly-15k and Alpaca-57k, RapidUn achieves up to 100 times higher efficiency than full retraining.
arXiv Detail & Related papers (2025-12-04T05:00:52Z) - UniErase: Towards Balanced and Precise Unlearning in Language Models [69.04923022755547]
Large language models (LLMs) require iterative updates to address the outdated information problem. UniErase is a novel unlearning framework that demonstrates precision and balanced performance between knowledge unlearning and ability retention.
arXiv Detail & Related papers (2025-05-21T15:53:28Z) - How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? [55.33467849079774]
Low-rank adaptation (LoRA) is a popular and efficient training technique for updating or domain-specific adaptation of Large Language Models. We investigate how new facts can be incorporated into the LLM using LoRA without compromising the previously learned knowledge.
arXiv Detail & Related papers (2025-02-20T12:31:03Z) - Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs [25.91643745340183]
Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora. This poses risks of privacy and copyright violations, highlighting the need for efficient machine unlearning methods. We propose Low-rank Knowledge Unlearning (LoKU), a novel framework that enables robust and efficient unlearning for LLMs.
arXiv Detail & Related papers (2024-08-13T04:18:32Z) - Offset Unlearning for Large Language Models [49.851093293780615]
delta-Unlearning is an offset unlearning framework for black-box LLMs. We show that delta-Unlearning can effectively unlearn target data while maintaining similar or even stronger performance on general out-of-forget-scope tasks.
arXiv Detail & Related papers (2024-04-17T03:39:51Z) - PILLOW: Enhancing Efficient Instruction Fine-tuning via Prompt Matching [20.607323649079845]
Low-Rank Adaptation (LoRA) has become a promising alternative to instruction fine-tuning.
PILLOW aims to improve LoRA's performance by a discrimination-based LLM ability.
PILLOW exhibits commensurate performance on various evaluation metrics compared with typical instruction fine-tuning methods.
arXiv Detail & Related papers (2023-12-09T17:38:39Z) - Unlearn What You Want to Forget: Efficient Unlearning for LLMs [92.51670143929056]
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data.
This process might suffer from privacy issues and violations of data protection regulations.
We propose an efficient unlearning framework that could efficiently update LLMs without having to retrain the whole model after data removals.
arXiv Detail & Related papers (2023-10-31T03:35:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.