Selective, Controlled and Domain-Agnostic Unlearning in Pretrained CLIP: A Training- and Data-Free Approach
- URL: http://arxiv.org/abs/2512.14113v1
- Date: Tue, 16 Dec 2025 05:54:13 GMT
- Title: Selective, Controlled and Domain-Agnostic Unlearning in Pretrained CLIP: A Training- and Data-Free Approach
- Authors: Ashish Mishra, Gyanaranjan Nayak, Tarun Kumar, Arpit Shah, Suparna Bhattacharya, Martin Foltin,
- Abstract summary: Real-world applications often demand the removal (or "unlearning") of specific object classes without requiring additional data or retraining. We propose a novel training- and data-free unlearning framework that enables three distinct forgetting paradigms. By leveraging a multimodal nullspace through synergistic integration of text prompts and synthesized visual prototypes, our method efficiently removes undesired class information while preserving the remaining knowledge.
- Score: 4.820351122363815
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Pretrained models like CLIP have demonstrated impressive zero-shot classification capabilities across diverse visual domains, spanning natural images, artistic renderings, and abstract representations. However, real-world applications often demand the removal (or "unlearning") of specific object classes without requiring additional data or retraining, or affecting the model's performance on unrelated tasks. In this paper, we propose a novel training- and data-free unlearning framework that enables three distinct forgetting paradigms: (1) global unlearning of selected objects across all domains, (2) domain-specific knowledge removal (e.g., eliminating sketch representations while preserving photo recognition), and (3) complete unlearning in selective domains. By leveraging a multimodal nullspace through synergistic integration of text prompts and synthesized visual prototypes derived from CLIP's joint embedding space, our method efficiently removes undesired class information while preserving the remaining knowledge. This approach overcomes the limitations of existing retraining-based methods and offers a flexible and computationally efficient solution for controlled model forgetting.
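No code accompanies this abstract, so the following is only a minimal NumPy sketch of the nullspace idea it describes: directions for a class to be forgotten (text prompt embeddings plus a synthesized visual prototype) span a subspace, and embeddings are projected onto its orthogonal complement before zero-shot scoring. Function names and the random stand-in embeddings are ours, not the authors'.

```python
import numpy as np

def nullspace_projector(forget_dirs: np.ndarray) -> np.ndarray:
    """Projector onto the orthogonal complement of span(forget_dirs).

    forget_dirs: (k, d) rows spanning the subspace to erase, e.g. CLIP
    text embeddings of prompts like "a photo of a cat" together with a
    synthesized visual prototype for the same class.
    """
    U, s, _ = np.linalg.svd(forget_dirs.T, full_matrices=False)
    U = U[:, s > 1e-8]              # orthonormal basis of the forget span
    return np.eye(forget_dirs.shape[1]) - U @ U.T

def zero_shot_logits(image_emb, class_embs, P):
    """Cosine-similarity logits after projecting out the forget subspace."""
    z = P @ image_emb
    z /= np.linalg.norm(z)
    C = class_embs @ P.T            # project the retained class prototypes too
    C /= np.linalg.norm(C, axis=1, keepdims=True)
    return C @ z

# Toy usage with random vectors standing in for real CLIP embeddings.
rng = np.random.default_rng(0)
d = 512
forget = rng.normal(size=(2, d))    # e.g. text prompt + visual prototype of the forgotten class
P = nullspace_projector(forget)
classes = rng.normal(size=(10, d))  # 10 retained class prototypes
img = rng.normal(size=d)
print(zero_shot_logits(img, classes, P).round(3))
```

Because the projector is closed-form, this kind of edit needs no gradient steps and no access to training data, which is what makes the approach training- and data-free.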
Related papers
- Erasing CLIP Memories: Non-Destructive, Data-Free Zero-Shot class Unlearning in CLIP Models [4.820351122363815]
We introduce a novel, closed-form approach for selective unlearning in multimodal models.
Our method leverages nullspace projection to erase the target class information embedded in the final projection layer.
Our experiments demonstrate that even a partial projection can strike a balance between complete unlearning and retaining useful information (a toy illustration of such a partial projection appears after this list).
arXiv Detail & Related papers (2025-12-16T06:37:41Z)
- AUVIC: Adversarial Unlearning of Visual Concepts for Multi-modal Large Language Models [63.05306474002547]
Regulatory frameworks mandating the 'right to be forgotten' drive the need for machine unlearning.
We introduce AUVIC, a novel visual concept unlearning framework for MLLMs.
We show that AUVIC achieves state-of-the-art target forgetting rates while incurring minimal performance degradation on non-target concepts.
arXiv Detail & Related papers (2025-11-14T13:35:32Z)
- Federated Graph Unlearning [23.00839112398916]
The demand for data privacy has led to the development of frameworks like Federated Graph Learning.
The proposed framework employs a bifurcated strategy tailored to the specific unlearning request.
The framework achieves substantial improvements in model prediction accuracy across both client and meta-unlearning scenarios.
arXiv Detail & Related papers (2025-08-04T14:57:03Z)
- Targeted Forgetting of Image Subgroups in CLIP Models [30.78624907082701]
Foundation models (FMs) such as CLIP have demonstrated impressive zero-shot performance across various tasks.
They often inherit harmful or unwanted knowledge from noisy internet-sourced datasets.
Existing model unlearning methods either rely on access to pre-trained datasets or focus on coarse-grained unlearning.
We propose a novel three-stage approach that progressively unlearns targeted knowledge while mitigating over-forgetting.
arXiv Detail & Related papers (2025-06-03T17:50:03Z)
- GRAIL: Gradient-Based Adaptive Unlearning for Privacy and Copyright in LLMs [26.13653211674955]
Large Language Models (LLMs) trained on extensive datasets often learn sensitive information.
Retraining entire models from scratch to remove undesired information is both costly and impractical.
We propose GRAIL (GRadient-based AdaptIve unLearning), a novel multi-domain unlearning framework.
arXiv Detail & Related papers (2025-04-17T06:16:32Z)
- Prompting Forgetting: Unlearning in GANs via Textual Guidance [4.3562145620596215]
We propose Text-to-Unlearn, a novel framework that selectively unlearns concepts from pre-trained GANs using only text prompts.
Our approach guides the unlearning process without requiring additional datasets or supervised fine-tuning.
To our knowledge, Text-to-Unlearn is the first cross-modal unlearning framework for GANs.
arXiv Detail & Related papers (2025-04-01T22:18:40Z)
- CLIPErase: Efficient Unlearning of Visual-Textual Associations in CLIP [57.49519639951552]
We introduce CLIPErase, a novel approach that disentangles and selectively forgets both visual and textual associations.
Experiments on the CIFAR-100 and Flickr30K datasets demonstrate that CLIPErase effectively forgets designated associations in zero-shot tasks for multimodal samples.
arXiv Detail & Related papers (2024-10-30T17:51:31Z)
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [52.40798352740857]
We introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components.
A Knowledge Unlearning Induction module targets specific knowledge for removal using an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal.
An Iterative Unlearning Refinement module dynamically adjusts the unlearning process through ongoing evaluation and updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z)
- Learning Customized Visual Models with Retrieval-Augmented Knowledge [104.05456849611895]
We propose REACT, a framework to acquire the relevant web knowledge to build customized visual models for target domains.
We retrieve the most relevant image-text pairs from the web-scale database as external knowledge, and propose to customize the model by only training new modularized blocks while freezing all the original weights.
The effectiveness of REACT is demonstrated via extensive experiments on classification, retrieval, detection and segmentation tasks, including zero, few, and full-shot settings.
arXiv Detail & Related papers (2023-01-17T18:59:06Z)
- A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse conditions.
arXiv Detail & Related papers (2021-06-22T10:21:39Z)
- CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition [52.66360172784038]
We propose a clustering-based model, which considers all training samples at once, instead of optimizing for each instance individually.
We call the proposed method CLASTER and observe that it consistently improves over the state-of-the-art in all standard datasets.
arXiv Detail & Related papers (2021-01-18T12:46:24Z)
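The first related paper above notes that even a partial projection can trade forgetting strength against retained utility. A minimal sketch of that interpolation, under the assumption (ours, not the paper's) that partial unlearning means scaling the low-rank erasure term by a coefficient alpha:

```python
import numpy as np

def partial_projector(forget_dirs: np.ndarray, alpha: float) -> np.ndarray:
    """P(alpha) = I - alpha * U @ U.T, where U spans the forget subspace.

    alpha = 1.0 erases the subspace entirely, alpha = 0.0 leaves the
    embeddings untouched, and intermediate values weaken the class
    signal without fully removing it.
    """
    U, s, _ = np.linalg.svd(forget_dirs.T, full_matrices=False)
    U = U[:, s > 1e-8]              # orthonormal basis of the forget span
    return np.eye(forget_dirs.shape[1]) - alpha * (U @ U.T)
```

Sweeping alpha and measuring forget-class versus retained-class accuracy would trace the kind of forgetting/utility trade-off curve that abstract alludes to.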