When unlearning is free: leveraging low influence points to reduce computational costs
- URL: http://arxiv.org/abs/2512.05254v1
- Date: Thu, 04 Dec 2025 21:10:31 GMT
- Title: When unlearning is free: leveraging low influence points to reduce computational costs
- Authors: Anat Kleiman, Robert Fisher, Ben Deaner, Udi Wieder
- Abstract summary: We ask whether points that have a negligible impact on the model's learning need to be removed. We propose an efficient unlearning framework that reduces the size of datasets before unlearning.
- Score: 1.2844524343936794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As concerns around data privacy in machine learning grow, the ability to unlearn, or remove, specific data points from trained models becomes increasingly important. While state-of-the-art unlearning methods have emerged in response, they typically treat all points in the forget set equally. In this work, we challenge this approach by asking whether points that have a negligible impact on the model's learning need to be removed at all. Through a comparative analysis of influence functions across language and vision tasks, we identify subsets of training data with negligible impact on model outputs. Leveraging this insight, we propose an efficient unlearning framework that reduces the size of datasets before unlearning, leading to significant computational savings (up to approximately 50 percent) on real-world empirical examples.
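The framework can be read as a filtering step in front of any unlearning routine: score each forget-set point with an estimated influence, drop the points whose score falls below a threshold, and run the (expensive) unlearner only on the remainder. The sketch below is a minimal illustration of that idea, not the authors' implementation; `influence_score`, `tau`, and `unlearn` are hypothetical stand-ins.

```python
import numpy as np

def prune_forget_set(forget_set, influence_score, tau):
    """Split a forget set by estimated influence: points scoring below tau
    are treated as negligible and skipped, shrinking the unlearner's input."""
    scores = np.array([influence_score(z) for z in forget_set])
    to_unlearn = [z for z, s in zip(forget_set, scores) if s >= tau]
    to_skip = [z for z, s in zip(forget_set, scores) if s < tau]
    return to_unlearn, to_skip

# Hypothetical usage: only the high-influence subset reaches the unlearning
# routine, which is where the computational savings would come from.
# to_unlearn, to_skip = prune_forget_set(forget_set, influence_score, tau=1e-4)
# model = unlearn(model, to_unlearn)
```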
Related papers
- Z0-Inf: Zeroth Order Approximation for Data Influence [47.682602051124235]
We introduce a highly efficient zeroth-order approximation for estimating the influence of training data. Our approach achieves superior accuracy in estimating self-influence and comparable or improved accuracy in estimating train-test influence for fine-tuned large language models.
arXiv Detail & Related papers (2025-10-13T18:30:37Z) - Causal Fuzzing for Verifying Machine Unlearning [9.923981046985771]
CAFÉ is a new framework that unifies datapoint- and feature-level unlearning for verification of black-box ML models. Our evaluation shows that CAFÉ successfully detects residual influence missed by baselines while maintaining computational efficiency.
arXiv Detail & Related papers (2025-09-20T04:19:37Z) - Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimate the impact of individual training samples on model parameters without retraining. This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning). We introduce the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
arXiv Detail & Related papers (2025-07-31T05:34:27Z) - When to Forget? Complexity Trade-offs in Machine Unlearning [23.507879460531264]
Machine Unlearning (MU) aims at removing the influence of specific data points from a trained model. We analyze the efficiency of unlearning methods and establish the first upper and lower bounds on minimax times for this problem. We provide a phase diagram for the unlearning complexity ratio, a novel metric that compares the computational cost of the best unlearning method to full model retraining.
arXiv Detail & Related papers (2025-02-24T16:56:27Z) - Unlearning in- vs. out-of-distribution data in LLMs under gradient-based method [31.268301764230525]
This work formalizes a metric to evaluate unlearning quality in generative models.
We use it to assess the trade-offs between unlearning quality and performance.
We further evaluate how an example's memorization and difficulty affect unlearning under a classical gradient ascent-based approach (a minimal sketch of this baseline appears after the related-papers list below).
arXiv Detail & Related papers (2024-11-07T03:02:09Z) - RESTOR: Knowledge Recovery in Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can contain private or sensitive information. Several machine unlearning algorithms have been proposed to eliminate the effect of such datapoints. We propose the RESTOR framework for machine unlearning evaluation.
arXiv Detail & Related papers (2024-10-31T20:54:35Z) - Scaling Laws for the Value of Individual Data Points in Machine Learning [55.596413470429475]
We introduce a new perspective by investigating scaling behavior for the value of individual data points.
We provide learning theory to support our scaling law, and we observe empirically that it holds across diverse model classes.
Our work represents a first step towards understanding and utilizing scaling properties for the value of individual data points.
arXiv Detail & Related papers (2024-05-30T20:10:24Z) - An Information Theoretic Approach to Machine Unlearning [43.423418819707784]
To comply with AI and data regulations, the need to forget private or copyrighted information from trained machine learning models is increasingly important. In this work, we address the zero-shot unlearning scenario, whereby an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten. We derive a simple but principled zero-shot unlearning method based on the geometry of the model.
arXiv Detail & Related papers (2024-02-02T13:33:30Z) - A Survey of Learning on Small Data: Generalization, Optimization, and
Challenge [101.27154181792567]
Learning from small data while approximating the generalization ability of big data is one of the ultimate goals of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z) - Machine Unlearning of Features and Labels [72.81914952849334]
We propose first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters (a reconstruction of this style of update is given after the related-papers list below).
arXiv Detail & Related papers (2021-08-26T04:42:24Z) - Mind Your Outliers! Investigating the Negative Impact of Outliers on
Active Learning for Visual Question Answering [71.15403434929915]
We show that across 5 models and 4 datasets on the task of visual question answering, a wide variety of active learning approaches fail to outperform random selection.
We identify the problem as collective outliers -- groups of examples that active learning methods prefer to acquire but models fail to learn.
We show that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases.
arXiv Detail & Related papers (2021-07-06T00:52:11Z)
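The "classical gradient ascent-based approach" referenced in the "Unlearning in- vs. out-of-distribution data in LLMs" entry above admits a short sketch: take ascent steps on the loss of the forget examples so the model un-fits them. The PyTorch snippet below is an assumed baseline, not that paper's code; `model`, `loss_fn`, the learning rate, and the step count are all illustrative choices.

```python
import torch

def gradient_ascent_unlearn(model, loss_fn, forget_loader, lr=1e-5, steps=1):
    """Ascend the loss on the forget set by minimizing its negation,
    pushing the model away from fitting the forgotten examples."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for x, y in forget_loader:
            opt.zero_grad()
            loss = -loss_fn(model(x), y)  # negated loss => gradient ascent
            loss.backward()
            opt.step()
    return model
```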
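The closed-form influence-function update mentioned in the "Machine Unlearning of Features and Labels" entry follows the standard first-order recipe for approximate leave-one-out removal; the reconstruction below uses my own notation under that assumption, not that paper's exact formulation.

```latex
% Approximate removal of a training point z from the minimizer \hat{\theta}
% of the empirical risk over n points; H is the loss Hessian at \hat{\theta}.
\hat{\theta}_{-z} \approx \hat{\theta}
    + \tfrac{1}{n} H_{\hat{\theta}}^{-1} \nabla_{\theta} \ell(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \tfrac{1}{n} \sum_{i=1}^{n}
    \nabla_{\theta}^{2} \ell(z_i, \hat{\theta})
```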