Understanding Machine Unlearning Through the Lens of Mode Connectivity
- URL: http://arxiv.org/abs/2504.06407v1
- Date: Tue, 08 Apr 2025 20:02:10 GMT
- Title: Understanding Machine Unlearning Through the Lens of Mode Connectivity
- Authors: Jiali Cheng, Hadi Amiri
- Abstract summary: We study mode connectivity in unlearning across a range of overlooked conditions. Our findings show distinct patterns of fluctuation of different evaluation metrics along the curve. This is the first study on mode connectivity in the context of machine unlearning.
- Score: 14.755831733659699
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine Unlearning aims to remove undesired information from trained models without requiring full retraining from scratch. Despite recent advancements, the underlying loss landscapes and optimization dynamics of unlearning methods have received little attention. In this paper, we investigate and analyze machine unlearning through the lens of mode connectivity - the phenomenon where independently trained models can be connected by smooth low-loss paths in the parameter space. We define and study mode connectivity in unlearning across a range of overlooked conditions, including connections between different unlearning methods, models trained with and without curriculum learning, and models optimized with first-order and second-order techniques. Our findings show distinct patterns of fluctuation of different evaluation metrics along the curve, as well as the mechanistic (dis)similarity between unlearning methods. To the best of our knowledge, this is the first study on mode connectivity in the context of machine unlearning.
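To make the central construct concrete, below is a minimal sketch of probing mode connectivity between two checkpoints by evaluating loss along a straight line in parameter space. Note that the paper studies learned low-loss curves (e.g., Bezier paths) rather than raw linear interpolation, and all names here (interpolate_state, loss_along_path) are illustrative assumptions, not the authors' code.

```python
import copy
import torch

def interpolate_state(sd_a, sd_b, t):
    """Pointwise (1 - t) * theta_a + t * theta_b over two state dicts."""
    return {k: (1 - t) * sd_a[k] + t * sd_b[k] for k in sd_a}

@torch.no_grad()
def loss_along_path(model, sd_a, sd_b, loader, criterion, steps=11):
    """Evaluate mean loss at evenly spaced points on the segment between
    two sets of weights; a flat profile suggests the modes are connected."""
    probe = copy.deepcopy(model)
    losses = []
    for i in range(steps):
        t = i / (steps - 1)
        probe.load_state_dict(interpolate_state(sd_a, sd_b, t))
        probe.eval()
        total, n = 0.0, 0
        for x, y in loader:  # assumes loader yields batches on the right device
            total += criterion(probe(x), y).item() * y.size(0)
            n += y.size(0)
        losses.append((t, total / n))
    return losses
```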
Related papers
- Disentangled World Models: Learning to Transfer Semantic Knowledge from Distracting Videos for Reinforcement Learning [93.58897637077001]
This paper aims to learn and understand underlying semantic variations from distracting videos via offline-to-online latent distillation and flexible disentanglement constraints.
We pretrain the action-free video prediction model offline with disentanglement regularization to extract semantic knowledge from distracting videos.
For finetuning in the online environment, we exploit the knowledge from the pretrained model and introduce a disentanglement constraint to the world model.
arXiv Detail & Related papers (2025-03-11T13:50:22Z)
- Adjoint-based online learning of two-layer quasi-geostrophic baroclinic turbulence [1.0985060632689176]
An increasingly popular approach is to leverage machine learning for parameterizations, regressing a map between the resolved state and the missing feedbacks in a fluid system as a supervised learning task.
Here, we explore the 'online' approach, which involves the fluid dynamical model during the training stage, for the learning of baroclinic turbulence and its parameterization.
Two online approaches are considered: a full adjoint-based online approach, related to traditional adjoint optimization approaches that require a 'differentiable' dynamical model, and an approximately online approach that approximates the adjoint calculation and does not require a differentiable dynamical model.
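As a rough illustration of the 'online' idea (not this paper's code), the sketch below rolls a differentiable solver forward with a learned closure term and backpropagates through the rollout, so reverse-mode autodiff plays the role of the adjoint. The names step_fn, net, and reference are assumed placeholders.

```python
import torch

def online_loss(net, step_fn, q0, reference):
    """Roll the dynamical model forward with a learned parameterization and
    penalize drift from an (assumed) high-resolution reference trajectory."""
    q = q0
    loss = q0.new_zeros(())
    for t in range(len(reference)):
        q = step_fn(q) + net(q)  # resolved dynamics plus ML closure term
        loss = loss + torch.mean((q - reference[t]) ** 2)
    return loss / len(reference)

# Calling loss.backward() differentiates through every solver step, which is
# why the full adjoint-based approach needs step_fn itself to be differentiable.
```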
arXiv Detail & Related papers (2024-11-21T13:15:01Z)
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities [89.40778301238642]
Model merging is an efficient technique in the machine learning community for empowering models by combining them.
However, there is a significant gap in the literature regarding a systematic and thorough review of these techniques.
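For readers new to the area, the simplest merging operator can be sketched as uniform parameter averaging of same-architecture checkpoints. This is an illustrative stand-in only; the survey covers far more sophisticated methods.

```python
import torch

def average_merge(state_dicts):
    """Average corresponding tensors across checkpoints that share an
    architecture (the 'model soup' style of merge)."""
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged

# usage sketch: model.load_state_dict(average_merge([sd_a, sd_b, sd_c]))
```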
arXiv Detail & Related papers (2024-08-14T16:58:48Z)
- Machine Unlearning in Contrastive Learning [3.6218162133579694]
We introduce a novel gradient-constraint-based approach that trains the model to effectively achieve machine unlearning.
Our approach demonstrates proficient performance not only on contrastive learning models but also on supervised learning models.
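The abstract does not spell out the constraint, so the following is only a plausible sketch of gradient-constrained unlearning: ascend the loss on the forget set while projecting out any update component that conflicts with the retain-set gradient (a PCGrad-style projection, assumed here for illustration).

```python
import torch

def unlearning_step(model, opt, forget_batch, retain_batch, criterion):
    """One update: raise loss on forget data without hurting retain data."""
    xf, yf = forget_batch
    xr, yr = retain_batch

    # Gradient that preserves retain-set performance.
    opt.zero_grad()
    criterion(model(xr), yr).backward()
    g_retain = [p.grad.detach().clone() if p.grad is not None else None
                for p in model.parameters()]

    # Gradient-ascent direction on the forget set (negated loss).
    opt.zero_grad()
    (-criterion(model(xf), yf)).backward()

    # Constraint: drop the component of the update that would
    # increase the retain-set loss.
    for p, gr in zip(model.parameters(), g_retain):
        if p.grad is None or gr is None:
            continue
        dot = torch.sum(p.grad * gr)
        if dot < 0:
            p.grad -= dot / (gr.norm() ** 2 + 1e-12) * gr
    opt.step()
```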
arXiv Detail & Related papers (2024-05-12T16:09:01Z)
- An Information Theoretic Approach to Machine Unlearning [43.423418819707784]
To comply with AI and data regulations, the need to forget private or copyrighted information from trained machine learning models is increasingly important.
In this work, we address the zero-shot unlearning scenario, whereby an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten.
We derive a simple but principled zero-shot unlearning method based on the geometry of the model.
arXiv Detail & Related papers (2024-02-02T13:33:30Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Class-wise Federated Unlearning: Harnessing Active Forgetting with Teacher-Student Memory Generation [11.638683787598817]
We propose a neuro-inspired federated unlearning framework based on active forgetting.
Our framework distinguishes itself from existing methods by utilizing new memories to overwrite old ones.
Our method achieves satisfactory unlearning completeness against backdoor attacks.
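A heavily simplified, hypothetical reading of "overwriting old memories with new ones": a student imitates the teacher on retained data while being pushed toward uninformative targets on the forgotten class. The federated aggregation and memory-generation machinery of the actual framework is omitted here.

```python
import torch
import torch.nn.functional as F

def overwrite_loss(student, teacher, x, y, forget_class):
    """Distill retained knowledge; replace forgotten-class outputs."""
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    keep = y != forget_class
    loss = s_logits.new_zeros(())
    if keep.any():  # match the teacher on data to retain
        loss = loss + F.kl_div(
            F.log_softmax(s_logits[keep], dim=-1),
            F.softmax(t_logits[keep], dim=-1),
            reduction="batchmean",
        )
    if (~keep).any():  # push forgotten data toward a uniform "new memory"
        uniform = torch.full_like(s_logits[~keep], 1.0 / s_logits.size(-1))
        loss = loss + F.kl_div(
            F.log_softmax(s_logits[~keep], dim=-1),
            uniform,
            reduction="batchmean",
        )
    return loss
```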
arXiv Detail & Related papers (2023-07-07T03:07:26Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models, but the training data behind individual models is often unavailable.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
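The merging rule itself is not given in this summary; as a stand-in, here is a sketch of dataless fusion via "task vectors": combine each fine-tuned model's offset from the shared pre-trained base with scalar weights. The paper's actual parameter-space merge is more principled than this.

```python
import torch

def fuse(base_sd, finetuned_sds, weights):
    """Merge fine-tuned models by weighted-summing their deltas from the
    shared pre-trained base, without touching any training data."""
    merged = {}
    for key, base in base_sd.items():
        delta = sum(
            w * (sd[key].float() - base.float())
            for sd, w in zip(finetuned_sds, weights)
        )
        merged[key] = base.float() + delta
    return merged

# usage sketch: fuse(base_sd, [sd_task_a, sd_task_b], weights=[0.5, 0.5])
```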
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning [13.149070833843133]
Machine unlearning, i.e., having a model forget some of its training data, has become increasingly important as privacy legislation promotes variants of the right to be forgotten.
We first show that the definition that underlies approximate unlearning, which seeks to prove the approximately unlearned model is close to an exactly retrained model, is incorrect because one can obtain the same model using different datasets.
We then turn to exact unlearning approaches and ask how to verify their claims of unlearning.
arXiv Detail & Related papers (2021-10-22T16:16:56Z)
- Learning by Distillation: A Self-Supervised Learning Framework for Optical Flow Estimation [71.76008290101214]
DistillFlow is a knowledge distillation approach to learning optical flow.
It achieves state-of-the-art unsupervised learning performance on both KITTI and Sintel datasets.
Our models ranked 1st among all monocular methods on the KITTI 2015 benchmark, and outperform all published methods on the Sintel Final benchmark.
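As a hedged illustration of the distillation idea (not DistillFlow's exact formulation), a student's predicted flow can be supervised by a teacher's flow on pixels where the teacher is trusted, e.g., non-occluded regions:

```python
import torch

def distill_loss(student_flow, teacher_flow, valid_mask):
    """L1 distance to the teacher's flow on trusted pixels.

    student_flow, teacher_flow: (B, 2, H, W); valid_mask: (B, 1, H, W) in {0, 1}.
    """
    diff = torch.abs(student_flow - teacher_flow.detach()) * valid_mask
    return diff.sum() / (valid_mask.sum() * 2 + 1e-8)
```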
arXiv Detail & Related papers (2021-06-08T09:13:34Z)
- Improving Robot Dual-System Motor Learning with Intrinsically Motivated Meta-Control and Latent-Space Experience Imagination [17.356402088852423]
We present a novel dual-system motor learning approach where a meta-controller arbitrates online between model-based and model-free decisions.
We evaluate our approach against baseline and state-of-the-art methods on learning vision-based robotic grasping in simulation and the real world.
arXiv Detail & Related papers (2020-04-19T12:14:46Z)