Rotation Control Unlearning: Quantifying and Controlling Continuous Unlearning for LLM with The Cognitive Rotation Space
- URL: http://arxiv.org/abs/2509.25743v1
- Date: Tue, 30 Sep 2025 03:59:29 GMT
- Title: Rotation Control Unlearning: Quantifying and Controlling Continuous Unlearning for LLM with The Cognitive Rotation Space
- Authors: Xiang Zhang, Kun Wei, Xu Yang, Chenghao Xu, Su Yan, Cheng Deng
- Abstract summary: We propose a novel method, called Rotation Control Unlearning (RCU), to quantify and control the unlearning degree in the continuous unlearning process. A skew-symmetric loss is designed to construct the cognitive rotation space, in which changes of the rotational angle simulate the continuous unlearning process. Experiments on multiple datasets confirm that our method achieves SOTA performance without a retained dataset.
- Score: 66.51378598755933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Large Language Models (LLMs) become increasingly prevalent, their security vulnerabilities have drawn growing attention. Machine unlearning has been introduced to mitigate these risks by removing the influence of undesirable data. However, existing methods not only rely on a retained dataset to preserve model utility, but also suffer from cumulative catastrophic utility loss under continuous unlearning requests. To solve this dilemma, we propose a novel method, called Rotation Control Unlearning (RCU), which leverages a rotational salience weight to quantify and control the unlearning degree throughout the continuous unlearning process. A skew-symmetric loss is designed to construct the cognitive rotation space, in which changes of the rotational angle simulate the continuous unlearning process. Furthermore, we design an orthogonal rotation axes regularization to enforce mutually perpendicular rotation directions for successive unlearning requests, effectively minimizing interference and addressing cumulative catastrophic utility loss. Experiments on multiple datasets confirm that our method achieves SOTA performance without any retained dataset.
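To make the rotation idea concrete, here is a minimal sketch of a skew-symmetric rotation parameterization in PyTorch. This is not the paper's implementation: the dimension, the angle `theta`, and the form of the regularizer are illustrative assumptions. The key property is that the matrix exponential of a skew-symmetric generator is orthogonal, so scaling the angle acts like a dial on how far representations are rotated, which is the property RCU exploits to quantify the unlearning degree.

```python
# Hypothetical sketch, not the paper's code: parameterize a rotation of the
# representation space by a skew-symmetric generator. exp(theta * A) is
# orthogonal whenever A^T = -A, so the angle theta acts as a continuous
# "unlearning degree" dial, as in the cognitive rotation space idea.
import torch

def skew(p: torch.Tensor) -> torch.Tensor:
    # A = P - P^T is skew-symmetric for any square P.
    return p - p.transpose(-1, -2)

d = 8
A = skew(torch.randn(d, d))
theta = 0.3                                  # assumed rotational angle
R = torch.linalg.matrix_exp(theta * A)       # a rotation: R @ R.T = I, det R = 1

h = torch.randn(d)                           # a hidden representation
h_rot = R @ h                                # rotated ("partially unlearned")

# Orthogonal rotation axes regularization (assumed form): penalize overlap
# between the generators of different unlearning requests so their rotation
# directions stay mutually perpendicular.
def axes_reg(a1: torch.Tensor, a2: torch.Tensor) -> torch.Tensor:
    return (a1.flatten() @ a2.flatten()).abs()

print(torch.allclose(R @ R.T, torch.eye(d), atol=1e-5))  # True
```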
Related papers
- Registration is a Powerful Rotation-Invariance Learner for 3D Anomaly Detection [64.0168648353038]
3D anomaly detection in point-cloud data is critical for industrial quality control, aiming to identify structural defects with high reliability.
Current memory bank-based methods often suffer from inconsistent feature transformations and limited discriminative capacity.
We propose a registration-induced, rotation-invariant feature extraction framework that integrates the objectives of point-cloud registration and memory-based anomaly detection.
arXiv Detail & Related papers (2025-10-19T14:56:38Z)
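As a rough illustration of the registration idea above, the sketch below aligns a test point cloud to a canonical template with the classical Kabsch algorithm before scoring it against a memory bank. The mean-pooled descriptor and the memory bank are placeholders, not the paper's pipeline.

```python
# Illustrative only: rotation-invariant anomaly scoring via registration.
import numpy as np

def kabsch_align(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    # Optimal rotation (Kabsch) mapping the centered src cloud onto dst.
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return src_c @ R.T + dst.mean(0)

def anomaly_score(test_cloud, template, memory_bank):
    aligned = kabsch_align(test_cloud, template)  # pose-normalized input
    feat = aligned.mean(0)                        # placeholder descriptor
    return float(np.min(np.linalg.norm(memory_bank - feat, axis=1)))
```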
- Data-Driven Exploration for a Class of Continuous-Time Indefinite Linear-Quadratic Reinforcement Learning Problems [6.859965454961918]
We study reinforcement learning for continuous-time linear-quadratic (LQ) control problems.
We propose a model-free, data-driven exploration mechanism that adaptively adjusts entropy regularization based on the critic.
Our method achieves a sublinear regret bound that matches the best-known model-free results for this class of LQ problems.
arXiv Detail & Related papers (2025-07-01T01:09:06Z)
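A toy rendering of critic-driven exploration scheduling in the spirit of the paper above; the actual mechanism is derived for continuous-time LQ control, whereas this snippet only shows the qualitative idea of scaling an entropy coefficient with the critic's recent TD error (the function name and schedule are assumptions).

```python
# Toy critic-driven entropy schedule (assumed form).
import numpy as np

def entropy_coef(td_errors, base=1.0, floor=1e-3):
    # Large recent TD error -> the critic is still wrong -> explore more.
    return max(floor, base * float(np.mean(np.abs(td_errors))))

print(entropy_coef([0.5, 0.3, 0.2]))   # early training: wide exploration
print(entropy_coef([0.01, 0.02]))      # later: nearly deterministic policy
```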
- Go Beyond Your Means: Unlearning with Per-Sample Gradient Orthogonalization [43.436621884831276]
Machine unlearning aims to remove the influence of problematic training data after a model has been trained.
Many existing machine unlearning methods address this challenge by carefully balancing gradient ascent on the unlearn data with gradient descent on a retain set representing the training data.
Here, we propose OrthoGrad, a novel approach that mitigates interference between the unlearn set and the retain set rather than relying on competing ascent and descent processes.
arXiv Detail & Related papers (2025-03-04T06:14:33Z)
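A minimal sketch of per-sample gradient orthogonalization consistent with the OrthoGrad description above (the exact projection and batching details are assumptions): the unlearning gradient is deflated along each retain-sample gradient so ascent on the forget set does not move against the retained data.

```python
# Sketch of per-sample gradient orthogonalization (assumed form).
import torch

def orthogonalize(g_unlearn: torch.Tensor, retain_grads) -> torch.Tensor:
    g = g_unlearn.clone()
    for g_r in retain_grads:                 # one gradient per retain sample
        g_r = g_r / (g_r.norm() + 1e-12)
        g = g - (g @ g_r) * g_r              # deflate the overlapping component
    return g

g_u = torch.tensor([1.0, 1.0])
print(orthogonalize(g_u, [torch.tensor([1.0, 0.0])]))  # tensor([0., 1.])
```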
- Machine Unlearning via Null Space Calibration [23.668928015009087]
We introduce machine Unlearning via Null Space Calibration (UNSC), which can unlearn target samples without over-unlearning.
Our approach hinges on confining the unlearning process to a specified null space tailored to the remaining samples.
arXiv Detail & Related papers (2024-04-21T09:09:21Z)
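The null-space idea above can be illustrated in a few lines (assumed mechanics, not UNSC's actual algorithm): build a projector onto the null space of the retained samples' activations and push every unlearning update through it, so outputs on retained data are unchanged to first order.

```python
# Assumed mechanics of a null-space-constrained update, not UNSC's code.
import numpy as np

def null_space_projector(A: np.ndarray, tol: float = 1e-10) -> np.ndarray:
    # A: (n_retain, d) activations. Returns P projecting onto null(A).
    _, s, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol))
    V_null = Vt[rank:].T                     # orthonormal basis of null(A)
    return V_null @ V_null.T

A = np.array([[1.0, 0.0, 0.0]])              # retained activations span e1
P = null_space_projector(A)
print(P @ np.array([0.7, 0.2, 0.1]))         # e1 part removed: [0. 0.2 0.1]
```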
- UNDIAL: Self-Distillation with Adjusted Logits for Robust Unlearning in Large Language Models [12.45822383965784]
We introduce UnDIAL (Unlearning via Self-Distillation on Adjusted Logits), a novel and robust unlearning method.
Our approach leverages self-distillation to adjust logits and selectively reduce the influence of targeted tokens.
arXiv Detail & Related papers (2024-02-15T16:21:14Z)
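A compact sketch of the adjusted-logits self-distillation idea described above; the penalty value and loss form are assumptions, but the mechanism matches the description: lower the logits of targeted tokens in the teacher distribution and distill the model toward that softened target.

```python
# Sketch of self-distillation on adjusted logits (penalty value assumed).
import torch
import torch.nn.functional as F

def adjusted_target(logits, forget_token_ids, penalty=5.0):
    adj = logits.clone()
    adj[..., forget_token_ids] -= penalty    # push down the targeted tokens
    return F.softmax(adj, dim=-1)

def undial_loss(student_logits, teacher_logits, forget_token_ids):
    target = adjusted_target(teacher_logits.detach(), forget_token_ids)
    return F.kl_div(F.log_softmax(student_logits, dim=-1),
                    target, reduction="batchmean")
```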
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
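In the same spirit as the PGU description above, a gradient-projection step might look like the following sketch; the protected basis here is random for illustration, whereas in practice it would be estimated from the remaining data.

```python
# Gradient projection sketch; the protected basis is random for illustration.
import torch

def project_out(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    # basis: (d, k) orthonormal columns spanning directions to protect.
    return grad - basis @ (basis.T @ grad)

d, k = 6, 2
basis, _ = torch.linalg.qr(torch.randn(d, k))   # stand-in protected subspace
g_safe = project_out(torch.randn(d), basis)
print(torch.allclose(basis.T @ g_safe, torch.zeros(k), atol=1e-6))  # True
```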
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe to convert static behavior datasets into policies that can outperform the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
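A toy version of the action-quantization idea above (the paper proposes an adaptive, learned scheme; plain k-means here is only a stand-in): cluster the dataset's actions into a codebook so a discrete policy can select among the centers.

```python
# Plain k-means as a stand-in for the paper's adaptive quantization scheme.
import numpy as np
from sklearn.cluster import KMeans

actions = np.random.uniform(-1, 1, size=(10_000, 2))    # dataset actions
codebook = KMeans(n_clusters=16, n_init=10).fit(actions).cluster_centers_

def decode(index: int) -> np.ndarray:
    # A discrete policy picks an index; the codebook maps it back to R^2.
    return codebook[index]

print(decode(3))
```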
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
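A bare-bones version of the autoencoder recipe above (architecture and data are placeholders): fit the autoencoder only on normal samples so it learns their low-dimensional manifold, then score test points by reconstruction error.

```python
# Placeholder autoencoder and data; the scoring recipe is the point.
import torch
import torch.nn as nn

ae = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
normal = torch.randn(256, 64) * 0.1          # stand-in "normal" samples

for _ in range(100):                          # fit on normal data only
    opt.zero_grad()
    loss = ((ae(normal) - normal) ** 2).mean()
    loss.backward()
    opt.step()

def anomaly_score(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return ((ae(x) - x) ** 2).mean(dim=-1)   # high error = anomalous
```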
- Unsupervised MR Motion Artifact Deep Learning using Outlier-Rejecting Bootstrap Aggregation [37.41561581618164]
We propose a novel unsupervised deep learning scheme through outlier-rejecting bootstrap subsampling and aggregation.
Our method does not require any paired data because the training step only requires artifact-free images.
We verify that our method can be successfully applied to artifact correction for both simulated motion and real motion from TSM.
arXiv Detail & Related papers (2020-11-12T12:10:58Z)
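A cartoon of outlier-rejecting bootstrap aggregation (all details assumed; the paper's subsampling and aggregation are more elaborate): reconstruct from many random k-space subsets and take a robust aggregate so sparse motion-corrupted lines get voted out.

```python
# All details assumed; shows the subsample-then-robustly-aggregate pattern.
import numpy as np

def bootstrap_aggregate(kspace, reconstruct, n_draws=32, keep=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n_lines = kspace.shape[0]
    recons = []
    for _ in range(n_draws):
        idx = rng.choice(n_lines, size=int(keep * n_lines), replace=False)
        sub = np.zeros_like(kspace)
        sub[idx] = kspace[idx]               # random phase-encoding subset
        recons.append(reconstruct(sub))
    return np.median(recons, axis=0)          # outliers are voted out

# Naive inverse-FFT "reconstructor", purely for demonstration:
img = bootstrap_aggregate(np.random.randn(128, 128) + 0j,
                          lambda k: np.abs(np.fft.ifft2(k)))
```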
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
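To show the kind of input a voxel-based odometry network consumes, here is an illustrative voxelization step (grid bounds and resolution are made up); the occupancy grid preserves the 3D geometric patterns that SelfVoxeLO's 3D convolutions exploit.

```python
# Illustrative voxelization; grid bounds and resolution are made up.
import numpy as np

def voxelize(points: np.ndarray, lo=-50.0, hi=50.0, res=0.5) -> np.ndarray:
    n = int((hi - lo) / res)
    grid = np.zeros((n, n, n), dtype=np.float32)
    idx = ((points - lo) / res).astype(int)
    ok = np.all((idx >= 0) & (idx < n), axis=1)   # drop out-of-range points
    grid[tuple(idx[ok].T)] = 1.0                  # binary occupancy
    return grid                                    # input to a 3D CNN

print(voxelize(np.random.uniform(-50, 50, (1000, 3))).sum())
```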
- A Kernel-Based Approach to Non-Stationary Reinforcement Learning in Metric Spaces [53.47210316424326]
KeRNS is an algorithm for episodic reinforcement learning in non-stationary Markov Decision Processes.
We prove a regret bound that scales with the covering dimension of the state-action space and the total variation of the MDP with time.
arXiv Detail & Related papers (2020-07-09T21:37:13Z)
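A toy kernel estimator in the KeRNS spirit (bandwidth, decay, and the whole interface are assumptions): returns are averaged with a state-action similarity kernel and a temporal decay that forgets stale transitions, which is how such an algorithm can track a drifting, non-stationary MDP.

```python
# Toy time-decayed kernel estimate; interface and constants are assumptions.
import numpy as np

def kernel_q(query, data, bandwidth=0.5, decay=0.99):
    # data: iterable of (state_action_vec, observed_return, timestep).
    t_now = max(t for _, _, t in data)
    w, v = [], []
    for sa, ret, t in data:
        sim = np.exp(-np.sum((query - sa) ** 2) / (2 * bandwidth ** 2))
        w.append(sim * decay ** (t_now - t))  # discount stale transitions
        v.append(ret)
    w = np.array(w)
    return float(np.dot(w, v) / (w.sum() + 1e-12))
```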