Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity
- URL: http://arxiv.org/abs/2405.17462v2
- Date: Wed, 29 May 2024 17:11:04 GMT
- Title: Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity
- Authors: Hanlin Gu, Win Kent Ong, Chee Seng Chan, Lixin Fan
- Abstract summary: The advent of Federated Learning (FL) highlights the practical necessity for the 'right to be forgotten' for all clients.
Feature unlearning has gained considerable attention due to its applications in unlearning sensitive features, backdoor features, and bias features.
Existing methods employ the influence function to achieve feature unlearning, which is impractical for FL as it necessitates the participation of other clients in the unlearning process.
We propose an effective federated feature unlearning framework called Ferrari, which minimizes feature sensitivity.
- Score: 16.800865928660954
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of Federated Learning (FL) highlights the practical necessity for the 'right to be forgotten' for all clients, allowing them to request data deletion from the machine learning model's service provider. This necessity has spurred a growing demand for Federated Unlearning (FU). Feature unlearning has gained considerable attention due to its applications in unlearning sensitive features, backdoor features, and bias features. Existing methods employ the influence function to achieve feature unlearning, which is impractical for FL as it necessitates the participation of other clients in the unlearning process. Furthermore, current research lacks an evaluation of the effectiveness of feature unlearning. To address these limitations, we define feature sensitivity in the evaluation of feature unlearning according to Lipschitz continuity. This metric characterizes the rate of change or sensitivity of the model output to perturbations in the input feature. We then propose an effective federated feature unlearning framework called Ferrari, which minimizes feature sensitivity. Extensive experimental results and theoretical analysis demonstrate the effectiveness of Ferrari across various feature unlearning scenarios, including sensitive, backdoor, and biased features.
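The sensitivity definition above lends itself to a direct perturbation-based implementation. The following is a minimal PyTorch sketch of the idea under stated assumptions (tabular inputs with the unlearned feature given as a column index, Gaussian perturbations, and illustrative hyper-parameters); it is not the authors' exact recipe, only the Lipschitz-ratio objective the abstract describes, minimized locally by the requesting client.

```python
import torch

def feature_sensitivity(model, x, feature_idx, sigma=0.1, n_samples=8):
    """Monte-Carlo estimate of the Lipschitz-style sensitivity
    E[ ||f(x') - f(x)|| / ||x' - x|| ], where x' perturbs only the
    target feature. Differentiable w.r.t. the model parameters.
    (Assumption: 2-D inputs where `feature_idx` selects a column.)"""
    y = model(x)
    ratios = []
    for _ in range(n_samples):
        x_pert = x.clone()
        # perturb only the feature slated for unlearning
        x_pert[:, feature_idx] = (x[:, feature_idx]
                                  + sigma * torch.randn_like(x[:, feature_idx]))
        y_pert = model(x_pert)
        num = (y_pert - y).reshape(len(x), -1).norm(dim=1)
        den = (x_pert - x).reshape(len(x), -1).norm(dim=1).clamp_min(1e-8)
        ratios.append(num / den)
    return torch.stack(ratios).mean()

def unlearn_feature(model, loader, feature_idx, epochs=5, lr=1e-4):
    """Local unlearning loop: the requesting client minimizes the model's
    sensitivity to the forgotten feature; no other client participates."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            loss = feature_sensitivity(model, x, feature_idx)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```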
Related papers
- Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimate the impact of individual training samples on model parameters without retraining. This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning). We introduce the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
arXiv Detail & Related papers (2025-07-31T05:34:27Z)
- The Importance of Being Lazy: Scaling Limits of Continual Learning [60.97756735877614]
We show that increasing model width is only beneficial when it reduces the amount of feature learning, yielding more laziness. We study the intricate relationship between feature learning, task non-stationarity, and forgetting, finding that high feature learning is only beneficial with highly similar tasks.
arXiv Detail & Related papers (2025-06-20T10:12:38Z)
- UniErase: Unlearning Token as a Universal Erasure Primitive for Language Models [54.75551043657238]
We introduce UniErase, a novel unlearning paradigm that employs a learnable parametric suffix (an unlearning token) to steer language models toward targeted forgetting behaviors. UniErase achieves state-of-the-art (SOTA) performance across batch, sequential, and precise unlearning under fictitious and real-world knowledge settings.
arXiv Detail & Related papers (2025-05-21T15:53:28Z)
- AdaF^2M^2: Comprehensive Learning and Responsive Leveraging Features in Recommendation System [16.364341783911414]
We propose a model-agnostic framework AdaF2M2, short for Adaptive Feature Modeling with Feature Mask.
By arming base models with AdaF2M2, we conduct online A/B tests on multiple recommendation scenarios, obtaining +1.37% and +1.89% cumulative improvements on user active days and app duration respectively.
arXiv Detail & Related papers (2025-01-27T06:49:27Z)
- FAIREDU: A Multiple Regression-Based Method for Enhancing Fairness in Machine Learning Models for Educational Applications [1.24497353837144]
This paper introduces FAIREDU, a novel and effective method designed to improve fairness across multiple sensitive features.
Through extensive experiments, we evaluate FAIREDU's effectiveness in enhancing fairness without compromising model performance.
The results demonstrate that FAIREDU addresses intersectionality across features such as gender, race, age, and other sensitive features, outperforming state-of-the-art methods with minimal effect on model accuracy.
arXiv Detail & Related papers (2024-10-08T23:29:24Z)
- Can Learned Optimization Make Reinforcement Learning Less Difficult? [70.5036361852812]
We consider whether learned optimization can help overcome reinforcement learning difficulties.
Our method, Learned Optimization for Plasticity, Exploration and Non-stationarity (OPEN), meta-learns an update rule whose input features and output structure are informed by previously proposed solutions to these difficulties.
arXiv Detail & Related papers (2024-07-09T17:55:23Z)
- Unveiling Hidden Factors: Explainable AI for Feature Boosting in Speech Emotion Recognition [17.568724398229232]
Speech emotion recognition (SER) has gained significant attention due to its applications in fields such as mental health, education, and human-computer interaction.
This study proposes an iterative feature boosting approach for SER that emphasizes feature relevance and explainability to enhance machine learning model performance.
The effectiveness of the proposed method is validated on the SER benchmarks of the Toronto emotional speech set (TESS), Berlin Database of Emotional Speech (EMO-DB), Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and Surrey Audio-Visual Expressed Emotion (SAVEE) datasets.
arXiv Detail & Related papers (2024-06-01T00:39:55Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
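To make the influence-function machinery concrete, here is a hedged PyTorch sketch of the generic parameter-change estimate such methods rely on, using a LiSSA-style inverse-Hessian-vector product; it omits GIF's graph-specific structural terms, and all function names and hyper-parameters are assumptions for exposition.

```python
import torch

def grad_vec(loss, params, create_graph=False, retain_graph=None):
    # flatten the gradient of `loss` w.r.t. `params` into one vector
    grads = torch.autograd.grad(loss, params,
                                create_graph=create_graph,
                                retain_graph=retain_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def hvp(loss, params, v):
    # Hessian-vector product via double backward
    g = grad_vec(loss, params, create_graph=True)
    return grad_vec(g @ v, params, retain_graph=True).detach()

def inverse_hvp_lissa(loss, params, v, iters=100, damping=0.01, scale=25.0):
    # LiSSA recursion h <- v + (I - (H + damping*I)/scale) h, so that
    # h/scale converges to (H + damping*I)^{-1} v when ||H/scale|| < 1
    h = v.clone()
    for _ in range(iters):
        h = v + h - (hvp(loss, params, h) + damping * h) / scale
    return h / scale

def influence_unlearn(model, train_loss, removed_loss, n_train):
    # closed-form parameter change for deleting the points behind
    # `removed_loss` from a training objective averaged over n_train points
    params = [p for p in model.parameters() if p.requires_grad]
    v = grad_vec(removed_loss, params).detach()
    delta = inverse_hvp_lissa(train_loss, params, v) / n_train
    offset = 0
    with torch.no_grad():
        for p in params:
            k = p.numel()
            p.add_(delta[offset:offset + k].view_as(p))
            offset += k
    return model
```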
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more those clients benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
- Self-supervised remote sensing feature learning: Learning Paradigms, Challenges, and Future Works [9.36487195178422]
This paper analyzes and compares three feature learning paradigms: unsupervised feature learning (USFL), supervised feature learning (SFL), and self-supervised feature learning (SSFL).
Under this unified framework, we analyze the advantages of SSFL over the other two learning paradigms in remote sensing image (RSI) understanding tasks.
We analyze the effect of SSFL signals and pre-training data on the learned features to provide insights for improving RSI feature learning.
arXiv Detail & Related papers (2022-11-15T13:32:22Z)
- Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies with historical data, has been extensively applied in real-life applications.
We take a step forward by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z)
- Continual Feature Selection: Spurious Features in Continual Learning [0.0]
This paper studies spurious features' influence on continual learning algorithms.
We show that learning algorithms solve tasks by overfitting to features that are not generalizable.
arXiv Detail & Related papers (2022-03-02T10:43:54Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first method for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
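As a concrete illustration of a closed-form update, the sketch below assumes unlearning a feature amounts to replacing affected points z with corrected versions z_tilde (e.g. the sensitive feature masked out) and shifting the parameters along the first-order gradient difference; the step size tau and the masking scheme are illustrative, not the paper's exact certified update.

```python
import torch

def first_order_unlearn(model, loss_fn, z, y, z_tilde, y_tilde, tau=1e-3):
    """Shift the parameters as if the corrected points z_tilde had been
    used during training instead of the original points z."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_old = torch.autograd.grad(loss_fn(model(z), y), params)
    g_new = torch.autograd.grad(loss_fn(model(z_tilde), y_tilde), params)
    with torch.no_grad():
        for p, go, gn in zip(params, g_old, g_new):
            # closed-form first-order correction of the trained parameters
            p.add_(-tau * (gn - go))
    return model
```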
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Adma: A Flexible Loss Function for Neural Networks [0.0]
We propose that, rather than remaining the static plugins that currently available loss functions are, loss functions should be flexible by default.
A flexible loss function can be a more insightful navigator for neural networks leading to higher convergence rates.
We introduce a novel flexible loss function for neural networks.
arXiv Detail & Related papers (2020-07-23T02:41:09Z)
- Feature Selection Library (MATLAB Toolbox) [1.2058143465239939]
The Feature Selection Library (FSLib) introduces a comprehensive suite of feature selection (FS) algorithms.
FSLib addresses the curse of dimensionality, reduces computational load, and enhances model generalizability.
FSLib contributes to data interpretability by revealing important features, aiding in pattern recognition and understanding.
arXiv Detail & Related papers (2016-07-05T16:50:42Z)