Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity
- URL: http://arxiv.org/abs/2405.17462v2
- Date: Wed, 29 May 2024 17:11:04 GMT
- Title: Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity
- Authors: Hanlin Gu, Win Kent Ong, Chee Seng Chan, Lixin Fan
- Abstract summary: The advent of Federated Learning (FL) highlights the practical necessity for the 'right to be forgotten' for all clients.
Feature unlearning has gained considerable attention due to its applications in unlearning sensitive features, backdoor features, and bias features.
Existing methods employ the influence function to achieve feature unlearning, which is impractical for FL as it necessitates the participation of other clients in the unlearning process.
We propose an effective federated feature unlearning framework called Ferrari, which minimizes feature sensitivity.
- Score: 16.800865928660954
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of Federated Learning (FL) highlights the practical necessity for the 'right to be forgotten' for all clients, allowing them to request data deletion from the machine learning model's service provider. This necessity has spurred a growing demand for Federated Unlearning (FU). Feature unlearning has gained considerable attention due to its applications in unlearning sensitive features, backdoor features, and bias features. Existing methods employ the influence function to achieve feature unlearning, which is impractical for FL as it necessitates the participation of other clients in the unlearning process. Furthermore, current research lacks an evaluation of the effectiveness of feature unlearning. To address these limitations, we define feature sensitivity in the evaluation of feature unlearning according to Lipschitz continuity. This metric characterizes the rate of change or sensitivity of the model output to perturbations in the input feature. We then propose an effective federated feature unlearning framework called Ferrari, which minimizes feature sensitivity. Extensive experimental results and theoretical analysis demonstrate the effectiveness of Ferrari across various feature unlearning scenarios, including sensitive, backdoor, and biased features.
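The feature-sensitivity metric described in the abstract, defined via Lipschitz continuity, measures how strongly the model output changes under perturbations of a given input feature. A minimal sketch of such a perturbation-based estimator is below; the `model` callable, perturbation scale, and sample count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def feature_sensitivity(model, x, feature_idx, eps=1e-2, n_samples=32, rng=None):
    """Estimate the Lipschitz-style sensitivity of `model` to selected features.

    Perturbs only the coordinates in `feature_idx` and returns the mean ratio
    ||f(x + delta) - f(x)|| / ||delta|| over random perturbations.
    """
    rng = rng or np.random.default_rng(0)
    base = model(x)
    ratios = []
    for _ in range(n_samples):
        delta = np.zeros_like(x)
        delta[feature_idx] = rng.uniform(-eps, eps, size=len(feature_idx))
        ratios.append(np.linalg.norm(model(x + delta) - base) / np.linalg.norm(delta))
    return float(np.mean(ratios))

# Toy linear "model" whose output depends only on feature 0.
model = lambda x: np.array([3.0 * x[0]])
x = np.array([1.0, 2.0])
print(feature_sensitivity(model, x, [0]))  # ≈ 3.0: a sensitive feature
print(feature_sensitivity(model, x, [1]))  # ≈ 0.0: an already "unlearned" feature
```

Under this view, unlearning a feature amounts to driving its sensitivity toward zero, which is the quantity Ferrari minimizes.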
Related papers
- Unveiling Hidden Factors: Explainable AI for Feature Boosting in Speech Emotion Recognition [17.568724398229232]
Speech emotion recognition (SER) has gained significant attention due to its several application fields, such as mental health, education, and human-computer interaction.
This study proposes an iterative feature boosting approach for SER that emphasizes feature relevance and explainability to enhance machine learning model performance.
The effectiveness of the proposed method is validated on the SER benchmarks of the Toronto emotional speech set (TESS), Berlin Database of Emotional Speech (EMO-DB), Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and Surrey Audio-Visual Expressed Emotion (SAVEE) datasets.
arXiv Detail & Related papers (2024-06-01T00:39:55Z) - Vlearn: Off-Policy Learning with Efficient State-Value Function Estimation [22.129001951441015]
Existing off-policy reinforcement learning algorithms often rely on an explicit state-action-value function representation.
This reliance results in data inefficiency as maintaining a state-action-value function in high-dimensional action spaces is challenging.
We present an efficient approach that utilizes only a state-value function as the critic for off-policy deep reinforcement learning.
arXiv Detail & Related papers (2024-03-07T12:45:51Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to a $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z) - Self-supervised remote sensing feature learning: Learning Paradigms, Challenges, and Future Works [9.36487195178422]
This paper analyzes and compares three feature learning paradigms: unsupervised feature learning (USFL), supervised feature learning (SFL), and self-supervised feature learning (SSFL).
Under this unified framework, we analyze the advantages of SSFL over the other two learning paradigms in RSIs understanding tasks.
We analyze the effect of SSFL signals and pre-training data on the learned features to provide insights for improving the RSI feature learning.
arXiv Detail & Related papers (2022-11-15T13:32:22Z) - Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies from historical data, has been extensively applied in real-life applications.
We take a step by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z) - Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
arXiv Detail & Related papers (2021-08-26T04:42:24Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
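The secure aggregation mentioned above hides individual client updates from the server while leaving their sum intact. A toy sketch of the core idea, pairwise masks that cancel in aggregation; real protocols (e.g., Bonawitz et al.-style secure aggregation, which RoFL builds on) use cryptographic key agreement and dropout handling, not shared seeds like this.

```python
import numpy as np

def masked_updates(updates, seed=0):
    """Toy secure aggregation: each client pair (i, j), i < j, shares a random
    mask; client i adds it, client j subtracts it. Each masked update looks
    random on its own, but the masks cancel when the server sums them.
    """
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask
            masked[j] -= mask
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masked = masked_updates(updates)
# The server sees only masked updates, yet recovers the true aggregate.
print(np.allclose(sum(masked), sum(updates)))  # True
```

This confidentiality is exactly what complicates robustness: the server cannot inspect individual updates for anomalies, which motivates RoFL's attestable-robustness design.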
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z) - Adma: A Flexible Loss Function for Neural Networks [0.0]
We propose that loss functions, rather than being the static plugins they currently are, should by default be flexible in nature.
A flexible loss function can be a more insightful navigator for neural networks leading to higher convergence rates.
We introduce a novel flexible loss function for neural networks.
arXiv Detail & Related papers (2020-07-23T02:41:09Z) - Feature Selection Library (MATLAB Toolbox) [1.2058143465239939]
The Feature Selection Library (FSLib) introduces a comprehensive suite of feature selection (FS) algorithms.
FSLib addresses the curse of dimensionality, reduces computational load, and enhances model generalizability.
FSLib contributes to data interpretability by revealing important features, aiding in pattern recognition and understanding.
arXiv Detail & Related papers (2016-07-05T16:50:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.