Private Cross-Silo Federated Learning for Extracting Vaccine Adverse
Event Mentions
- URL: http://arxiv.org/abs/2103.07491v1
- Date: Fri, 12 Mar 2021 19:20:33 GMT
- Title: Private Cross-Silo Federated Learning for Extracting Vaccine Adverse
Event Mentions
- Authors: Pallika Kanani, Virendra J. Marathe, Daniel Peterson, Rave Harpaz,
Steve Bright
- Abstract summary: Federated Learning (FL) is a go-to distributed training paradigm for users to jointly train a global model without physically sharing their data.
We present a comprehensive empirical analysis of various dimensions of benefits gained with FL-based training.
We show that local DP can severely cripple the global model's prediction accuracy, thus disincentivizing users from participating in the federation.
- Score: 0.7349727826230862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is quickly becoming a go-to distributed training
paradigm for users to jointly train a global model without physically sharing
their data. Users can indirectly contribute to, and directly benefit from a
much larger aggregate data corpus used to train the global model. However,
literature on successful application of FL in real-world problem settings is
somewhat sparse. In this paper, we describe our experience applying an FL-based
solution to the Named Entity Recognition (NER) task for an adverse event
detection application in the context of mass scale vaccination programs. We
present a comprehensive empirical analysis of various dimensions of benefits
gained with FL-based training. Furthermore, we investigate effects of tighter
Differential Privacy (DP) constraints in highly sensitive settings where
federation users must enforce Local DP to ensure strict privacy guarantees. We
show that local DP can severely cripple the global model's prediction accuracy,
thus disincentivizing users from participating in the federation. In response,
we demonstrate how recent innovation on personalization methods can help
significantly recover the lost accuracy. We focus our analysis on the Federated
Fine-Tuning algorithm, FedFT, and prove that it is not PAC Identifiable, thus
making it even more attractive for FL-based training.
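The paper's own implementation is not reproduced here. As a minimal, hypothetical sketch of the mechanisms the abstract discusses (client-side Gaussian noising for Local DP, server-side averaging of client updates, and FedFT-style local fine-tuning for personalization), with all function names and parameters invented for illustration:

```python
import math
import random

def clip(update, c):
    # L2-clip a client's model update to norm bound c so the
    # DP noise scale can be calibrated to a known sensitivity.
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, c / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def local_dp_update(update, c, sigma, rng):
    # Local DP via the Gaussian mechanism, applied on the client
    # BEFORE the update leaves the device: clip, then add noise
    # proportional to the clipping bound. Larger sigma means
    # stronger privacy but (as the paper observes) lower accuracy.
    clipped = clip(update, c)
    return [u + rng.gauss(0.0, sigma * c) for u in clipped]

def average_updates(updates):
    # Server-side FedAvg-style aggregation of the noisy updates.
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

def personalize(global_weights, local_grad, lr=0.1):
    # FedFT-style personalization sketch: a few plain gradient
    # steps on the client's own (private, local) data to recover
    # accuracy lost to the DP noise in the global model.
    return [w - lr * g for w, g in zip(global_weights, local_grad)]

rng = random.Random(0)
client_updates = [[0.5, -0.2], [0.4, -0.1], [0.6, -0.3]]
noisy = [local_dp_update(u, c=1.0, sigma=0.5, rng=rng) for u in client_updates]
global_step = average_updates(noisy)
```

The key design point the abstract highlights is where the noise is added: under Local DP each client perturbs its own update, so noise accumulates per client rather than once at the server, which is why accuracy degrades sharply and personalization becomes attractive.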
Related papers
- Privacy-Preserving Federated Learning via Dataset Distillation [9.60829979241686]
Federated Learning (FL) allows users to share knowledge instead of raw data to train a model with high accuracy.
During the training, users lose control over the knowledge shared, which causes serious data privacy issues.
This work proposes FLiP, which aims to bring the principle of least privilege (PoLP) to FL training.
arXiv Detail & Related papers (2024-10-25T13:20:40Z)
- FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual Teacher [4.291269657919828]
Federated Learning (FL) promises better privacy guarantees for individuals' data when machine learning models are collaboratively trained.
When an FL participant exercises its right to be forgotten, i.e., to detach from the FL framework in which it has participated, the FL solution should perform all the necessary unlearning steps.
We propose FedQUIT, a novel algorithm that uses knowledge distillation to scrub the contribution of the forgetting data from an FL global model.
arXiv Detail & Related papers (2024-08-14T14:36:28Z)
- Federated Unlearning for Human Activity Recognition [11.287645073129108]
We propose a lightweight machine unlearning method for refining the FL HAR model by selectively removing a portion of a client's training data.
Our method achieves unlearning accuracy comparable to retraining methods, with speedups ranging from hundreds to thousands of times.
arXiv Detail & Related papers (2024-01-17T15:51:36Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL becomes an unneglectable challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Sparse Federated Learning with Hierarchical Personalized Models [24.763028713043468]
Federated learning (FL) can achieve privacy-safe and reliable collaborative training without collecting users' private data.
We propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP).
A continuously differentiable approximation of the L1-norm is also used as the sparse constraint to reduce the communication cost.
arXiv Detail & Related papers (2022-03-25T09:06:42Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.