Efficient Language Model Architectures for Differentially Private
Federated Learning
- URL: http://arxiv.org/abs/2403.08100v1
- Date: Tue, 12 Mar 2024 22:21:48 GMT
- Title: Efficient Language Model Architectures for Differentially Private
Federated Learning
- Authors: Jae Hun Ro, Srinadh Bhojanapalli, Zheng Xu, Yanxiang Zhang, Ananda
Theertha Suresh
- Abstract summary: Cross-device federated learning (FL) is a technique that trains a model on data distributed across typically millions of edge devices without data leaving the devices.
In centralized training of language models, adaptive optimizers are preferred as they offer improved stability and performance.
We propose a scale-invariant Coupled Input Forget Gate (SI CIFG) recurrent network by modifying the sigmoid and tanh activations in the recurrent cell.
- Score: 21.280600854272716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-device federated learning (FL) is a technique that trains a model on
data distributed across typically millions of edge devices without data leaving
the devices. SGD is the standard client optimizer for on-device training in
cross-device FL, favored for its memory and computational efficiency. However,
in centralized training of neural language models, adaptive optimizers are
preferred as they offer improved stability and performance. In light of this,
we ask if language models can be modified such that they can be efficiently
trained with SGD client optimizers and answer this affirmatively.
We propose a scale-invariant Coupled Input Forget Gate (SI CIFG) recurrent
network by modifying the sigmoid and tanh activations in the recurrent cell and
show that this new model converges faster and achieves better utility than the
standard CIFG recurrent model in cross-device FL in large-scale experiments. We
further show that the proposed scale-invariant modification also helps in
federated learning of larger transformer models. Finally, we demonstrate that the
scale-invariant modification is also compatible with other non-adaptive
algorithms. In particular, our results suggest an improved privacy-utility
trade-off in federated learning with differential privacy.
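The abstract above describes making the sigmoid and tanh activations of a CIFG recurrent cell scale invariant so that the model trains well with plain SGD on clients. As a rough sketch of that idea (not the paper's exact formulation), the Python snippet below implements one CIFG step in which every gate pre-activation is L2-normalized before the nonlinearity, so rescaling the pre-activation leaves the gate output essentially unchanged; the function names, the specific normalization, and the parameter layout are illustrative assumptions.

```python
# Minimal sketch (NumPy) of a Coupled Input-Forget Gate (CIFG) cell with a
# hypothetical "scale-invariant" nonlinearity: each gate pre-activation is
# divided by its L2 norm before sigmoid/tanh. This is an illustrative
# assumption, not the exact modification proposed in the paper.
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def _scale_invariant(act, z, eps=1e-6):
    # Normalize the pre-activation vector before applying the nonlinearity,
    # so act(c * z) gives essentially the same output for any positive scale c.
    return act(z / (np.linalg.norm(z) + eps))

def si_cifg_step(x, h_prev, c_prev, params):
    """One step of the sketched scale-invariant CIFG cell.

    params maps "f" (forget gate), "g" (candidate cell), and "o" (output gate)
    to (W, U, b): input weights, recurrent weights, and bias. In a CIFG cell
    the input gate is coupled to the forget gate: i = 1 - f.
    """
    W_f, U_f, b_f = params["f"]
    W_g, U_g, b_g = params["g"]
    W_o, U_o, b_o = params["o"]

    f = _scale_invariant(_sigmoid, W_f @ x + U_f @ h_prev + b_f)  # forget gate
    g = _scale_invariant(np.tanh, W_g @ x + U_g @ h_prev + b_g)   # candidate state
    o = _scale_invariant(_sigmoid, W_o @ x + U_o @ h_prev + b_o)  # output gate

    c = f * c_prev + (1.0 - f) * g  # coupled input gate: i = 1 - f
    h = o * np.tanh(c)
    return h, c

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
d_in, d_hid = 8, 16
params = {k: (rng.normal(size=(d_hid, d_in)) * 0.1,
              rng.normal(size=(d_hid, d_hid)) * 0.1,
              np.zeros(d_hid))
          for k in ("f", "g", "o")}
h, c = si_cifg_step(rng.normal(size=d_in), np.zeros(d_hid), np.zeros(d_hid), params)
```

One plausible reading is that scale-invariant gates are less sensitive to the magnitude of weight updates, which would make plain SGD (and clipped, noised updates under differential privacy) easier to tune; the paper's experiments, not this sketch, are the evidence for that claim.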
Related papers
- Heterogeneous Federated Learning with Splited Language Model [22.65325348176366]
Federated Split Learning (FSL) is a promising distributed learning paradigm in practice.
In this paper, we harness Pre-trained Image Transformers (PITs) as the initial model, coined FedV, to accelerate the training process and improve model robustness.
We are the first to provide a systematic evaluation of FSL methods with PITs in real-world datasets, different partial device participations, and heterogeneous data splits.
arXiv Detail & Related papers (2024-03-24T07:33:08Z) - OnDev-LCT: On-Device Lightweight Convolutional Transformers towards
federated learning [29.798780069556074]
Federated learning (FL) has emerged as a promising approach to collaboratively train machine learning models across multiple edge devices.
We propose OnDev-LCT: Lightweight Convolutional Transformers for On-Device vision tasks with limited training data and resources.
arXiv Detail & Related papers (2024-01-22T02:17:36Z) - AdapterFL: Adaptive Heterogeneous Federated Learning for
Resource-constrained Mobile Computing Systems [24.013937378054074]
Federated Learning (FL) enables collaborative learning of large-scale distributed clients without data sharing.
Resource-constrained mobile computing systems can only use small, low-performance models for collaborative learning.
We use a model reassemble strategy to facilitate collaborative training of massive heterogeneous mobile devices adaptively.
arXiv Detail & Related papers (2023-11-23T14:42:43Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - AQUILA: Communication Efficient Federated Learning with Adaptive
Quantization in Device Selection Strategy [27.443439653087662]
This paper introduces AQUILA (adaptive quantization in device selection strategy), a novel adaptive framework devised to handle these issues.
AQUILA integrates a sophisticated device selection method that prioritizes the quality and usefulness of device updates.
Our experiments demonstrate that AQUILA significantly decreases communication costs compared to existing methods.
arXiv Detail & Related papers (2023-08-01T03:41:47Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Stochastic Coded Federated Learning: Theoretical Analysis and Incentive
Mechanism Design [18.675244280002428]
We propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques.
In SCFL, each edge device uploads a privacy-preserving coded dataset to the server, which is generated by adding noise to the projected local dataset.
We show that SCFL learns a better model within the given time and achieves a better privacy-performance tradeoff than the baseline methods.
arXiv Detail & Related papers (2022-11-08T09:58:36Z) - Factorized Neural Transducer for Efficient Language Model Adaptation [51.81097243306204]
We propose a novel model, factorized neural Transducer, by factorizing the blank and vocabulary prediction.
It is expected that this factorization can transfer the improvement of the standalone language model to the Transducer for speech recognition.
We demonstrate that the proposed factorized neural Transducer yields 15% to 20% WER improvements when out-of-domain text data is used for language model adaptation.
arXiv Detail & Related papers (2021-09-27T15:04:00Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)