Communication-Efficient and Privacy-Preserving Feature-based Federated
Transfer Learning
- URL: http://arxiv.org/abs/2209.05395v1
- Date: Mon, 12 Sep 2022 16:48:52 GMT
- Title: Communication-Efficient and Privacy-Preserving Feature-based Federated
Transfer Learning
- Authors: Feng Wang, M. Cenk Gursoy and Senem Velipasalar
- Abstract summary: Federated learning has attracted growing interest as it preserves the clients' privacy.
Due to the limited radio spectrum, the communication efficiency of federated learning via wireless links is critical.
We propose feature-based federated transfer learning as an innovative approach that reduces the uplink payload by more than five orders of magnitude.
- Score: 11.758703301702012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has attracted growing interest as it preserves the
clients' privacy. As a variant of federated learning, federated transfer
learning utilizes the knowledge from similar tasks and thus has also been
intensively studied. However, due to the limited radio spectrum, the
communication efficiency of federated learning via wireless links is critical
since some tasks may require thousands of terabytes of uplink payload. To
improve communication efficiency, in this paper we propose feature-based
federated transfer learning as an innovative approach that reduces the uplink
payload by more than five orders of magnitude compared to that of
existing approaches. We first introduce the system design in which the
extracted features and outputs are uploaded instead of parameter updates, and
then determine the required payload with this approach and provide comparisons
with the existing approaches. Subsequently, we analyze the random shuffling
scheme that preserves the clients' privacy. Finally, we evaluate the
performance of the proposed learning scheme via experiments on an image
classification task to show its effectiveness.
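As a rough illustration of the upload step, here is a minimal Python (NumPy-only) sketch: the client pushes its local samples through a frozen, pre-trained feature extractor and uploads the resulting (feature, output) pairs after a joint random shuffle, rather than uploading parameter updates. The single-layer extractor, all shapes, and all names are hypothetical placeholders; the paper's actual architecture, protocol, and shuffling analysis are more involved.
```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, W_frozen):
    """Frozen, pre-trained feature extractor. A single ReLU layer stands in
    for the paper's deep backbone, purely for illustration."""
    return np.maximum(x @ W_frozen, 0.0)

def client_upload(x_local, y_local, W_frozen):
    """Client step: upload (feature, output) pairs instead of parameter
    updates. The joint random shuffle decouples uploaded pairs from the
    collection order (a stand-in for the paper's shuffling scheme)."""
    feats = extract_features(x_local, W_frozen)
    perm = rng.permutation(len(feats))        # random shuffling
    return feats[perm], y_local[perm]

# Hypothetical sizes: 100 local samples, 32-dim inputs, 16-dim features.
W_frozen = rng.standard_normal((32, 16))
x = rng.standard_normal((100, 32))
y = rng.integers(0, 10, size=100)             # outputs (labels here)
feats_up, outputs_up = client_upload(x, y, W_frozen)
# Uplink payload scales with num_samples * feature_dim, not model size.
print(feats_up.shape, outputs_up.shape)       # (100, 16) (100,)
```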
Related papers
- Feature-based Federated Transfer Learning: Communication Efficiency, Robustness and Privacy [11.308544280789016]
We propose feature-based federated transfer learning as a novel approach to improve communication efficiency.
Specifically, in the proposed feature-based federated learning, we design the extracted features and outputs to be uploaded instead of parameter updates.
We evaluate the performance of the proposed learning scheme via experiments on an image classification task and a natural language processing task to demonstrate its effectiveness.
arXiv Detail & Related papers (2024-05-15T00:43:19Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
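For intuition, here is a minimal per-client AMSGrad-style step in Python: each client keeps its own moment estimates, so the effective learning rate lr / sqrt(v_hat) adapts to that client's local gradient statistics. This is a generic sketch of the client-specific adaptive learning-rate idea, not FedLALR's exact update or scheduling rule.
```python
import numpy as np

class ClientAMSGrad:
    """Per-client AMSGrad-style state: each client adapts its own effective
    step size from local gradient statistics. A generic sketch only."""
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        self.m = np.zeros(dim)                # first-moment estimate
        self.v = np.zeros(dim)                # second-moment estimate
        self.v_hat = np.zeros(dim)            # AMSGrad running max of v
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        # Effective learning rate lr / sqrt(v_hat) differs across clients.
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Hypothetical use: one local step on a 5-dim model for one client.
opt = ClientAMSGrad(dim=5)
w = opt.step(np.zeros(5), grad=np.random.randn(5))
```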
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks [21.613436984547917]
We propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg).
FedClassAvg aggregates classifier weights as an agreement on decision boundaries in feature space.
We demonstrate that it outperforms the current state-of-the-art algorithms on heterogeneous personalized federated learning tasks.
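A minimal sketch of the aggregation idea: only the classifier-head parameters are averaged across clients, while each client's (possibly heterogeneous) feature extractor stays local. The shapes and weighting are hypothetical; the published method combines this averaging with local representation learning.
```python
import numpy as np

def fed_class_avg(classifier_heads, weights=None):
    """Average only classifier-head parameters across clients; the
    heterogeneous feature extractors never leave the clients."""
    stacked = np.stack(classifier_heads)   # (n_clients, feat_dim, classes)
    if weights is None:
        return stacked.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return (stacked * w[:, None, None]).sum(axis=0) / w.sum()

# Hypothetical: 3 clients sharing a (16 -> 10) classifier head.
heads = [np.random.randn(16, 10) for _ in range(3)]
global_head = fed_class_avg(heads)         # broadcast back to clients
```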
arXiv Detail & Related papers (2022-10-25T08:32:08Z)
- SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
Our experiments demonstrate that our approach significantly improves the feedback-efficiency of the preference-based method on a variety of locomotion and robotic manipulation tasks.
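The pseudo-labeling step can be sketched as follows: given the preference predictor's probability that segment A is preferred over segment B for each unlabeled pair, keep only the pairs the predictor is confident about and assign them hard labels. The threshold value is illustrative, and SURF's temporal data augmentation is omitted here.
```python
import numpy as np

def pseudo_label_pairs(pref_probs, threshold=0.9):
    """Confidence-based pseudo-labeling of unlabeled segment pairs.
    pref_probs[i] is the predicted probability that segment A of pair i
    is preferred over segment B. Low-confidence pairs are discarded."""
    confident = np.maximum(pref_probs, 1.0 - pref_probs) >= threshold
    labels = (pref_probs >= 0.5).astype(int)   # 1 => A preferred, 0 => B
    return np.nonzero(confident)[0], labels[confident]

probs = np.array([0.97, 0.55, 0.08, 0.62, 0.93])
idx, labels = pseudo_label_pairs(probs)
print(idx, labels)    # pairs 0, 2, 4 kept with labels 1, 0, 1
```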
arXiv Detail & Related papers (2022-03-18T16:50:38Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- ProtoDA: Efficient Transfer Learning for Few-Shot Intent Classification [21.933876113300897]
We adopt an alternative approach by transfer learning on an ensemble of related tasks using prototypical networks under the meta-learning paradigm.
Using intent classification as a case study, we demonstrate that increasing variability in training tasks can significantly improve classification performance.
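For reference, the prototypical-network step underlying this approach looks roughly like the following: each class prototype is the mean embedding of that class's support examples, and queries are assigned to the nearest prototype. This is the standard formulation; ProtoDA's contribution of meta-training over an ensemble of related tasks with augmentation is not shown.
```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Class prototypes: the mean embedding of each class's support set."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to its nearest prototype (Euclidean distance)."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Hypothetical 3-way episode: 4-dim embeddings, 5 shots per class.
emb = np.random.randn(15, 4)
labels = np.repeat(np.arange(3), 5)
preds = classify(np.random.randn(6, 4), prototypes(emb, labels, 3))
```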
arXiv Detail & Related papers (2021-01-28T00:19:13Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
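To make the compression step concrete, here is a generic nonlinear quantizer sketch: magnitudes are encoded on a nonuniform (square-root) scale so that small gradient entries keep more resolution at low bit widths. CosSGD's actual cosine-based quantization function differs; this stand-in only illustrates the nonlinear-codebook idea.
```python
import numpy as np

def nonlinear_quantize(g, bits=4):
    """Encode gradient magnitudes on a nonuniform (sqrt) scale so small
    values keep more resolution. Generic stand-in, not CosSGD's mapping."""
    levels = 2 ** bits - 1
    scale = np.abs(g).max() + 1e-12
    norm = np.abs(g) / scale                   # magnitudes in [0, 1]
    q = np.round(np.sqrt(norm) * levels)       # nonlinear codebook
    return q.astype(np.uint8), np.sign(g), scale

def dequantize(q, sign, scale, bits=4):
    levels = 2 ** bits - 1
    return sign * (q / levels) ** 2 * scale

g = np.random.randn(8) * 0.1
q, s, sc = nonlinear_quantize(g)
print(np.abs(dequantize(q, s, sc) - g).max())  # coarse but bounded error
```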
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
- Uniform Priors for Data-Efficient Transfer [65.086680950871]
We show that features that are most transferable have high uniformity in the embedding space.
We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data.
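One common way to measure (and regularize toward) such uniformity is the pairwise Gaussian-kernel loss over L2-normalized embeddings, sketched below; lower values indicate more uniformly spread features. Whether this matches the paper's exact prior/regularizer is an assumption on our part.
```python
import numpy as np

def uniformity_loss(z, t=2.0):
    """Uniformity of L2-normalized embeddings on the hypersphere: log of
    the mean Gaussian-kernel value over all distinct pairs. Lower is more
    uniform. A common formulation, used here only as a sketch."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sq = ((z[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1)
    iu = np.triu_indices(len(z), k=1)          # distinct pairs only
    return np.log(np.exp(-t * sq[iu]).mean())

print(uniformity_loss(np.random.randn(8, 4)))  # hypothetical batch
```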
arXiv Detail & Related papers (2020-06-30T04:39:36Z)
- Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach [55.83558520598304]
We propose a new solution for reusing experiences and transferring value functions among multiple students via model distillation.
We also describe how to design an efficient communication protocol to exploit heterogeneous knowledge.
Our proposed framework, namely Learning and Teaching Categorical Reinforcement, shows promising performance in stabilizing and accelerating learning progress.
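As a sketch of the distillation objective such a framework might use, the following computes the KL divergence from a teacher's categorical value distribution to a student's. The loss form, the logits inputs, and the batch averaging are illustrative assumptions; the paper's communication protocol and exact objective may differ.
```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def categorical_distill_loss(teacher_logits, student_logits):
    """Mean KL(teacher || student) over a batch of categorical value
    distributions -- one plausible peer-distillation objective."""
    p, q = softmax(teacher_logits), softmax(student_logits)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return kl.mean()

# Hypothetical: batch of 4 states, 51 value atoms (C51-style support).
t, s = np.random.randn(4, 51), np.random.randn(4, 51)
print(categorical_distill_loss(t, s))
```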
arXiv Detail & Related papers (2020-02-06T11:31:04Z)