Feature-based Federated Transfer Learning: Communication Efficiency, Robustness and Privacy
- URL: http://arxiv.org/abs/2405.09014v1
- Date: Wed, 15 May 2024 00:43:19 GMT
- Title: Feature-based Federated Transfer Learning: Communication Efficiency, Robustness and Privacy
- Authors: Feng Wang, M. Cenk Gursoy, Senem Velipasalar
- Abstract summary: We propose feature-based federated transfer learning as a novel approach to improve communication efficiency.
Specifically, in the proposed feature-based federated learning, we design the scheme so that clients upload extracted features and outputs instead of parameter updates.
We evaluate the performance of the proposed learning scheme via experiments on an image classification task and a natural language processing task to demonstrate its effectiveness.
- Score: 11.308544280789016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose feature-based federated transfer learning as a novel approach to improve communication efficiency by reducing the uplink payload by multiple orders of magnitude compared to that of existing approaches in federated learning and federated transfer learning. Specifically, in the proposed feature-based federated learning, we design the extracted features and outputs to be uploaded instead of parameter updates. For this distributed learning model, we determine the required payload and provide comparisons with the existing schemes. Subsequently, we analyze the robustness of feature-based federated transfer learning against packet loss, data insufficiency, and quantization. Finally, we address privacy considerations by defining and analyzing label privacy leakage and feature privacy leakage, and investigating mitigating approaches. For all aforementioned analyses, we evaluate the performance of the proposed learning scheme via experiments on an image classification task and a natural language processing task to demonstrate its effectiveness.
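For intuition, below is a minimal sketch of the feature-based upload scheme: each client runs a frozen, pre-trained feature extractor locally and uploads only the resulting features and labels, and the server trains the task head on the pooled uploads. All function names, dimensions, and the random-projection extractor are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of feature-based federated transfer learning (assumptions:
# a frozen shared extractor; softmax-regression head trained at the server).
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, W_frozen):
    # Stand-in for a frozen pre-trained extractor: random projection + ReLU.
    return np.maximum(x @ W_frozen, 0.0)

# --- Client side: upload features and labels, not parameter updates ---
def client_payload(x_local, y_local, W_frozen):
    z = extract_features(x_local, W_frozen)    # compact features
    return z, y_local                          # payload << full model size

# --- Server side: train only the task head on the uploaded features ---
def train_head(Z, y, n_classes, lr=0.1, epochs=100):
    W = np.zeros((Z.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                   # one-hot labels
    for _ in range(epochs):
        logits = Z @ W
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        W -= lr * Z.T @ (p - Y) / len(y)       # softmax-regression gradient
    return W

# Toy run: 2 clients, 64-dim inputs, 16-dim features, 3 classes.
W_frozen = rng.standard_normal((64, 16)) / 8.0
uploads = [client_payload(rng.standard_normal((50, 64)),
                          rng.integers(0, 3, 50), W_frozen) for _ in range(2)]
Z = np.vstack([z for z, _ in uploads])
y = np.concatenate([t for _, t in uploads])
W_head = train_head(Z, y, n_classes=3)
```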
Related papers
- Covariate-Elaborated Robust Partial Information Transfer with Conditional Spike-and-Slab Prior [1.111488407653005]
We propose a novel Bayesian transfer learning method named "CONCERT" to allow robust partial information transfer.
A conditional spike-and-slab prior is introduced in the joint distribution of target and source parameters for information transfer.
In contrast to existing work, CONCERT is a one-step procedure, which achieves variable selection and information transfer simultaneously.
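As a rough illustration of the mechanism (with notation that is ours, not necessarily the paper's exact formulation), a conditional spike-and-slab prior can couple each target coefficient to its source counterpart: a component concentrated at the source value transfers information for that coordinate, while a diffuse component opts out of transfer, so selection and transfer happen jointly.

```latex
% Illustrative notation only; the paper's exact formulation may differ.
\[
  \beta_j^{\mathrm{target}} \mid \beta_j^{\mathrm{source}},\, \gamma_j
  \;\sim\;
  \gamma_j\,\mathcal{N}\!\bigl(\beta_j^{\mathrm{source}},\,\tau^2\bigr)
  \;+\;(1-\gamma_j)\,\mathcal{N}\!\bigl(0,\,\sigma^2\bigr),
  \qquad
  \gamma_j \sim \mathrm{Bernoulli}(\pi_j),
\]
% with \tau^2 \ll \sigma^2: \gamma_j = 1 transfers the j-th source
% coordinate, \gamma_j = 0 leaves it free of the source.
```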
arXiv Detail & Related papers (2024-03-30T07:32:58Z) - UNIDEAL: Curriculum Knowledge Distillation Federated Learning [17.817181326740698]
Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients.
In this paper, we present UNIDEAL, a novel FL algorithm specifically designed to tackle the challenges of cross-domain scenarios.
Our results demonstrate that UNI achieves superior performance in terms of both model accuracy and communication efficiency.
arXiv Detail & Related papers (2023-09-16T11:30:29Z) - An Exploration of Data Efficiency in Intra-Dataset Task Transfer for Dialog Understanding [65.75873687351553]
This study explores the effects of varying quantities of target task training data on sequential transfer learning in the dialog domain.
Counterintuitively, our data shows that target task training data size often has minimal effect on how sequential transfer learning performs compared to the same model without transfer learning.
arXiv Detail & Related papers (2022-10-21T04:36:46Z) - Communication-Efficient and Privacy-Preserving Feature-based Federated Transfer Learning [11.758703301702012]
Federated learning has attracted growing interest as it preserves the clients' privacy.
Due to the limited radio spectrum, the communication efficiency of federated learning via wireless links is critical.
We propose feature-based federated transfer learning as an innovative approach to reduce the uplink payload by more than five orders of magnitude.
arXiv Detail & Related papers (2022-09-12T16:48:52Z) - SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
Our experiments demonstrate that our approach significantly improves the feedback-efficiency of the preference-based method on a variety of locomotion and robotic manipulation tasks.
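As a sketch of the pseudo-labeling step (the Bradley-Terry link, the confidence threshold, and all names are our assumptions, not the paper's code): unlabeled segment pairs are labeled with the preference predictor's own output whenever its confidence exceeds a threshold.

```python
# Hedged sketch of confidence-based pseudo-labeling for preference pairs.
import numpy as np

def pseudo_label_pairs(reward_model, segments_a, segments_b, threshold=0.9):
    """Label unlabeled segment pairs whose preference prediction is confident.

    reward_model(segment) -> scalar return estimate for a trajectory segment.
    Preference probability follows a Bradley-Terry model on predicted returns.
    """
    labeled = []
    for sa, sb in zip(segments_a, segments_b):
        ra, rb = reward_model(sa), reward_model(sb)
        p_a = 1.0 / (1.0 + np.exp(rb - ra))       # P(a preferred over b)
        conf = max(p_a, 1.0 - p_a)
        if conf >= threshold:                     # keep only confident pairs
            labeled.append((sa, sb, int(p_a < 0.5)))  # 0: a wins, 1: b wins
    return labeled

# Toy usage with a stand-in reward model (sum of a segment's values).
rng = np.random.default_rng(1)
segs_a = [rng.standard_normal(10) for _ in range(100)]
segs_b = [rng.standard_normal(10) for _ in range(100)]
pseudo = pseudo_label_pairs(lambda s: s.sum(), segs_a, segs_b)
```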
arXiv Detail & Related papers (2022-03-18T16:50:38Z) - Pattern Transfer Learning for Reinforcement Learning in Order Dispatching [12.747361275395011]
We propose a pattern transfer learning framework for value-based reinforcement learning in the order dispatch problem.
The superior performance of the proposed method is supported by experiments.
arXiv Detail & Related papers (2021-05-27T15:08:34Z) - CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization scheme for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
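A hedged sketch of the general idea, nonlinear companding before low-bit stochastic quantization, is below; the square-root mapping is a stand-in, since CosSGD's actual cosine-based function differs in detail.

```python
# Hedged sketch of nonlinear stochastic quantization for gradient compression.
import numpy as np

def quantize(g, bits=4, rng=None):
    """Compress a gradient to `bits` per entry plus a sign and one scale."""
    if rng is None:
        rng = np.random.default_rng(0)
    scale = np.abs(g).max() + 1e-12
    u = np.sqrt(np.abs(g) / scale)                # nonlinear companding to [0, 1]
    levels = 2 ** bits - 1
    lo = np.floor(u * levels)
    q = lo + (rng.random(g.shape) < u * levels - lo)  # stochastic rounding
    return np.sign(g).astype(np.int8), q.astype(np.uint8), scale

def dequantize(sign, q, scale, bits=4):
    u = q / (2 ** bits - 1)
    return sign * u ** 2 * scale                  # invert the companding

g = np.random.default_rng(1).standard_normal(1000) * 0.01
sign, q, scale = quantize(g)
print(float(np.abs(g - dequantize(sign, q, scale)).mean()))  # small error
```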
arXiv Detail & Related papers (2020-12-15T12:20:28Z) - A Theoretical Perspective on Differentially Private Federated Multi-task Learning [12.935153199667987]
Collaborative learning models need to be developed with respect to both privacy and utility concerns.
We propose a new federated multi-task learning method for effective parameter transfer, with differential privacy to provide protection at the client level.
We are the first to provide both privacy and utility guarantees for such a proposed algorithm.
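A minimal sketch of client-level protection via the Gaussian mechanism (clip each client's update, then add noise) is below; the clipping norm and noise multiplier are illustrative assumptions, and the paper's algorithm and privacy accounting are more involved.

```python
# Hedged sketch of client-level differential privacy for parameter transfer.
import numpy as np

def privatize_update(delta, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # Clip the update to bound each client's contribution (sensitivity).
    norm = np.linalg.norm(delta)
    delta = delta * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian mechanism: noise scale proportional to the sensitivity bound.
    return delta + rng.normal(0.0, noise_multiplier * clip_norm, delta.shape)

# The server aggregates privatized updates; per-client protection holds even
# against an honest-but-curious server.
updates = [np.random.default_rng(i).standard_normal(10) for i in range(5)]
aggregate = np.mean([privatize_update(u) for u in updates], axis=0)
```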
arXiv Detail & Related papers (2020-11-14T00:53:16Z) - On Learning Text Style Transfer with Direct Rewards [101.97136885111037]
Lack of parallel corpora makes it impossible to directly train supervised models for the text style transfer task.
We leverage semantic similarity metrics originally used for fine-tuning neural machine translation models.
Our model provides significant gains in both automatic and human evaluation over strong baselines.
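The following sketch shows the shape of a direct-reward objective, combining a content-preservation similarity with a target-style score; the toy hash embedding stands in for the NMT-derived semantic similarity metric, and all names and weights are our assumptions.

```python
# Hedged sketch of a direct reward for text style transfer outputs.
import numpy as np

def embed(text, dim=64):
    # Toy bag-of-words hash embedding (illustrative stand-in only).
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def content_reward(source, output):
    # Cosine similarity rewards outputs that preserve the source content.
    return float(embed(source) @ embed(output))

def total_reward(source, output, style_classifier, w=0.5):
    # Combine content preservation with a target-style probability.
    return w * content_reward(source, output) + (1 - w) * style_classifier(output)

# Toy usage with a stand-in style classifier.
r = total_reward("the food was terrible",
                 "the food was wonderful",
                 style_classifier=lambda s: 1.0 if "wonderful" in s else 0.0)
```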
arXiv Detail & Related papers (2020-10-24T04:30:02Z) - A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
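A minimal self-contained sketch of the idea, embedding a variable-size set to a fixed size by aggregating along an entropic optimal transport plan to a reference, is below; the Sinkhorn parameters and sizes are illustrative, and in the paper the reference is learned end-to-end rather than fixed.

```python
# Hedged sketch of optimal-transport feature aggregation against a reference.
import numpy as np

def sinkhorn_plan(X, R, eps=0.1, iters=100):
    """Entropic OT plan between input set X (n, d) and reference R (p, d)."""
    C = ((X[:, None, :] - R[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.exp(-C / (eps * C.max()))    # normalize cost to avoid underflow
    u = np.full(len(X), 1.0 / len(X))   # uniform marginal over inputs
    v = np.full(len(R), 1.0 / len(R))   # uniform marginal over reference
    a, b = u.copy(), v.copy()
    for _ in range(iters):              # Sinkhorn fixed-point iterations
        a = u / (K @ b)
        b = v / (K.T @ a)
    return a[:, None] * K * b[None, :]  # transport plan, shape (n, p)

def ot_embed(X, R):
    # Pool the inputs into a fixed-size (p, d) embedding via the plan.
    P = sinkhorn_plan(X, R)
    return (P / P.sum(0, keepdims=True)).T @ X

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 8))        # variable-size input set
R = rng.standard_normal((4, 8))         # fixed-size (trainable) reference
print(ot_embed(X, R).shape)             # (4, 8) regardless of len(X)
```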
arXiv Detail & Related papers (2020-06-22T08:35:58Z) - Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
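For intuition, one standard way to encode a block-diagonal task-feature structure (our notation, not necessarily the paper's exact regularizer) penalizes the smallest eigenvalues of the Laplacian of the bipartite task-feature graph, which vanish exactly when the graph splits into k groups:

```latex
% Hedged sketch: a bipartite task-feature graph with affinity |W| splits
% into k groups iff its Laplacian has k zero eigenvalues, so penalizing the
% k smallest eigenvalues pushes W toward a block-diagonal pattern.
\[
  \min_{W}\ \sum_{t=1}^{T} \mathcal{L}_t(w_t)
  \;+\; \lambda \sum_{i=1}^{k} \sigma_i(L),
  \qquad
  L = D - \begin{pmatrix} 0 & |W| \\ |W|^{\top} & 0 \end{pmatrix},
\]
% where \sigma_1 \le \cdots \le \sigma_k are the k smallest eigenvalues of L
% and D is the degree matrix of the bipartite affinity matrix.
```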
arXiv Detail & Related papers (2020-04-29T02:32:04Z)