Multi-Task and Transfer Learning for Federated Learning Applications
- URL: http://arxiv.org/abs/2207.08147v1
- Date: Sun, 17 Jul 2022 11:48:11 GMT
- Title: Multi-Task and Transfer Learning for Federated Learning Applications
- Authors: Cihat Keçeci, Mohammad Shaqfeh, Hayat Mbayed, and Erchin Serpedin
- Abstract summary: Federated learning enables applications that benefit from the distributed and private datasets of a large number of potential data-holding clients.
We propose to train a deep neural network model with more generalized layers closer to the input and more personalized layers closer to the output.
We provide simulation results to highlight particular scenarios in which meta-learning-based federated learning proves to be useful.
- Score: 5.224306534441244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables many applications that benefit from the
distributed and private datasets of a large number of potential data-holding clients. However,
different clients usually have their own particular objectives in terms of the
tasks to be learned from the data. Supporting federated learning with
meta-learning tools such as multi-task learning and transfer learning will
therefore help enlarge the set of potential applications of federated learning by
letting clients with different but related tasks share task-agnostic models that
can then be further updated and tailored by each individual client for its particular
task. In a federated multi-task learning problem, the trained deep neural
network model should be fine-tuned for the respective objective of each client
while sharing some parameters for more generalizability. We propose to train a
deep neural network model with more generalized layers closer to the input and
more personalized layers closer to the output. We achieve this by introducing layer
types such as pre-trained, common, task-specific, and personal layers. We
provide simulation results to highlight particular scenarios in which
meta-learning-based federated learning proves to be useful.
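The abstract's layer typing (pre-trained, common, task-specific, personal) can be pictured as a model whose parameter groups are shared at different scopes. Below is a minimal PyTorch sketch of one way such a grouping could be wired up; the class name, layer sizes, and the `shared_state` helper are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn

class FedMultiTaskNet(nn.Module):
    """Illustrative model whose layer groups are shared at different scopes
    (assumed structure; not the authors' implementation)."""

    def __init__(self, in_dim=784, num_classes=10):
        super().__init__()
        # Pre-trained layers: initialized from an existing model and frozen locally.
        self.pretrained = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        for p in self.pretrained.parameters():
            p.requires_grad = False
        # Common layers: aggregated (e.g., FedAvg) across all clients, whatever their task.
        self.common = nn.Sequential(nn.Linear(512, 256), nn.ReLU())
        # Task-specific layers: aggregated only across clients that learn the same task.
        self.task_specific = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        # Personal layers: trained locally and never uploaded to the server.
        self.personal = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.personal(self.task_specific(self.common(self.pretrained(x))))

def shared_state(model: FedMultiTaskNet) -> dict:
    """Parameters a client would upload for aggregation (common + task-specific)."""
    return {k: v for k, v in model.state_dict().items()
            if k.startswith(("common", "task_specific"))}
```

Under such a split, the server would average the `common` weights over all clients, average the `task_specific` weights only over clients working on the same task, and never collect the `personal` weights, leaving each client room to tailor the model to its own objective.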
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful even for classification tasks with little or no overlap in annotations.
We propose a novel approach in which knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- OmniVec: Learning robust representations with cross modal sharing [28.023214572340336]
We present an approach to learn multiple tasks, in multiple modalities, with a unified architecture.
The proposed network is composed of task-specific encoders, a common trunk in the middle, and task-specific prediction heads.
We train the network on all major modalities, e.g., visual, audio, text, and 3D, and report results on 22 diverse and challenging public benchmarks.
arXiv Detail & Related papers (2023-11-07T14:00:09Z)
- YOLOR-Based Multi-Task Learning [12.5920336941241]
Multi-task learning (MTL) aims to learn multiple tasks using a single model and jointly improve all of them assuming generalization and shared semantics.
We propose building on You Only Learn One Representation (YOLOR), a network architecture specifically designed for multitasking.
We find that our method achieves competitive performance on all tasks while maintaining a low parameter count and without any pre-training.
arXiv Detail & Related papers (2023-09-29T01:42:21Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
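The entry above mentions collaboration via online knowledge distillation with a contrastive loss over shared representations. A generic InfoNCE-style loss of that flavor might look like the sketch below; the function name, temperature, and positive-pair construction are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(local_repr: torch.Tensor,
                                  peer_repr: torch.Tensor,
                                  temperature: float = 0.5) -> torch.Tensor:
    """Pull each local embedding toward the peer embedding of the same sample and
    push it away from peer embeddings of other samples (assumed InfoNCE-style form)."""
    z_local = F.normalize(local_repr, dim=1)          # (batch, dim)
    z_peer = F.normalize(peer_repr.detach(), dim=1)   # peers act as distillation targets
    logits = z_local @ z_peer.t() / temperature       # pairwise cosine similarities
    targets = torch.arange(z_local.size(0), device=z_local.device)
    return F.cross_entropy(logits, targets)
```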
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks [21.613436984547917]
We propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg).
FedClassAvg aggregates weights as an agreement on decision boundaries on feature spaces.
We demonstrate it outperforms the current state-of-the-art algorithms on heterogeneous personalized federated learning tasks.
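As a rough illustration of classifier averaging over clients with heterogeneous feature extractors, the sketch below averages only the classifier-head parameters, which all clients are assumed to share in shape; the helper name and input format are hypothetical.

```python
import torch

def average_classifier_heads(client_heads: list[dict]) -> dict:
    """Average only the classifier-head state dicts collected from the clients
    (illustrative stand-in for the aggregation step described above)."""
    keys = client_heads[0].keys()
    return {k: torch.stack([head[k].float() for head in client_heads]).mean(dim=0)
            for k in keys}
```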
arXiv Detail & Related papers (2022-10-25T08:32:08Z)
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tackles scenarios in which the test data does not fully follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task [24.618094251341958]
Multi-task learning aims to improve model performance by transferring and exploiting common knowledge among tasks.
We propose a framework that learns these tasks jointly by leveraging abundant information from a learnt auxiliary big task with sufficiently many classes to cover those of all the individual tasks.
Our experimental results demonstrate its effectiveness in comparison with the state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-07T02:46:47Z)
- Federated Few-Shot Learning with Adversarial Learning [30.905239262227]
We propose a federated few-shot learning framework that learns a classification model able to classify unseen data classes from only a few labeled samples.
We show that our approach outperforms baselines by more than 10% on vision tasks and 5% on language tasks.
arXiv Detail & Related papers (2021-04-01T09:44:57Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates of the low-dimensional local parameters for every update of the shared representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
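A hedged sketch of the alternating scheme this entry describes, with many cheap updates of the small local head for each update of the shared representation; the step counts, SGD optimizer, and function names are illustrative choices rather than the authors' algorithm.

```python
import torch

def client_round(shared_repr, local_head, loader, loss_fn,
                 head_steps=10, repr_steps=1, lr=0.01):
    """One client round: personalize the local head first, then refine the
    shared representation that will be sent back for server-side averaging."""
    head_opt = torch.optim.SGD(local_head.parameters(), lr=lr)
    repr_opt = torch.optim.SGD(shared_repr.parameters(), lr=lr)

    for _ in range(head_steps):                 # many updates of the small head
        for x, y in loader:
            head_opt.zero_grad()
            feats = shared_repr(x).detach()     # keep the representation fixed here
            loss_fn(local_head(feats), y).backward()
            head_opt.step()

    for _ in range(repr_steps):                 # a single pass over the representation
        for x, y in loader:
            repr_opt.zero_grad()
            loss_fn(local_head(shared_repr(x)), y).backward()
            repr_opt.step()

    return shared_repr.state_dict()             # uploaded; the head stays local
```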
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)
- Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736]
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
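As a loose illustration of splitting weights into globally federated and sparse task-specific parts, the layer below adds a hard-thresholded local matrix on top of a shared base; it is a simplified stand-in (bias omitted, naive thresholding), not FedWeIT's actual parameter decomposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedLinear(nn.Module):
    """Linear layer with a globally shared base weight plus a sparse,
    task-specific additive weight (illustrative simplification)."""

    def __init__(self, in_dim: int, out_dim: int, sparsity: float = 0.9):
        super().__init__()
        self.base = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)    # federated
        self.task_adaptive = nn.Parameter(torch.zeros(out_dim, in_dim))  # kept local
        self.sparsity = sparsity

    def sparse_task_weight(self) -> torch.Tensor:
        # Keep only the largest-magnitude task-specific entries (hard threshold).
        w = self.task_adaptive
        k = max(1, int(w.numel() * (1.0 - self.sparsity)))
        threshold = w.abs().flatten().topk(k).values.min()
        return w * (w.abs() >= threshold)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.base + self.sparse_task_weight())
```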
arXiv Detail & Related papers (2020-03-06T13:33:48Z)