Replica Tree-based Federated Learning using Limited Data
- URL: http://arxiv.org/abs/2312.17159v1
- Date: Thu, 28 Dec 2023 17:47:25 GMT
- Title: Replica Tree-based Federated Learning using Limited Data
- Authors: Ramona Ghilea and Islem Rekik
- Abstract summary: In this work, we propose a novel federated learning framework, named RepTreeFL.
At the core of the solution is the concept of a replica, where we replicate each participating client by copying its model architecture and perturbing its local data distribution.
Our approach enables learning from limited data and a small number of clients by aggregating a larger number of models with diverse data distributions.
- Score: 6.572149681197959
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning from limited data has been extensively studied in machine learning,
considering that deep neural networks achieve optimal performance when trained
using a large amount of samples. Although various strategies have been proposed
for centralized training, the topic of federated learning with small datasets
remains largely unexplored. Moreover, in realistic scenarios, such as settings
where medical institutions are involved, the number of participating clients is
also constrained. In this work, we propose a novel federated learning
framework, named RepTreeFL. At the core of the solution is the concept of a
replica, where we replicate each participating client by copying its model
architecture and perturbing its local data distribution. Our approach enables
learning from limited data and a small number of clients by aggregating a
larger number of models with diverse data distributions. Furthermore, we
leverage the hierarchical structure of the client network (both original and
virtual), alongside the model diversity across replicas, and introduce a
diversity-based tree aggregation, where replicas are combined in a tree-like
manner and the aggregation weights are dynamically updated based on the model
discrepancy. We evaluated our method on two tasks and two types of data, graph
generation and image classification (binary and multi-class), with both
homogeneous and heterogeneous model architectures. Experimental results
demonstrate that RepTreeFL is effective and outperforms competing methods in
settings where both data and clients are limited. Our code is available at
https://github.com/basiralab/RepTreeFL.
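The two core mechanisms described above (replicating a client with a perturbed copy of its local data, and a diversity-based tree aggregation whose weights grow with model discrepancy) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: models are flat NumPy parameter vectors, the data perturbation is additive Gaussian noise, the discrepancy is an L2 distance, and the proportional weighting rule is an assumption; the actual RepTreeFL scheme is defined in the paper and the linked repository.

```python
import copy
import numpy as np

def make_replicas(client_model, client_data, n_replicas, noise_std=0.05, seed=0):
    """Replicate a client: copy its model and pair each copy with a
    perturbed version of the local data (additive Gaussian noise here)."""
    rng = np.random.default_rng(seed)
    replicas = []
    for _ in range(n_replicas):
        model = copy.deepcopy(client_model)  # same architecture and weights
        data = client_data + rng.normal(0.0, noise_std, size=client_data.shape)
        replicas.append((model, data))
    return replicas

def diversity_weighted_aggregate(parent_model, child_models):
    """Tree-style aggregation step: combine child models into one update
    for the parent node, weighting each child by its discrepancy from the
    parent so that more diverse models contribute more (illustrative rule)."""
    discrepancies = np.array([np.linalg.norm(c - parent_model) for c in child_models])
    total = discrepancies.sum()
    if total == 0:  # identical models: fall back to uniform weights
        weights = np.full(len(child_models), 1.0 / len(child_models))
    else:
        weights = discrepancies / total
    return sum(w * c for w, c in zip(weights, child_models))
```

Applied recursively from the leaves (replicas) up through each original client to the server, this yields the tree-like aggregation the abstract describes, with weights recomputed at every round from the current model discrepancies.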
Related papers
- FedBone: Towards Large-Scale Federated Multi-Task Learning [13.835972363413884]
In real-world applications, visual and natural language tasks typically require large-scale models to extract high-level abstract features.
Existing HFML methods disregard the impact of gradient conflicts on multi-task optimization.
We propose an innovative framework called FedBone, which enables the construction of large-scale models with better generalization.
arXiv Detail & Related papers (2023-06-30T08:19:38Z)
- Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
In this paper, a prototype-based federated learning framework is proposed, which can achieve better inference performance with only a few changes to the last global iteration of the typical federated learning process.
arXiv Detail & Related papers (2023-03-22T04:06:29Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
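The FL side of this comparison (local training followed by server-side aggregation) can be illustrated with a minimal FedAvg-style weighted average. This is a sketch, not the paper's code: models are represented as flat parameter vectors, and the dataset-size-proportional weighting follows the standard FedAvg rule.

```python
import numpy as np

def fedavg_aggregate(client_models, client_sizes):
    """Server-side FedAvg step: average the clients' parameter vectors,
    weighting each client by the size of its local dataset."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * np.asarray(m) for w, m in zip(weights, client_models))
```

In SL, by contrast, no full model ever leaves the client; the server instead receives cut-layer activations, so an analogous one-line aggregation step does not exist.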
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Federated Learning of Neural ODE Models with Different Iteration Counts [0.9444784653236158]
Federated learning is a distributed machine learning approach in which clients train models locally with their own data and upload them to a server so that their trained results are shared between them without uploading raw data to the server.
In this paper, we utilize Neural ODE based models for federated learning.
We show that our approach can reduce communication size by up to 92.4% compared with a baseline ResNet model using CIFAR-10 dataset.
arXiv Detail & Related papers (2022-08-19T17:57:32Z)
- Architecture Agnostic Federated Learning for Neural Networks [19.813602191888837]
This work introduces a novel Federated Heterogeneous Neural Networks (FedHeNN) framework.
FedHeNN allows each client to build a personalised model without enforcing a common architecture across clients.
The key idea of FedHeNN is to use the instance-level representations obtained from peer clients to guide the simultaneous training on each client.
arXiv Detail & Related papers (2022-02-15T22:16:06Z)
- HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression [53.90578309960526]
Large pre-trained language models (PLMs) have shown overwhelming performances compared with traditional neural network methods.
We propose a hierarchical relational knowledge distillation (HRKD) method to capture both hierarchical and domain relational information.
arXiv Detail & Related papers (2021-10-16T11:23:02Z)
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Performance Optimization for Federated Person Re-identification via Benchmark Analysis [25.9422385039648]
Federated learning is a privacy-preserving machine learning technique that learns a shared model across decentralized clients.
In this work, we implement federated learning to person re-identification (FedReID) and optimize its performance in the real-world scenario.
arXiv Detail & Related papers (2020-08-26T13:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.