Multi-Tier Federated Learning for Vertically Partitioned Data
- URL: http://arxiv.org/abs/2102.03620v1
- Date: Sat, 6 Feb 2021 17:34:14 GMT
- Title: Multi-Tier Federated Learning for Vertically Partitioned Data
- Authors: Anirban Das and Stacy Patterson
- Abstract summary: We consider decentralized model training in tiered communication networks.
Our network model consists of a set of silos, each holding a vertical partition of the data.
- Score: 2.817412580574242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider decentralized model training in tiered communication networks.
Our network model consists of a set of silos, each holding a vertical partition
of the data. Each silo contains a hub and a set of clients, with the silo's
vertical data shard partitioned horizontally across its clients. We propose
Tiered Decentralized Coordinate Descent (TDCD), a communication-efficient
decentralized training algorithm for such two-tiered networks. To reduce
communication overhead, the clients in each silo perform multiple local
gradient steps before sharing updates with their hub. Each hub adjusts its
coordinates by averaging its workers' updates, and then hubs exchange
intermediate updates with one another. We present a theoretical analysis of our
algorithm and show the dependence of the convergence rate on the number of
vertical partitions, the number of local updates, and the number of clients in
each hub. We further validate our approach empirically via simulation-based
experiments using a variety of datasets and both convex and non-convex
objectives.
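Below is a minimal sketch of the training pattern the abstract describes: clients within each silo run several local gradient steps on their sample shard, each hub averages its clients' updates for its coordinate block, and hubs exchange intermediate quantities. It uses a toy least-squares objective; the specific quantity exchanged between hubs (partial predictions), the identical sample shards across silos, and all constants are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem (illustrative only).
n_samples, n_features = 200, 20
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

n_silos = 2            # vertical partitions (feature blocks), one per silo
clients_per_silo = 4   # horizontal partitions (sample shards) inside each silo
local_steps = 5        # local gradient steps between hub communications
lr = 0.05

feat_blocks = np.array_split(np.arange(n_features), n_silos)
sample_shards = np.array_split(np.arange(n_samples), clients_per_silo)

# Each silo (hub) owns the coordinates of w for its feature block.
w_blocks = [np.zeros(len(fb)) for fb in feat_blocks]

for rnd in range(50):
    # Hubs exchange intermediate updates: here, partial predictions X_k w_k,
    # so every silo can form a (stale) full residual for its samples.
    partial_preds = [X[:, fb] @ wb for fb, wb in zip(feat_blocks, w_blocks)]
    residual = sum(partial_preds) - y

    new_blocks = []
    for k, fb in enumerate(feat_blocks):
        client_updates = []
        for shard in sample_shards:
            wk = w_blocks[k].copy()
            # Contribution of the other silos (minus y) on this client's samples,
            # held fixed (stale) during the local steps.
            other = residual[shard] - X[np.ix_(shard, fb)] @ w_blocks[k]
            for _ in range(local_steps):
                r_local = X[np.ix_(shard, fb)] @ wk + other
                grad = X[np.ix_(shard, fb)].T @ r_local / len(shard)
                wk -= lr * grad
            client_updates.append(wk)
        # The hub averages its clients' updates for its coordinate block.
        new_blocks.append(np.mean(client_updates, axis=0))
    w_blocks = new_blocks

w = np.concatenate(w_blocks)  # blocks are contiguous, so ordering matches X's columns
print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))
```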
Related papers
- Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees [18.24213566328972]
Decentralized federated learning (DFL) captures FL settings where both (i) model updates and (ii) model aggregations are carried out by the clients without a central server.
DSpodFL consistently achieves improved training speeds compared with baselines under various system settings.
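As a generic illustration of serverless, sporadic training of the kind described above (not the DSpodFL algorithm itself), the sketch below has clients on a ring graph sporadically take local gradient steps and sporadically average with their neighbours; the topology, probabilities, and toy objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each client holds one scalar parameter and a private data mean (toy setup).
n_clients = 6
data_means = rng.normal(size=n_clients)   # private local optima
theta = np.zeros(n_clients)

# Ring topology: each client gossips with two neighbours; there is no central server.
neighbours = {i: [(i - 1) % n_clients, (i + 1) % n_clients] for i in range(n_clients)}

p_update, p_gossip, lr = 0.7, 0.5, 0.1    # sporadic participation probabilities

for t in range(300):
    new_theta = theta.copy()
    for i in range(n_clients):
        # Sporadic local SGD step on 0.5 * (theta_i - mean_i)^2.
        if rng.random() < p_update:
            new_theta[i] -= lr * (theta[i] - data_means[i])
        # Sporadic aggregation: average with neighbours, performed by the client itself.
        if rng.random() < p_gossip:
            new_theta[i] = np.mean([new_theta[i]] + [theta[j] for j in neighbours[i]])
    theta = new_theta

print("spread across clients :", theta.max() - theta.min())
print("mean of local optima  :", data_means.mean())
print("mean learned parameter:", theta.mean())
```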
arXiv Detail & Related papers (2024-02-05T19:02:19Z) - A Multi-Token Coordinate Descent Method for Semi-Decentralized Vertical
Federated Learning [24.60603310894048]
Communication efficiency is a major challenge in federated learning (FL).
We propose Multi-Token Coordinate Descent (MTCD)
MTCD is a tunable, communication-efficient algorithm for semi-decentralized vertical federated learning setups.
arXiv Detail & Related papers (2023-09-18T17:59:01Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
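Since the entry above centres on each client running a local adaptive optimizer (a heterogeneous variant of AMSGrad) with its own learning rate, here is a small sketch of per-client AMSGrad steps followed by server averaging. The scheduling rule FedLALR actually uses is not reproduced; the per-client learning rates, constants, and toy objective are illustrative assumptions.

```python
import numpy as np

def amsgrad_step(w, grad, state, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad update; `state` holds (m, v, v_hat) for this client."""
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    v_hat = np.maximum(v_hat, v)          # AMSGrad: monotone second-moment estimate
    w = w - lr * m / (np.sqrt(v_hat) + eps)
    return w, (m, v, v_hat)

rng = np.random.default_rng(2)
dim, n_clients, local_steps = 5, 4, 10
targets = rng.normal(size=(n_clients, dim))   # heterogeneous (non-IID) local optima
client_lr = [0.05, 0.02, 0.1, 0.03]           # client-specific learning rates (illustrative)

w_global = np.zeros(dim)
states = [(np.zeros(dim), np.zeros(dim), np.zeros(dim)) for _ in range(n_clients)]

for rnd in range(30):
    local_models = []
    for i in range(n_clients):
        w = w_global.copy()
        for _ in range(local_steps):
            grad = w - targets[i]             # gradient of 0.5 * ||w - target_i||^2
            w, states[i] = amsgrad_step(w, grad, states[i], client_lr[i])
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)  # server averages the local models

print("distance to average target:", np.linalg.norm(w_global - targets.mean(axis=0)))
```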
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
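The sketch below illustrates only the contrastive ingredient mentioned above: pulling one client's representation of a sample toward another client's representation of the same sample and away from representations of other samples (an NT-Xent-style loss). The representations, temperature, and pairing scheme are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def contrastive_loss(z_a, z_b, temperature=0.5):
    """NT-Xent-style loss: row i of z_a should match row i of z_b and no other row.

    z_a, z_b: (batch, dim) representations of the same samples from two clients.
    """
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature                    # pairwise similarities
    # Cross-entropy with the diagonal as the positive pair for each row.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(3)
batch, dim = 8, 16
z_client_a = rng.normal(size=(batch, dim))
# Client B's representations of the *same* samples: correlated with A's plus noise.
z_client_b = z_client_a + 0.1 * rng.normal(size=(batch, dim))
z_unrelated = rng.normal(size=(batch, dim))

print("matched pairs  :", contrastive_loss(z_client_a, z_client_b))
print("unrelated pairs:", contrastive_loss(z_client_a, z_unrelated))
```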
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Over-The-Air Federated Learning under Byzantine Attacks [43.67333971183711]
Federated learning (FL) is a promising solution to enable many AI applications.
FL allows the clients to participate in the training phase, governed by a central server, without sharing their local data.
One of the main challenges of FL is the communication overhead.
We propose a transmission and aggregation framework to reduce the effect of Byzantine attacks.
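The paper's over-the-air transmission and aggregation scheme is not described in the summary above; as a generic stand-in for Byzantine-resilient aggregation, the sketch below compares plain averaging with coordinate-wise median aggregation when some client updates are corrupted. It illustrates the threat model only, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n_honest, n_byzantine = 10, 8, 2
true_grad = rng.normal(size=dim)

# Honest clients send noisy versions of the true gradient; Byzantine clients send garbage.
honest = true_grad + 0.1 * rng.normal(size=(n_honest, dim))
byzantine = 50.0 * rng.normal(size=(n_byzantine, dim))
received = np.vstack([honest, byzantine])

mean_agg = received.mean(axis=0)          # plain averaging: dominated by the attackers
median_agg = np.median(received, axis=0)  # coordinate-wise median: a robust baseline

print("error of mean  :", np.linalg.norm(mean_agg - true_grad))
print("error of median:", np.linalg.norm(median_agg - true_grad))
```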
arXiv Detail & Related papers (2022-05-05T22:09:21Z) - FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling
and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC)
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local model parameter and the global model parameters.
Experimental results and analysis demonstrate that FedDC yields faster convergence and better performance on various image classification tasks.
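A simplified sketch of the drift-tracking idea described above: each client keeps an auxiliary drift variable that accumulates the gap between its local parameters and the global parameters, and the server averages drift-corrected uploads. The exact FedDC objective (its penalty and gradient-correction terms) is not reproduced; the toy objective and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, n_clients, local_steps, lr = 5, 4, 20, 0.1
targets = rng.normal(size=(n_clients, dim))        # non-IID local optima
w_global = np.zeros(dim)
drift = [np.zeros(dim) for _ in range(n_clients)]  # auxiliary local drift variables

for rnd in range(30):
    corrected = []
    for i in range(n_clients):
        w = w_global.copy()
        for _ in range(local_steps):
            w -= lr * (w - targets[i])             # local SGD on 0.5 * ||w - target_i||^2
        # Track the gap between the local and global parameters...
        drift[i] += w - w_global
        # ...and upload a drift-corrected model instead of the raw local model.
        corrected.append(w + drift[i])
    w_global = np.mean(corrected, axis=0)

print("global model:", np.round(w_global, 3))
print("mean target :", np.round(targets.mean(axis=0), 3))
```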
arXiv Detail & Related papers (2022-03-22T14:06:26Z) - Cross-Silo Federated Learning for Multi-Tier Networks with Vertical and Horizontal Data Partitioning [20.410494162333972]
Tiered Decentralized Coordinate Descent (TDCD) is a decentralized training algorithm for two-tiered communication networks.
We present a theoretical analysis of our algorithm and show the dependence of the convergence rate on the number of vertical partitions and the number of local updates.
arXiv Detail & Related papers (2021-08-19T22:01:04Z) - Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
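The framework above alternates many local updates of each client's low-dimensional head with occasional updates of a shared representation. Below is a small linear-model sketch of that alternation (a shared projection matrix plus a per-client head) on synthetic data; the dimensions, step counts, and least-squares objective are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(6)
n_clients, n_per_client, d, k = 5, 40, 10, 2       # k-dimensional shared representation

B_true = rng.normal(size=(d, k))                   # ground-truth shared representation
heads_true = rng.normal(size=(n_clients, k))       # ground-truth client-specific heads
data = []
for i in range(n_clients):
    X = rng.normal(size=(n_per_client, d))
    y = X @ B_true @ heads_true[i] + 0.01 * rng.normal(size=n_per_client)
    data.append((X, y))

B = rng.normal(size=(d, k)) * 0.1                  # learned shared representation
heads = [np.zeros(k) for _ in range(n_clients)]    # unique local heads
lr_head, lr_rep, head_steps = 0.05, 0.01, 20

for rnd in range(100):
    rep_grads = []
    for i, (X, y) in enumerate(data):
        # Many local updates of the low-dimensional head, representation fixed.
        Z = X @ B
        for _ in range(head_steps):
            heads[i] -= lr_head * Z.T @ (Z @ heads[i] - y) / len(y)
        # One gradient with respect to the shared representation, head fixed.
        resid = Z @ heads[i] - y
        rep_grads.append(X.T @ np.outer(resid, heads[i]) / len(y))
    B -= lr_rep * np.mean(rep_grads, axis=0)        # server averages representation updates

loss = np.mean([np.mean((X @ B @ h - y) ** 2) for (X, y), h in zip(data, heads)])
print("average client loss:", loss)
```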
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - Quasi-Global Momentum: Accelerating Decentralized Deep Learning on
Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z) - Decentralized Federated Learning via Mutual Knowledge Transfer [37.5341683644709]
Decentralized federated learning (DFL) is a training paradigm for Internet of Things (IoT) systems in which clients collaborate without relying on a central server.
We propose a mutual knowledge transfer (Def-KT) algorithm where local clients fuse models by transferring their learnt knowledge to each other.
Our experiments on the MNIST, Fashion-MNIST, and CIFAR10 datasets reveal that the proposed Def-KT algorithm significantly outperforms the baseline DFL methods.
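As a generic illustration of mutual knowledge transfer between clients (not the exact Def-KT procedure), the sketch below has two clients exchange soft predictions on a shared batch and take gradient steps pulling their own predictions toward the other's; the linear classifiers, shared batch, and constants are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(7)
n, d, c = 64, 8, 3                        # batch size, features, classes
X = rng.normal(size=(n, d))               # shared batch used for knowledge transfer
W_a = rng.normal(size=(d, c)) * 0.1       # client A's linear classifier
W_b = rng.normal(size=(d, c)) * 0.1       # client B's linear classifier
lr = 0.5

for step in range(200):
    p_a, p_b = softmax(X @ W_a), softmax(X @ W_b)
    # Each client treats the other's soft predictions as a teacher target and takes a
    # cross-entropy gradient step toward them (mutual knowledge transfer).
    grad_a = X.T @ (p_a - p_b) / n
    grad_b = X.T @ (p_b - p_a) / n
    W_a -= lr * grad_a
    W_b -= lr * grad_b

gap = np.abs(softmax(X @ W_a) - softmax(X @ W_b)).mean()
print("mean prediction gap after transfer:", gap)
```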
arXiv Detail & Related papers (2020-12-24T01:43:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.