FjORD: Fair and Accurate Federated Learning under heterogeneous targets
with Ordered Dropout
- URL: http://arxiv.org/abs/2102.13451v2
- Date: Mon, 1 Mar 2021 09:16:03 GMT
- Title: FjORD: Fair and Accurate Federated Learning under heterogeneous targets
with Ordered Dropout
- Authors: Samuel Horvath, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis,
Stylianos I. Venieris and Nicholas D. Lane
- Abstract summary: We introduce Ordered Dropout, a mechanism that achieves an ordered, nested representation of knowledge in Neural Networks.
We employ this technique, along with a self-distillation methodology, in the realm of Federated Learning in a framework called FjORD.
FjORD consistently leads to significant performance gains over state-of-the-art baselines, while maintaining its nested structure.
- Score: 16.250862114257277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has been gaining significant traction across
different ML tasks, ranging from vision to keyboard predictions. In large-scale
deployments, client heterogeneity is a fact, and constitutes a primary problem
for fairness, training performance and accuracy. Although significant efforts
have been made into tackling statistical data heterogeneity, the diversity in
the processing capabilities and network bandwidth of clients, termed as system
heterogeneity, has remained largely unexplored. Current solutions either
disregard a large portion of available devices or set a uniform limit on the
model's capacity, restricted by the least capable participants. In this work,
we introduce Ordered Dropout, a mechanism that achieves an ordered, nested
representation of knowledge in Neural Networks and enables the extraction of
lower footprint submodels without the need of retraining. We further show that
for linear maps our Ordered Dropout is equivalent to SVD. We employ this
technique, along with a self-distillation methodology, in the realm of FL in a
framework called FjORD. FjORD alleviates the problem of client system
heterogeneity by tailoring the model width to the client's capabilities.
Extensive evaluation on both CNNs and RNNs across diverse modalities shows that
FjORD consistently leads to significant performance gains over state-of-the-art
baselines, while maintaining its nested structure.
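Concretely, Ordered Dropout keeps only a left-aligned, contiguous slice of each layer's units, so every narrower submodel is nested inside the wider ones and can be extracted without retraining. Below is a minimal sketch of that idea for a two-layer MLP in plain PyTorch; the layer sizes, the discrete set of width fractions, and the uniform sampling are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of Ordered Dropout (assumed PyTorch-style, for illustration only):
# a width fraction p is sampled from a discrete set and only the first
# ceil(p * width) hidden units are used, so narrower submodels are nested
# prefixes of the wider ones.
import math
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrderedDropoutMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=10,
                 p_values=(0.25, 0.5, 0.75, 1.0)):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        self.hidden = hidden
        self.p_values = p_values

    def forward(self, x, p=None):
        # During training a width fraction is sampled; at deployment a
        # resource-constrained client can fix p to a small value.
        if p is None:
            p = random.choice(self.p_values)
        k = max(1, math.ceil(p * self.hidden))           # number of kept units
        w1, b1 = self.fc1.weight[:k], self.fc1.bias[:k]  # left-most k units
        w2 = self.fc2.weight[:, :k]
        h = F.relu(F.linear(x, w1, b1))
        return F.linear(h, w2, self.fc2.bias)

model = OrderedDropoutMLP()
x = torch.randn(32, 784)
logits_small = model(x, p=0.25)   # low-footprint nested submodel
logits_full = model(x, p=1.0)     # full model
```

At deployment a constrained client simply fixes p to a small value, which is how FjORD tailors the model width to the client's capabilities; for a single linear map, truncating to the top components in this ordered fashion is the setting in which the paper relates Ordered Dropout to the SVD.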
Related papers
- FedPref: Federated Learning Across Heterogeneous Multi-objective Preferences [2.519319150166215]
Federated Learning (FL) is a distributed machine learning strategy developed for settings where training data is owned by distributed devices and cannot be shared.
The application of FL to real-world settings brings additional challenges associated with heterogeneity between participants.
We propose FedPref, the first algorithm designed to facilitate personalised FL in this setting.
arXiv Detail & Related papers (2025-01-23T12:12:59Z)
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Federated-Centric Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z)
- FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning and Logit Distillation [32.305134875959226]
Federated learning (FL) is a privacy-preserving paradigm that enables distributed clients to collaboratively train models with a central server.
We propose FedHPL, a parameter-efficient unified Federated learning framework for Heterogeneous settings.
We show that our framework outperforms state-of-the-art FL approaches, with less overhead and training rounds.
arXiv Detail & Related papers (2024-05-27T15:25:32Z)
- Fed-QSSL: A Framework for Personalized Federated Learning under Bitwidth and Data Heterogeneity [14.313847382199059]
Fed-QSSL is a federated quantization-based self-supervised learning scheme designed to address heterogeneity in FL systems.
Fed-QSSL deploys de-quantization, weighted aggregation and re-quantization, ultimately creating models personalized to both the data distribution and the specific infrastructure of each client's device (a minimal sketch of this pipeline appears after this list).
arXiv Detail & Related papers (2023-12-20T19:11:19Z)
- One-Shot Federated Learning with Classifier-Guided Diffusion Models [44.604485649167216]
One-shot federated learning (OSFL) has gained attention in recent years due to its low communication cost.
In this paper, we explore the novel opportunities that diffusion models bring to OSFL and propose FedCADO.
FedCADO generates data that complies with clients' distributions and subsequently trains the aggregated model on the server.
arXiv Detail & Related papers (2023-11-15T11:11:25Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedIN: Federated Intermediate Layers Learning for Model Heterogeneity [7.781409257429762]
Federated learning (FL) facilitates edge devices to cooperatively train a global shared model while maintaining the training data locally and privately.
In this study, we propose an FL method called Federated Intermediate Layers Learning (FedIN), supporting heterogeneous models without relying on any public dataset.
Experiment results demonstrate the superior performance of FedIN in heterogeneous model environments compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2023-04-03T07:20:43Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose FedFTG, a data-free knowledge distillation method to fine-tune the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Feature Quantization Improves GAN Training [126.02828112121874]
Feature Quantization (FQ) for the discriminator embeds both true and fake data samples into a shared discrete space.
Our method can be easily plugged into existing GAN models, with little computational overhead in training.
arXiv Detail & Related papers (2020-04-05T04:06:50Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
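For the Fed-QSSL entry above, the server-side pipeline it summarizes (de-quantize client updates, aggregate them with weights, re-quantize per client) can be sketched as follows; the uniform quantizer, the bitwidths, and the weighting by local dataset size are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch of a de-quantize -> weighted-aggregate -> re-quantize server
# round, as summarized for Fed-QSSL above (illustrative only).
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a float vector to the given bitwidth."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / (2 ** (bits - 1) - 1) if max_abs > 0 else 1.0
    return np.round(w / scale).astype(np.int32), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

def server_round(client_updates):
    """client_updates: list of (q_weights, scale, bits, n_samples) per client."""
    # 1. De-quantize every client's update back to float.
    recovered = [dequantize(q, s) for q, s, _, _ in client_updates]
    # 2. Weighted aggregation (FedAvg-style, weighted by local dataset size).
    sizes = np.array([n for _, _, _, n in client_updates], dtype=np.float32)
    weights = sizes / sizes.sum()
    aggregated = sum(w * r for w, r in zip(weights, recovered))
    # 3. Re-quantize to each client's own bitwidth before sending back.
    return {i: quantize(aggregated, bits)
            for i, (_, _, bits, _) in enumerate(client_updates)}

# Toy usage: three clients with different bitwidths and dataset sizes.
rng = np.random.default_rng(0)
updates = []
for bits, n in [(4, 100), (8, 400), (16, 250)]:
    local_w = rng.normal(size=10).astype(np.float32)
    q, s = quantize(local_w, bits)
    updates.append((q, s, bits, n))
personalized = server_round(updates)   # client index -> (q_weights, scale)
```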