AdaptCL: Efficient Collaborative Learning with Dynamic and Adaptive
Pruning
- URL: http://arxiv.org/abs/2106.14126v1
- Date: Sun, 27 Jun 2021 02:41:19 GMT
- Title: AdaptCL: Efficient Collaborative Learning with Dynamic and Adaptive
Pruning
- Authors: Guangmeng Zhou, Ke Xu, Qi Li, Yang Liu, Yi Zhao
- Abstract summary: We propose a novel and efficient collaborative learning framework named AdaptCL.
All workers (data holders) achieve approximately the same update time as the fastest worker by being equipped with capability-adapted pruned models.
AdaptCL achieves time savings of more than 41% on average and improves accuracy in a low-heterogeneity environment.
- Score: 16.785573286753742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multi-party collaborative learning, the parameter server sends a global
model to each data holder for local training and then aggregates committed
models globally to achieve privacy protection. However, both the straggler issue
of synchronous collaborative learning and the staleness issue of asynchronous
collaborative learning make collaborative learning inefficient in real-world
heterogeneous environments. We propose a novel and efficient collaborative
learning framework named AdaptCL, which generates an adaptive sub-model
dynamically from the global base model for each data holder, without any prior
information about worker capability. All workers (data holders) achieve
approximately the same update time as the fastest worker by being equipped
with capability-adapted pruned models. Thus, the training process can be
dramatically accelerated. In addition, we tailor an efficient pruning-rate
learning algorithm and pruning approach for AdaptCL. Meanwhile, AdaptCL provides a
mechanism for handling the trade-off between accuracy and time overhead and can
be combined with other techniques to accelerate training further. Empirical
results show that AdaptCL introduces little computing and communication
overhead. AdaptCL achieves time savings of more than 41% on average and
improves accuracy in a low-heterogeneity environment. In a highly heterogeneous
environment, AdaptCL achieves a training speedup of 6.2x with a slight loss of
accuracy.
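
The abstract only describes the mechanism at a high level. As a rough illustration, the Python sketch below shows one way capability-adapted pruning and masked aggregation could be wired together for a single round; the function names (compute_pruning_rate, magnitude_prune, aggregate_masked_updates), the pruning-rate heuristic, and the stand-in for local training are assumptions made for illustration, not the algorithm from the paper.

```python
# Illustrative sketch of capability-adapted pruning; names and heuristics are
# assumptions, not taken from the AdaptCL paper.
import numpy as np

def compute_pruning_rate(update_time, fastest_time, max_rate=0.9):
    """Assumed heuristic: prune in proportion to how much slower a worker is
    than the fastest one, capped at max_rate."""
    slowdown = update_time / fastest_time      # >= 1.0
    rate = 1.0 - 1.0 / slowdown                # 0.0 for the fastest worker
    return min(rate, max_rate)

def magnitude_prune(weights, rate):
    """Zero out the smallest-magnitude weights; return (pruned weights, mask)."""
    flat = np.abs(weights).ravel()
    k = int(rate * flat.size)
    if k == 0:
        mask = np.ones_like(weights, dtype=bool)
    else:
        threshold = np.partition(flat, k - 1)[k - 1]
        mask = np.abs(weights) > threshold
    return weights * mask, mask

def aggregate_masked_updates(global_w, local_ws, masks):
    """Average each coordinate only over the workers that kept it."""
    stacked = np.stack(local_ws)
    mask_stack = np.stack(masks).astype(float)
    counts = mask_stack.sum(axis=0)
    summed = (stacked * mask_stack).sum(axis=0)
    return np.where(counts > 0, summed / np.maximum(counts, 1), global_w)

# One toy round with three workers of different speeds.
rng = np.random.default_rng(0)
global_w = rng.normal(size=(4, 4))
update_times = [1.0, 2.0, 5.0]                 # measured per-worker update times
fastest = min(update_times)

local_ws, masks = [], []
for t in update_times:
    rate = compute_pruning_rate(t, fastest)
    sub_w, mask = magnitude_prune(global_w, rate)
    # Stand-in for local training: a small masked update step.
    sub_w = sub_w - 0.1 * mask * rng.normal(size=global_w.shape)
    local_ws.append(sub_w)
    masks.append(mask)

global_w = aggregate_masked_updates(global_w, local_ws, masks)
print("pruning rates:", [round(compute_pruning_rate(t, fastest), 2) for t in update_times])
```

In this toy round, the fastest worker keeps the full model (pruning rate 0.0) while slower workers receive progressively sparser sub-models, which is the intuition behind equalizing update times across heterogeneous workers.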
Related papers
- Dynamic Allocation Hypernetwork with Adaptive Model Recalibration for Federated Continual Learning [49.508844889242425]
We propose a novel server-side FCL pattern in the medical domain, Dynamic Allocation Hypernetwork with adaptive model recalibration (FedDAH).
FedDAH is designed to facilitate collaborative learning under the distinct and dynamic task streams across clients.
For the biased optimization, we introduce a novel adaptive model recalibration (AMR) to incorporate the candidate changes of historical models into current server updates.
arXiv Detail & Related papers (2025-03-25T00:17:47Z) - Dynamic Allocation Hypernetwork with Adaptive Model Recalibration for FCL [49.508844889242425]
We propose a novel server-side FCL pattern in the medical domain, Dynamic Allocation Hypernetwork with adaptive model recalibration (FedDAH).
For the biased optimization, we introduce a novel adaptive model recalibration (AMR) to incorporate the candidate changes of historical models into current server updates.
Experiments on the AMOS dataset demonstrate the superiority of our FedDAH to other FCL methods on sites with different task streams.
arXiv Detail & Related papers (2025-03-23T13:12:56Z) - CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning [9.161215048625172]
Training a robust collaborative perception model requires collecting sufficient training data that covers all possible collaboration scenarios.
Existing methods, such as domain adaptation, mitigate this issue by exposing the deployment data during the training stage, but they incur a high training cost.
We propose a lightweight framework, CoPEFT, for adapting a trained collaborative perception model to new deployment environments under low-cost conditions.
arXiv Detail & Related papers (2025-02-15T07:33:33Z) - Collaborative and Efficient Personalization with Mixtures of Adaptors [5.195669033269619]
We propose a parameter-efficient framework to tackle multi-task learning problems.
We call our framework Federated Low-Rank Adaptive Learning (FLoRAL)
We show promising experimental results on synthetic datasets and real-world federated multi-task problems.
arXiv Detail & Related papers (2024-10-04T15:11:15Z) - Adaptive Adapter Routing for Long-Tailed Class-Incremental Learning [55.384428765798496]
New data, such as e-commerce platform reviews, often exhibits a long-tailed distribution.
This necessitates continually learning the model on imbalanced data without forgetting.
We introduce AdaPtive Adapter RouTing (APART) as an exemplar-free solution for LTCIL.
arXiv Detail & Related papers (2024-09-11T17:52:00Z) - Efficient Federated Learning Using Dynamic Update and Adaptive Pruning with Momentum on Shared Server Data [59.6985168241067]
Federated Learning (FL) encounters two important problems, i.e., low training efficiency and limited computational resources.
We propose a new FL framework, FedDUMAP, to leverage the shared insensitive data on the server and the distributed data in edge devices.
Our proposed FL model, FedDUMAP, combines the three original techniques and has a significantly better performance compared with baseline approaches.
arXiv Detail & Related papers (2024-08-11T02:59:11Z) - FedAST: Federated Asynchronous Simultaneous Training [27.492821176616815]
Federated Learning (FL) enables devices or clients to collaboratively train machine learning (ML) models without sharing their private data.
Much of the existing work in FL focuses on efficiently learning a model for a single task.
In this paper, we propose simultaneous training of multiple FL models using a common set of datasets.
arXiv Detail & Related papers (2024-06-01T05:14:20Z) - Loop Improvement: An Efficient Approach for Extracting Shared Features from Heterogeneous Data without Central Server [16.249442761713322]
"Loop Improvement" (LI) is a novel method enhancing this separation and feature extraction without necessitating a central server or data interchange among participants.
In personalized federated learning environments, LI consistently outperforms the advanced FedALA algorithm in accuracy across diverse scenarios.
LI's adaptability extends to multi-task learning, streamlining the extraction of common features across tasks and obviating the need for simultaneous training.
arXiv Detail & Related papers (2024-03-21T12:59:24Z) - Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z) - Real-time End-to-End Federated Learning: An Automotive Case Study [16.79939549201032]
We introduce an approach to real-time end-to-end Federated Learning combined with a novel asynchronous model aggregation protocol.
Our results show that asynchronous Federated Learning can significantly improve the prediction performance of local edge models and reach the same accuracy level as the centralized machine learning method.
arXiv Detail & Related papers (2021-03-22T14:16:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of this list (including all information) is not guaranteed, and this site is not responsible for any consequences of its use.