Communication-Efficient Federated Learning Based on Explanation-Guided Pruning for Remote Sensing Image Classification
- URL: http://arxiv.org/abs/2501.11493v1
- Date: Mon, 20 Jan 2025 13:59:41 GMT
- Title: Communication-Efficient Federated Learning Based on Explanation-Guided Pruning for Remote Sensing Image Classification
- Authors: Jonas Klotz, Barış Büyüktaş, Begüm Demir
- Abstract summary: We introduce an explanation-guided pruning strategy for communication-efficient Federated Learning (FL).
Our strategy effectively reduces the number of shared model updates while increasing the generalization ability of the global model.
The code of this work will be publicly available at https://git.tu-berlin.de/rsim/FL-LRP.
- Abstract: Federated learning (FL) is a decentralized machine learning paradigm, where multiple clients collaboratively train a global model by exchanging only model updates with the central server without sharing the local data of clients. Due to the large volume of model updates required to be transmitted between clients and the central server, most FL systems are associated with high transfer costs (i.e., communication overhead). This issue is more critical for operational applications in remote sensing (RS), especially when large-scale RS data is processed and analyzed through FL systems with restricted communication bandwidth. To address this issue, we introduce an explanation-guided pruning strategy for communication-efficient FL in the context of RS image classification. Our pruning strategy is defined based on the layerwise relevance propagation (LRP) driven explanations to: 1) efficiently and effectively identify the most relevant and informative model parameters (to be exchanged between clients and the central server); and 2) eliminate the non-informative ones to minimize the volume of model updates. The experimental results on the BigEarthNet-S2 dataset demonstrate that our strategy effectively reduces the number of shared model updates, while increasing the generalization ability of the global model. The code of this work will be publicly available at https://git.tu-berlin.de/rsim/FL-LRP
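The pruning idea above lends itself to a short illustration. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it approximates LRP relevance with |weight × gradient| (a crude proxy for full layerwise relevance propagation) and keeps only the top `keep_ratio` fraction of each update tensor before upload; `prune_update`, `old_state`, and `keep_ratio` are illustrative names.

```python
# Sketch of explanation-guided pruning of a local FL update (assumptions:
# relevance ~ |weight * gradient| as a stand-in for full LRP, fixed keep ratio).
import torch

def prune_update(model, old_state, keep_ratio=0.3):
    """Return a sparsified local update keeping only the most relevant entries."""
    update = {}
    for name, param in model.named_parameters():
        delta = param.detach() - old_state[name]         # local model update
        if param.grad is None:                           # no relevance signal: send as-is
            update[name] = delta
            continue
        relevance = (param.detach() * param.grad).abs()  # cheap LRP-style proxy
        n = relevance.numel()
        k = max(1, int(keep_ratio * n))
        threshold = relevance.flatten().kthvalue(n - k + 1).values
        mask = relevance >= threshold                    # top-k most relevant entries
        update[name] = delta * mask                      # zero out non-informative ones
    return update
```

In a full FL round, only the nonzero entries and their indices would be transmitted, and the server would aggregate the received sparse updates (e.g., FedAvg restricted to the shared coordinates).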
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
To optimize the pruning process itself, only thresholds are communicated between a server and clients instead of parameters.
Global thresholds are used to update model parameters by extracting aggregated parameter importance (see the thresholding sketch after this list).
arXiv Detail & Related papers (2024-06-01T13:10:35Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data scattered over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., with a 6.64% improvement over the top-performing method at less than 15% of the communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Communication and Storage Efficient Federated Split Learning [19.369076939064904]
Federated Split Learning preserves the parallel model training principle of FL.
The server has to maintain a separate model for every client, resulting in significant computation and storage requirements.
This paper proposes a communication and storage efficient federated and split learning strategy.
arXiv Detail & Related papers (2023-02-11T04:44:29Z) - FedNet2Net: Saving Communication and Computations in Federated Learning with Model Growing [0.0]
Federated learning (FL) is a recently developed area of machine learning.
In this paper, a novel scheme based on the notion of "model growing" is proposed.
The proposed approach is tested extensively on three standard benchmarks and is shown to achieve substantial reduction in communication and client computation.
arXiv Detail & Related papers (2022-07-19T21:54:53Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Communication-Efficient Online Federated Learning Framework for Nonlinear Regression [5.67468104295976]
This paper presents a partial-sharing-based online federated learning framework (PSO-Fed).
PSO-Fed enables clients to update their local models using continuous streaming data and share only portions of those updated models with the server.
Experimental results show that PSO-Fed can achieve competitive performance with a significantly lower communication overhead than Online-Fed (see the partial-sharing sketch after this list).
arXiv Detail & Related papers (2021-10-13T08:11:34Z) - FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication-efficient federated learning method based on knowledge distillation.
arXiv Detail & Related papers (2021-08-30T15:39:54Z) - RC-SSFL: Towards Robust and Communication-efficient Semi-supervised Federated Learning System [25.84191221776459]
Federated Learning (FL) is an emerging decentralized artificial intelligence paradigm.
Current systems rely heavily on a strong assumption: all clients have a wealth of ground truth labeled data.
We present a practical Robust and Communication-efficient Semi-supervised FL (RC-SSFL) system design.
arXiv Detail & Related papers (2020-12-08T14:02:56Z) - Communication-Efficient Federated Learning via Optimal Client Sampling [20.757477553095637]
Federated learning (FL) ameliorates privacy concerns in settings where a central server coordinates learning from data distributed across many clients.
We propose a novel, simple and efficient way of updating the central model in communication-constrained settings.
We test this policy on a synthetic dataset for logistic regression and two FL benchmarks, namely, a classification task on EMNIST and a realistic language modeling task.
arXiv Detail & Related papers (2020-07-30T02:58:00Z)
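As promised in the SpaFL entry above, here is a rough sketch of exchanging per-layer thresholds instead of parameters. Magnitude-based importance, simple averaging, and all names here are assumptions for illustration, not the paper's exact design.

```python
# SpaFL-style sketch: only per-layer floats travel between clients and server
# (assumptions: importance ~ |weight|, thresholds aggregated by plain averaging).
import torch

def aggregate_thresholds(client_thresholds):
    """Server: average per-layer pruning thresholds (floats, not parameter tensors)."""
    n = len(client_thresholds)
    layers = client_thresholds[0].keys()
    return {layer: sum(t[layer] for t in client_thresholds) / n for layer in layers}

def apply_global_thresholds(model, global_thresholds):
    """Client: mask parameters whose importance falls below the global threshold."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            layer = name.rsplit(".", 1)[0]              # e.g. "fc1.weight" -> "fc1"
            if layer in global_thresholds:
                importance = param.abs()                # magnitude as importance proxy
                mask = importance >= global_thresholds[layer]
                param.mul_(mask.to(param.dtype))        # prune below-threshold weights
```

Since only one scalar per layer crosses the network, the per-round payload is independent of model width, which is the source of the communication savings claimed in the summary.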
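And for the PSO-Fed entry, an illustrative sketch of partial sharing, where each round the client uploads only a rotating block of model coordinates. The sharing fraction and the rotation schedule are assumed for illustration, not taken from the paper.

```python
# PSO-Fed-style sketch: upload a rotating 25% block of coordinates per round.
import numpy as np

def partial_share(weights, round_idx, share_fraction=0.25):
    """Pick a deterministic, rotating block of coordinates to upload this round."""
    n = weights.size
    block = max(1, int(share_fraction * n))
    start = (round_idx * block) % n
    idx = (np.arange(block) + start) % n     # rotate through all coordinates over rounds
    return idx, weights[idx]                 # indices plus the shared entries

# Example: a 4-round cycle covers a 100-dim model at 25% per round.
w = np.random.randn(100)
for r in range(4):
    idx, shared = partial_share(w, r)
    # the client would transmit (idx, shared) instead of the full vector
```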