Model Pruning Enables Localized and Efficient Federated Learning for
Yield Forecasting and Data Sharing
- URL: http://arxiv.org/abs/2304.09876v1
- Date: Wed, 19 Apr 2023 17:53:43 GMT
- Title: Model Pruning Enables Localized and Efficient Federated Learning for
Yield Forecasting and Data Sharing
- Authors: Andy Li, Milan Markovic, Peter Edwards and Georgios Leontidis
- Abstract summary: Federated Learning (FL) presents a decentralized approach to model training in the agri-food sector.
This paper proposes a new technical solution that utilizes network pruning on client models and aggregates the pruned models.
We experiment with a soybean yield forecasting dataset and find that this approach can improve inference performance by 15.5% to 20% compared to FedAvg.
- Score: 6.4742178124596625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) presents a decentralized approach to model training
in the agri-food sector and offers the potential for improved machine learning
performance, while ensuring the safety and privacy of individual farms or data
silos. However, the conventional FL approach has two major limitations. First,
the heterogeneous data on individual silos can cause the global model to
perform well for some clients but not all, as the update direction on some
clients may hinder others after aggregation. Second, it is inefficient with
respect to the communication costs incurred during FL and the large sizes of
the models themselves. This paper proposes a new technical solution that
utilizes network pruning on client models and aggregates the pruned models.
This method enables local models to be tailored to their respective data
distributions and mitigates the data heterogeneity present in agri-food data.
Moreover, it allows for more compact models that require less bandwidth during
transmission. We experiment with a soybean yield forecasting dataset and find
that this approach can improve inference performance by 15.5% to 20% compared
to FedAvg, while reducing local model sizes by up to 84% and the data volume
communicated between the clients and the server by 57.1% to 64.7%.
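The paper itself ships no code, but a minimal sketch of the core idea, magnitude-pruning each client's weights and having the server average each weight only over the clients that kept it, might look as follows. The function names and the masked-averaging rule are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights; returns a binary mask."""
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

def aggregate_pruned(client_weights, client_masks):
    """Average each weight only over the clients that retained it.

    Illustrative assumption: positions pruned by every client stay zero.
    """
    stacked_w = np.stack(client_weights)
    stacked_m = np.stack(client_masks)
    kept = stacked_m.sum(axis=0)            # how many clients kept each weight
    total = (stacked_w * stacked_m).sum(axis=0)
    return np.divide(total, kept, out=np.zeros_like(total), where=kept > 0)

# Toy round: three clients prune 84% of a layer, the server aggregates.
rng = np.random.default_rng(0)
clients = [rng.normal(size=(8, 8)) for _ in range(3)]
masks = [magnitude_prune(w, sparsity=0.84) for w in clients]
global_layer = aggregate_pruned([w * m for w, m in zip(clients, masks)], masks)
```

Averaging only over the clients that retained a weight lets each local model keep its own sparsity pattern, which is how pruning can both personalize the model and shrink what is transmitted.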
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - The Effects of Data Imbalance Under a Federated Learning Approach for
- The Effects of Data Imbalance Under a Federated Learning Approach for Credit Risk Forecasting [0.0]
Credit risk forecasting plays a crucial role for commercial banks and other financial institutions in granting loans to customers.
Traditional machine learning methods require the sharing of sensitive client information with an external server to build a global model.
A newly developed privacy-preserving distributed machine learning technique known as Federated Learning (FL) allows the training of a global model without the necessity of accessing private local data directly.
arXiv Detail & Related papers (2024-01-14T09:15:10Z) - Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T12:21:34Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - FedDM: Iterative Distribution Matching for Communication-Efficient
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z) - Data Selection for Efficient Model Update in Federated Learning [0.07614628596146598]
- Data Selection for Efficient Model Update in Federated Learning [0.07614628596146598]
We propose to reduce the amount of local data that is needed to train a global model.
We do this by splitting the model into a lower part for generic feature extraction and an upper part that is more sensitive to the characteristics of the local data.
Our experiments show that less than 1% of the local data can transfer the characteristics of the client data to the global model.
arXiv Detail & Related papers (2021-11-05T14:07:06Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - FedProf: Optimizing Federated Learning with Dynamic Data Profiling [9.74942069718191]
Federated Learning (FL) has shown great potential as a privacy-preserving solution to learning from decentralized data.
A large proportion of clients may possess only low-quality data that is biased, noisy, or even irrelevant.
We propose a novel approach to optimizing FL under such circumstances without breaching data privacy.
arXiv Detail & Related papers (2021-02-02T20:10:14Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.