Plankton-FL: Exploration of Federated Learning for Privacy-Preserving
Training of Deep Neural Networks for Phytoplankton Classification
- URL: http://arxiv.org/abs/2212.08990v1
- Date: Sun, 18 Dec 2022 02:11:03 GMT
- Title: Plankton-FL: Exploration of Federated Learning for Privacy-Preserving
Training of Deep Neural Networks for Phytoplankton Classification
- Authors: Daniel Zhang, Vikram Voleti, Alexander Wong and Jason Deglint
- Abstract summary: In this study, we explore the feasibility of leveraging federated learning for privacy-preserving training of deep neural networks for phytoplankton classification.
We simulate two different federated learning frameworks, federated learning (FL) and mutually exclusive FL (ME-FL).
Experimental results from this study demonstrate the feasibility and potential of federated learning for phytoplankton monitoring.
- Score: 81.04987357598802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating high-performance generalizable deep neural networks for
phytoplankton monitoring requires utilizing large-scale data coming from
diverse global water sources. A major challenge to training such networks lies
in data privacy, where data collected at different facilities are often
restricted from being transferred to a centralized location. A promising
approach to overcome this challenge is federated learning, where training is
done at site level on local data, and only the model parameters are exchanged
over the network to generate a global model. In this study, we explore the
feasibility of leveraging federated learning for privacy-preserving training of
deep neural networks for phytoplankton classification. More specifically, we
simulate two different federated learning frameworks, federated learning (FL)
and mutually exclusive FL (ME-FL), and compare their performance to a
traditional centralized learning (CL) framework. Experimental results from this
study demonstrate the feasibility and potential of federated learning for
phytoplankton monitoring.
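To make the protocol concrete, below is a minimal FedAvg-style sketch of the setup the abstract describes: each site trains on its own local data, and only the model parameters travel to the server for averaging. The model (a logistic regression), the synthetic data, and the client count are placeholder assumptions for illustration, not the Plankton-FL configuration from the paper.

```python
# Minimal FedAvg-style sketch: local training at each "site", only
# parameters are exchanged with the server. Illustrative only -- the
# model, data, and client count are placeholders, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_features, n_rounds, local_steps, lr = 3, 10, 20, 5, 0.5

# Synthetic per-site data that never leaves the client.
true_w = rng.normal(size=n_features)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(200, n_features))
    y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)
    clients.append((X, y))

def local_update(w, X, y):
    """Run a few gradient steps of logistic regression on local data."""
    for _ in range(local_steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

global_w = np.zeros(n_features)
for rnd in range(n_rounds):
    # Each site trains locally; only the resulting weights are sent back.
    local_ws = [local_update(global_w.copy(), X, y) for X, y in clients]
    # Server aggregates by (here, unweighted) averaging -- FedAvg.
    global_w = np.mean(local_ws, axis=0)

acc = np.mean([
    ((1 / (1 + np.exp(-(X @ global_w))) > 0.5) == y).mean()
    for X, y in clients
])
print(f"global model accuracy across sites: {acc:.3f}")
```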
Related papers
- Initialisation and Network Effects in Decentralised Federated Learning [1.5961625979922607]
Decentralised federated learning enables collaborative training of individual machine learning models on a distributed network of communicating devices.
This approach avoids central coordination, enhances data privacy and eliminates the risk of a single point of failure.
We propose a strategy for uncoordinated initialisation of the artificial neural networks based on the distribution of eigenvector centralities of the underlying communication network.
arXiv Detail & Related papers (2024-03-23T14:24:36Z)
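A minimal sketch of the initialisation idea in the entry above, assuming eigenvector centrality is computed by power iteration and mapped to a per-device init scale. The centrality-to-scale mapping below is a hypothetical choice for illustration; the paper's actual rule may differ.

```python
# Sketch of centrality-aware initialisation for decentralised FL.
# The mapping from eigenvector centrality to an init scale is a
# hypothetical stand-in, not the paper's exact rule.
import numpy as np

rng = np.random.default_rng(1)

# Communication network among 5 devices (symmetric adjacency matrix).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

def eigenvector_centrality(adj, iters=100):
    """Power iteration: centrality is the leading eigenvector of adj."""
    x = np.ones(adj.shape[0])
    for _ in range(iters):
        x = adj @ x
        x = x / np.linalg.norm(x)
    return x

centrality = eigenvector_centrality(A)

# Give each device an init whose scale depends on its centrality, so
# highly connected devices start with more conservative weights.
n_params = 32
inits = {
    node: rng.normal(scale=1.0 / (1.0 + c), size=n_params)
    for node, c in enumerate(centrality)
}
print({n: float(np.std(w)) for n, w in inits.items()})
```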
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
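A hedged sketch of what prompt-based exchange could look like for the entry above: a frozen shared backbone, per-client trainable prompt vectors, and a server that averages only the prompts. The loss, data, and aggregation rule are illustrative assumptions, not the paper's protocol.

```python
# Hypothetical sketch of FL via tunable soft prompts: the backbone is
# frozen and shared, each client trains only a small prompt vector, and
# the server aggregates prompts instead of model weights.
import numpy as np

rng = np.random.default_rng(2)
d_in, d_out, n_clients = 16, 4, 3

W = rng.normal(size=(d_out, d_in))   # frozen shared backbone
global_prompt = np.zeros(d_in)       # the only exchanged parameters

def client_update(prompt, n_steps=10, lr=0.01):
    """Train the prompt on private data; the backbone W never changes."""
    X = rng.normal(size=(50, d_in))
    Y = rng.normal(size=(50, d_out))
    for _ in range(n_steps):
        pred = (X + prompt) @ W.T
        grad = ((pred - Y) @ W).mean(axis=0)  # d(mse)/d(prompt)
        prompt = prompt - lr * grad
    return prompt

for rnd in range(5):
    prompts = [client_update(global_prompt.copy()) for _ in range(n_clients)]
    global_prompt = np.mean(prompts, axis=0)  # server averages prompts only

print("prompt norm after training:", np.linalg.norm(global_prompt))
```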
- An effective theory of collective deep learning [1.3812010983144802]
We introduce a minimal model that condenses several recent decentralized algorithms.
We derive an effective theory for linear networks to show that the coarse-grained behavior of our system is equivalent to a deformed Ginzburg-Landau model.
We validate the theory in coupled ensembles of realistic neural networks trained on the MNIST dataset.
arXiv Detail & Related papers (2023-10-19T14:58:20Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
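A minimal sketch of similarity-weighted aggregation in the spirit of the PFL-GAN entry above. The cosine-over-parameters similarity and softmax weighting are stand-ins for illustration; PFL-GAN itself derives client similarity through its GAN sharing strategy.

```python
# Sketch of similarity-weighted, personalized aggregation: estimate
# pairwise client similarity, then build one personalized model per
# client as a similarity-weighted average of all client models.
import numpy as np

rng = np.random.default_rng(3)
n_clients, n_params = 4, 64
client_models = rng.normal(size=(n_clients, n_params))

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Pairwise similarity matrix, row-normalized into aggregation weights.
S = np.array([[cosine(a, b) for b in client_models] for a in client_models])
W = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)  # softmax per row

# Personalized model for each client: weighted blend of everyone's model.
personalized = W @ client_models
print(personalized.shape)  # (4, 64): one personalized model per client
```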
- FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms competing methods with only one round of communication and limited training samples, and even achieves performance comparable to methods that use multiple communication rounds.
Since no data or deep models are shared across clients, privacy is well protected and model security is guaranteed, with no risk of model inversion attacks.
arXiv Detail & Related papers (2023-02-24T14:27:41Z)
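One way a single communication round can suffice, loosely inspired by the FedDBL entry above: if each client fits a closed-form head (here, ridge regression) on features from a shared frozen extractor, one upload is enough. This is an assumption-laden sketch; the actual deep-broad learning construction is more elaborate.

```python
# One-round-communication sketch: each client fits a lightweight
# closed-form head on frozen-backbone features and uploads it once;
# the server averages. Purely illustrative, not the FedDBL algorithm.
import numpy as np

rng = np.random.default_rng(4)
n_clients, d_feat, n_classes, lam = 3, 32, 5, 1.0

def local_head(n=100):
    """Closed-form ridge head on local features (one shot, no iterations)."""
    F = rng.normal(size=(n, d_feat))                         # local features
    Y = np.eye(n_classes)[rng.integers(n_classes, size=n)]   # one-hot labels
    return np.linalg.solve(F.T @ F + lam * np.eye(d_feat), F.T @ Y)

# Single communication round: every client uploads its head exactly once.
heads = [local_head() for _ in range(n_clients)]
global_head = np.mean(heads, axis=0)
print(global_head.shape)  # (32, 5)
```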
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
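A simplified sketch of the one-way offline distillation described in the entry above: clients publish only their predictions on an unlabeled public set, and the server distills the ensemble into a central model. The attention-distillation component is omitted; plain logit distillation stands in for it here.

```python
# Sketch of one-way offline distillation on public data: local models
# never share weights, only predictions on an unlabeled public set.
import numpy as np

rng = np.random.default_rng(5)
n_public, d, n_classes, lr = 200, 16, 3, 0.1

X_pub = rng.normal(size=(n_public, d))                   # unlabeled public data
local_models = [rng.normal(size=(d, n_classes)) for _ in range(4)]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Ensemble soft labels: the only information that leaves the clients.
teacher = np.mean([softmax(X_pub @ W) for W in local_models], axis=0)

# Server-side student fits the soft labels (cross-entropy gradient steps).
student = np.zeros((d, n_classes))
for _ in range(200):
    grad = X_pub.T @ (softmax(X_pub @ student) - teacher) / n_public
    student -= lr * grad
print("distillation gap:", np.abs(softmax(X_pub @ student) - teacher).mean())
```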
- On the Importance and Applicability of Pre-Training for Federated Learning [28.238484580662785]
We conduct a systematic study to explore pre-training for federated learning.
We find that pre-training can not only improve FL but also close its accuracy gap to the centralized learning counterpart.
We conclude our paper with an attempt to understand the effect of pre-training on FL.
arXiv Detail & Related papers (2022-06-23T06:02:33Z)
- Supernet Training for Federated Image Classification under System Heterogeneity [15.2292571922932]
In this work, we propose a novel framework to consider both scenarios, namely Federation of Supernet Training (FedSup).
It is inspired by how averaging parameters in the model aggregation stage of Federated Learning (FL) is similar to weight-sharing in supernet training.
Under our framework, we present an efficient algorithm (E-FedSup) by sending the sub-model to clients in the broadcast stage for reducing communication costs and training overhead.
arXiv Detail & Related papers (2022-06-03T02:21:01Z)
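A rough sketch of the sub-model broadcast idea from the FedSup entry above: the server slices a supernet by width, sends each client only its slice, and averages every weight over the clients that trained it. The widths and the slicing rule are assumptions for illustration only.

```python
# Sketch of supernet-style federated training: each client receives a
# width-sliced sub-model, and the server averages each weight over the
# clients that actually received it (weight sharing across widths).
import numpy as np

rng = np.random.default_rng(6)
d_in, d_max = 8, 16                       # supernet hidden width = 16
supernet = rng.normal(size=(d_max, d_in))
client_widths = [4, 8, 16]                # heterogeneous client capacities

updates, counts = np.zeros_like(supernet), np.zeros_like(supernet)
for width in client_widths:
    sub = supernet[:width].copy()              # broadcast only a sub-model
    sub += 0.01 * rng.normal(size=sub.shape)   # stand-in for local training
    updates[:width] += sub
    counts[:width] += 1

# Average each row over the clients that trained it.
supernet = np.where(counts > 0, updates / np.maximum(counts, 1), supernet)
print(supernet.shape)
```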
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
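A control-flow sketch of the server-side data-free fine-tuning described in the FedFTG entry above: pseudo-inputs let the global model distill from the ensemble of client models without touching client data. Random noise stands in here for the paper's trained generator, purely to show the flow.

```python
# Sketch of server-side data-free fine-tuning: after aggregation, the
# server distills the client ensemble into the global model using
# generator-produced pseudo-data (random noise as a generator stand-in).
import numpy as np

rng = np.random.default_rng(7)
d, n_classes, lr = 16, 4, 0.1

client_models = [rng.normal(size=(d, n_classes)) for _ in range(5)]
global_model = np.mean(client_models, axis=0)   # FedAvg starting point

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(100):
    X_fake = rng.normal(size=(64, d))           # generator stand-in
    # Ensemble of client models acts as the teacher on pseudo-data.
    teacher = np.mean([softmax(X_fake @ W) for W in client_models], axis=0)
    grad = X_fake.T @ (softmax(X_fake @ global_model) - teacher) / 64
    global_model -= lr * grad                   # distillation step
print("fine-tuned global model:", global_model.shape)
```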
- Segmented Federated Learning for Adaptive Intrusion Detection System [0.6445605125467573]
Cyberattacks cause organizations great financial and reputational harm.
Current network intrusion detection systems (NIDS) seem to be insufficient.
We propose a Segmented-Federated Learning (Segmented-FL) scheme for a more efficient NIDS.
arXiv Detail & Related papers (2021-07-02T07:47:05Z)