FedSA: Accelerating Intrusion Detection in Collaborative Environments
with Federated Simulated Annealing
- URL: http://arxiv.org/abs/2205.11519v1
- Date: Mon, 23 May 2022 14:27:56 GMT
- Title: FedSA: Accelerating Intrusion Detection in Collaborative Environments
with Federated Simulated Annealing
- Authors: Helio N. Cunha Neto, Ivana Dusparic, Diogo M. F. Mattos, and Natalia
C. Fernandes
- Abstract summary: Federated learning emerges as a solution for collaborative training of an Intrusion Detection System (IDS).
This paper proposes the Federated Simulated Annealing (FedSA) metaheuristic to select the hyperparameters and a subset of participants for each aggregation round in federated learning.
The proposal requires up to 50% fewer aggregation rounds than the conventional aggregation approach to achieve approximately 97% accuracy in attack detection.
- Score: 2.7011265453906983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast identification of new network attack patterns is crucial for improving
network security. Nevertheless, identifying an ongoing attack in a
heterogeneous network is a non-trivial task. Federated learning emerges as a
solution to collaborative training for an Intrusion Detection System (IDS). The
federated learning-based IDS trains a global model using local machine learning
models provided by federated participants without sharing local data. However,
optimization challenges are intrinsic to federated learning. This paper
proposes the Federated Simulated Annealing (FedSA) metaheuristic to select the
hyperparameters and a subset of participants for each aggregation round in
federated learning. FedSA optimizes hyperparameters linked to the global model
convergence. The proposal reduces aggregation rounds and speeds up convergence.
Thus, FedSA accelerates learning extraction from local models, requiring fewer
IDS updates. The proposal assessment shows that the FedSA global model
converges in less than ten communication rounds. The proposal requires up to
50% fewer aggregation rounds than the conventional aggregation approach to
achieve approximately 97% accuracy in attack detection.
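The abstract describes FedSA as a simulated-annealing search over the hyperparameters and the participant subset used in each aggregation round. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the `evaluate` callback, the geometric cooling schedule, and the specific perturbation moves (jittering the learning rate, swapping one participant) are all assumptions chosen for brevity.

```python
import math
import random

def federated_simulated_annealing(
    clients,           # list of client ids available for selection
    evaluate,          # callback: (lr, subset) -> global-model loss after one round
    lr_init=0.1,
    subset_size=3,
    rounds=10,
    temp_init=1.0,
    cooling=0.9,
):
    """Toy SA-driven search over the learning rate and the participant
    subset used per federated aggregation round (illustrative only)."""
    lr = lr_init
    subset = random.sample(clients, subset_size)
    loss = evaluate(lr, subset)
    best_lr, best_subset, best_loss = lr, list(subset), loss
    temp = temp_init
    for _ in range(rounds):
        # Perturb the current solution: jitter the learning rate and
        # swap one participant for a client outside the current subset.
        cand_lr = max(1e-4, lr * random.uniform(0.5, 1.5))
        cand_subset = list(subset)
        outside = [c for c in clients if c not in cand_subset]
        if outside:
            cand_subset[random.randrange(subset_size)] = random.choice(outside)
        cand_loss = evaluate(cand_lr, cand_subset)
        # Metropolis acceptance: always accept improvements; accept a
        # worse candidate with probability exp(-delta / temp).
        delta = cand_loss - loss
        if delta < 0 or random.random() < math.exp(-delta / temp):
            lr, subset, loss = cand_lr, cand_subset, cand_loss
        if loss < best_loss:
            best_lr, best_subset, best_loss = lr, list(subset), loss
        temp *= cooling  # geometric cooling schedule
    return best_lr, best_subset, best_loss
```

As the temperature decays, the search shifts from exploring hyperparameter/participant combinations to exploiting the best configuration found, which is the mechanism by which an SA-style selector can cut the number of aggregation rounds needed for convergence.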
Related papers
- FedFa: A Fully Asynchronous Training Paradigm for Federated Learning [14.4313600357833]
Federated learning is an efficient decentralized training paradigm for scaling the machine learning model training on a large number of devices.
Recent state-of-the-art solutions propose using semi-asynchronous approaches to mitigate the waiting time cost with guaranteed convergence.
We propose a full asynchronous training paradigm, called FedFa, which can guarantee model convergence and eliminate the waiting time completely.
arXiv Detail & Related papers (2024-04-17T02:46:59Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation [7.052566906745796]
FedLPA is a layer-wise posterior aggregation method for federated learning.
We show that FedLPA significantly improves learning performance over state-of-the-art methods across several metrics.
arXiv Detail & Related papers (2023-09-30T10:51:27Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Magnitude Matters: Fixing SIGNSGD Through Magnitude-Aware Sparsification in the Presence of Data Heterogeneity [60.791736094073]
Communication overhead has become one of the major bottlenecks in the distributed training of deep neural networks.
We propose a magnitude-driven sparsification scheme, which addresses the non-convergence issue of SIGNSGD.
The proposed scheme is validated through experiments on Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets.
arXiv Detail & Related papers (2023-02-19T17:42:35Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Speeding up Heterogeneous Federated Learning with Sequentially Trained Superclients [19.496278017418113]
Federated Learning (FL) allows training machine learning models in privacy-constrained scenarios by enabling the cooperation of edge devices without requiring local data sharing.
This approach raises several challenges due to the different statistical distributions of the local datasets and the clients' computational heterogeneity.
We propose FedSeq, a novel framework leveraging the sequential training of subgroups of heterogeneous clients, i.e. superclients, to emulate the centralized paradigm in a privacy-compliant way.
arXiv Detail & Related papers (2022-01-26T12:33:23Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- FedSAE: A Novel Self-Adaptive Federated Learning Framework in Heterogeneous Systems [14.242716751043533]
Federated Learning (FL) is a distributed machine learning paradigm that allows thousands of edge devices to train models locally without uploading data centrally to the server.
We introduce a novel self-adaptive federated framework FedSAE which adjusts the training task of devices automatically and selects participants actively to alleviate the performance degradation.
In our framework, the server evaluates the training value of devices based on their training loss, then selects the clients with larger value for the global model to reduce communication overhead.
arXiv Detail & Related papers (2021-04-15T15:14:11Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Free-rider Attacks on Model Aggregation in Federated Learning [10.312968200748116]
We introduce the first theoretical and experimental analysis of free-rider attacks on federated learning schemes based on iterative parameter aggregation.
We provide formal guarantees for these attacks to converge to the aggregated models of the fair participants.
We conclude by providing recommendations to avoid free-rider attacks in real world applications of federated learning.
arXiv Detail & Related papers (2020-06-21T20:20:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.