FedSA: Accelerating Intrusion Detection in Collaborative Environments
with Federated Simulated Annealing
- URL: http://arxiv.org/abs/2205.11519v1
- Date: Mon, 23 May 2022 14:27:56 GMT
- Title: FedSA: Accelerating Intrusion Detection in Collaborative Environments
with Federated Simulated Annealing
- Authors: Helio N. Cunha Neto, Ivana Dusparic, Diogo M. F. Mattos, and Natalia
C. Fernandes
- Abstract summary: Federated learning emerges as a solution to collaborative training for an Intrusion Detection System (IDS).
This paper proposes the Federated Simulated Annealing (FedSA) metaheuristic to select the hyperparameters and a subset of participants for each aggregation round in federated learning.
The proposal requires up to 50% fewer aggregation rounds than the conventional aggregation approach to achieve approximately 97% accuracy in attack detection.
- Score: 2.7011265453906983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast identification of new network attack patterns is crucial for improving
network security. Nevertheless, identifying an ongoing attack in a
heterogeneous network is a non-trivial task. Federated learning emerges as a
solution to collaborative training for an Intrusion Detection System (IDS). The
federated learning-based IDS trains a global model using local machine learning
models provided by federated participants without sharing local data. However,
optimization challenges are intrinsic to federated learning. This paper
proposes the Federated Simulated Annealing (FedSA) metaheuristic to select the
hyperparameters and a subset of participants for each aggregation round in
federated learning. FedSA optimizes hyperparameters linked to the global model
convergence. The proposal reduces aggregation rounds and speeds up convergence.
Thus, FedSA accelerates learning extraction from local models, requiring fewer
IDS updates. The proposal assessment shows that the FedSA global model
converges in less than ten communication rounds. The proposal requires up to
50% fewer aggregation rounds than the conventional aggregation approach to achieve
approximately 97% accuracy in attack detection.
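The round structure described in the abstract can be sketched as a toy example. This is an illustrative sketch only, not the authors' implementation: the model, the local update, the loss, the cooling schedule, and the choice of learning rate as the tuned hyperparameter are stand-in assumptions. It shows the shape of the idea: each round, a simulated-annealing step proposes a participant subset and a hyperparameter value, the server aggregates the selected local updates FedAvg-style, and the candidate is accepted or rejected with the usual temperature-dependent probability.

import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, DIM = 20, 8
# toy per-participant datasets; a real IDS would hold local traffic features here
client_data = [rng.normal(loc=i % 3, size=(50, DIM)) for i in range(NUM_CLIENTS)]
target = np.ones(DIM)  # toy optimum the global model should approach

def local_update(global_model, data, lr):
    # stand-in for local training: move the model toward the client's data mean
    return global_model + lr * (data.mean(axis=0) - global_model)

def global_loss(model):
    # stand-in for the validation loss of the aggregated global model
    return float(np.linalg.norm(model - target))

def fedavg(updates):
    # plain FedAvg-style aggregation of the selected participants' models
    return np.mean(updates, axis=0)

def neighbor(subset, lr):
    # simulated-annealing neighbourhood move: swap one participant, jitter the learning rate
    new_subset = subset.copy()
    new_subset[rng.integers(len(new_subset))] = rng.integers(NUM_CLIENTS)
    return new_subset, float(np.clip(lr * rng.uniform(0.8, 1.2), 1e-3, 1.0))

global_model = np.zeros(DIM)
subset, lr = rng.integers(NUM_CLIENTS, size=5), 0.1  # initial candidate solution
current_loss = global_loss(global_model)
temperature, cooling = 1.0, 0.8

for rnd in range(10):
    cand_subset, cand_lr = neighbor(subset, lr)
    updates = [local_update(global_model, client_data[c], cand_lr) for c in cand_subset]
    cand_model = fedavg(updates)
    cand_loss = global_loss(cand_model)
    # acceptance rule: always keep improvements, occasionally accept worse candidates
    if cand_loss < current_loss or rng.random() < np.exp((current_loss - cand_loss) / temperature):
        global_model, subset, lr, current_loss = cand_model, cand_subset, cand_lr, cand_loss
    temperature *= cooling
    print(f"round {rnd:2d}  loss {current_loss:.4f}  lr {lr:.3f}")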
Related papers
- FedMSE: Federated learning for IoT network intrusion detection [0.0]
The rise of IoT has expanded the cyber attack surface, making traditional centralized machine learning methods insufficient due to concerns about data availability, computational resources, transfer costs, and especially privacy preservation.
A semi-supervised federated learning model was developed to overcome these issues, combining the Shrink Autoencoder and Centroid one-class classifier (SAE-CEN).
This approach enhances the performance of intrusion detection by effectively representing normal network data and accurately identifying anomalies in the decentralized strategy.
arXiv Detail & Related papers (2024-10-18T02:23:57Z) - DAMe: Personalized Federated Social Event Detection with Dual Aggregation Mechanism [55.45581907514175]
This paper proposes a personalized federated learning framework with a dual aggregation mechanism for social event detection, namely DAMe.
We introduce a global aggregation strategy to provide clients with maximum external knowledge of their preferences.
In addition, we incorporate a global-local event-centric constraint to prevent local overfitting and client drift.
arXiv Detail & Related papers (2024-09-01T04:56:41Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - FedDCT: A Dynamic Cross-Tier Federated Learning Framework in Wireless Networks [5.914766366715661]
Federated Learning (FL) trains a global model across devices without exposing local data.
Resource heterogeneity and inevitable stragglers in wireless networks severely impact the efficiency and accuracy of FL training.
We propose a novel Dynamic Cross-Tier Federated Learning framework (FedDCT).
arXiv Detail & Related papers (2023-07-10T08:54:07Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z) - Speeding up Heterogeneous Federated Learning with Sequentially Trained
Superclients [19.496278017418113]
Federated Learning (FL) allows training machine learning models in privacy-constrained scenarios by enabling the cooperation of edge devices without requiring local data sharing.
This approach raises several challenges due to the different statistical distributions of the local datasets and the clients' computational heterogeneity.
We propose FedSeq, a novel framework leveraging the sequential training of subgroups of heterogeneous clients, i.e. superclients, to emulate the centralized paradigm in a privacy-compliant way.
arXiv Detail & Related papers (2022-01-26T12:33:23Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - FedSAE: A Novel Self-Adaptive Federated Learning Framework in
Heterogeneous Systems [14.242716751043533]
Federated Learning (FL) is a novel distributed machine learning paradigm that allows thousands of edge devices to train models locally without uploading their data to a central server.
We introduce FedSAE, a novel self-adaptive federated framework that adjusts the training task of devices automatically and actively selects participants to alleviate performance degradation.
In our framework, the server evaluates each device's training value based on its training loss and then selects the clients with the highest value for the global model to reduce communication overhead (see the loss-based selection sketch after this list).
arXiv Detail & Related papers (2021-04-15T15:14:11Z) - Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z) - Free-rider Attacks on Model Aggregation in Federated Learning [10.312968200748116]
We introduce here the first theoretical and experimental analysis of free-rider attacks on federated learning schemes based on iterative parameter aggregation.
We provide formal guarantees for these attacks to converge to the aggregated models of the fair participants.
We conclude by providing recommendations to avoid free-rider attacks in real world applications of federated learning.
arXiv Detail & Related papers (2020-06-21T20:20:38Z)
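For a concrete picture of the loss-based participant selection described in the FedSAE entry above, here is a minimal sketch. The function name, client ids, and the top-k selection rule are illustrative assumptions rather than details taken from the FedSAE implementation; the sketch only shows the server ranking clients by their reported local training loss and keeping the highest-loss ones for the next aggregation round.

from typing import Dict, List

def select_by_training_loss(client_losses: Dict[str, float], k: int) -> List[str]:
    # rank clients by reported local training loss and keep the k largest
    ranked = sorted(client_losses, key=client_losses.get, reverse=True)
    return ranked[:k]

# example: pick 2 of 4 clients for the next round
losses = {"c1": 0.42, "c2": 1.37, "c3": 0.05, "c4": 0.88}
print(select_by_training_loss(losses, k=2))  # ['c2', 'c4']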
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.