Privacy-Preserving Intrusion Detection in Software-defined VANET using Federated Learning with BERT
- URL: http://arxiv.org/abs/2401.07343v2
- Date: Wed, 17 Jan 2024 09:25:48 GMT
- Title: Privacy-Preserving Intrusion Detection in Software-defined VANET using Federated Learning with BERT
- Authors: Shakil Ibne Ahsan, Phil Legg, S M Iftekharul Alam
- Abstract summary: The present study introduces a novel approach for intrusion detection using Federated Learning (FL) capabilities.
FL-BERT has yielded promising results, opening avenues for further investigation in this particular area of research.
Our results suggest that FL-BERT is a promising technique for enhancing attack detection.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The absence of robust security protocols renders VANETs (Vehicular Ad-hoc Networks) open to cyber threats, compromising passenger and road safety. Intrusion Detection Systems (IDS) are widely employed to detect network security threats. With vehicles' high mobility and diverse road environments, VANETs exhibit ever-changing network topologies, lack privacy and security, and have limited bandwidth efficiency. The absence of privacy precautions, end-to-end encryption, and local data processing in VANETs also presents many privacy and security difficulties. It is therefore crucial to assess whether a novel real-time IDS approach can be applied to this emerging technology. The present study introduces a novel approach to intrusion detection that combines Federated Learning (FL) with the BERT model for sequence classification (FL-BERT), with the significance of data privacy duly recognized. Under the FL methodology, each client has its own local model and dataset: clients train their models locally and send only the model weights to the server, which aggregates the weights from all clients to update a global model and then shares the global model's weights back with the clients. This practice keeps sensitive raw data on individual clients' devices, effectively protecting privacy. After conducting the federated learning procedure, we assessed our models' performance on a separate test dataset. The FL-BERT technique has yielded promising results, opening avenues for further investigation in this area of research. Comparing our approach against existing research, we found that FL-BERT is more effective with respect to privacy and security concerns. Our results suggest that FL-BERT is a promising technique for enhancing attack detection.
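The weight exchange described in the abstract is the standard federated averaging pattern: clients train locally, only weights travel to the server, the server averages them into a global model. The sketch below illustrates one such loop under simplifying assumptions (plain unweighted averaging, toy linear models in place of BERT, noiseless synthetic data); all names, shapes, and hyperparameters are illustrative, not details from the paper.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=10):
    """One client's local update: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg(client_weights):
    """Server-side aggregation: average the clients' weight vectors."""
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
global_w = np.zeros(3)

# Each client holds private (X, y) data that never leaves the device.
clients = []
for _ in range(4):
    X = rng.normal(size=(20, 3))
    clients.append((X, X @ true_w))

for _ in range(10):
    # Clients start from the current global weights and train locally.
    updates = [local_train(global_w, X, y) for X, y in clients]
    # Only the weight vectors reach the server; raw data stays local.
    global_w = fedavg(updates)

print(global_w)  # approaches true_w as rounds proceed
```

In a real FL-IDS deployment the local model would be the sequence classifier and the transported payload its full parameter set, but the round structure is the same.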
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
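The masking idea behind SecAgg can be illustrated with its pairwise-masking core: each pair of clients agrees on a shared random vector that one adds and the other subtracts, so the masks cancel in the server's sum while no individual update is revealed. The sketch below (small integer vectors, a trusted in-memory "agreement" step, no dropout handling or double masking) is a didactic simplification, not the SecAgg or ACCESS-FL protocol.

```python
import numpy as np

rng = np.random.default_rng(42)
n_clients, dim = 4, 5
updates = [rng.integers(0, 10, size=dim) for _ in range(n_clients)]

# Pairwise mask agreement: each client pair i < j shares a random m_ij.
pair_masks = {(i, j): rng.integers(0, 100, size=dim)
              for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    """Client i adds masks toward higher-indexed peers, subtracts lower."""
    m = updates[i].copy()
    for j in range(n_clients):
        if j > i:
            m += pair_masks[(i, j)]
        elif j < i:
            m -= pair_masks[(j, i)]
    return m

# The server only ever sees masked vectors...
masked = [masked_update(i) for i in range(n_clients)]
# ...yet their sum equals the sum of the true updates: all masks cancel.
aggregate = sum(masked)
print(aggregate)
```

The "double" in double-masking adds a second, self-generated mask per client (shared via secret sharing) so the sum can still be recovered when clients drop out mid-round, which this sketch omits.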
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting.
No local data needs to be shared, and privacy can be well protected.
We propose a model update based federated averaging algorithm to defend against Byzantine attacks.
arXiv Detail & Related papers (2022-08-03T03:13:19Z)
- Privacy-Preserving Federated Learning via System Immersion and Random Matrix Encryption [4.258856853258348]
Federated learning (FL) has emerged as a privacy solution for collaborative distributed learning where clients train AI models directly on their devices instead of sharing their data with a centralized (potentially adversarial) server.
We propose a Privacy-Preserving Federated Learning (PPFL) framework built on the synergy of matrix encryption and system immersion tools from control theory.
We show that our algorithm provides the same level of accuracy and convergence rate as the standard FL with a negligible cost while revealing no information about clients' data.
arXiv Detail & Related papers (2022-04-05T21:28:59Z)
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
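A participation-frequency-weighted aggregation can be sketched as follows. This is an illustrative weighting scheme suggested by the one-line summary, not FedFreq's actual rule, and every name and value in it is an assumption.

```python
import numpy as np

def freq_weighted_aggregate(updates, participation_counts):
    """Weight each client's update by how often it has joined training.

    updates: list of weight vectors from this round's participants.
    participation_counts: cumulative rounds each client has taken part in.
    """
    w = np.asarray(participation_counts, dtype=float)
    w /= w.sum()  # normalize the weights so they sum to 1
    return sum(wi * ui for wi, ui in zip(w, updates))

updates = [np.array([1.0, 1.0]), np.array([3.0, 5.0])]
# Client 1 has participated 3x as often, so its update dominates.
agg = freq_weighted_aggregate(updates, participation_counts=[1, 3])
print(agg)  # weighted toward client 1's update
```

The intuition is that clients seen more often have had more influence vetted over time, which matters when UAV clients differ sharply in data volume and distribution.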
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
- PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy [14.678119872268198]
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.
Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
arXiv Detail & Related papers (2021-10-22T04:08:42Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Federated Intrusion Detection for IoT with Heterogeneous Cohort Privacy [0.0]
Internet of Things (IoT) devices are becoming increasingly popular and are influencing many application domains such as healthcare and transportation.
In this work, we look at differentially private (DP) neural network (NN) based network intrusion detection systems (NIDS) to detect intrusion attacks on networks of such IoT devices.
Existing NN training solutions in this domain either ignore privacy considerations or assume that the privacy requirements are homogeneous across all users.
We show that the performance of existing differentially private methods degrades for clients with non-identical data distributions when clients' privacy requirements are heterogeneous.
arXiv Detail & Related papers (2021-01-25T03:33:27Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
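The two ingredients named above, per-example gradient clipping and noise addition, can be sketched in isolation. The clip norm and noise multiplier below are illustrative values, and the sketch omits the privacy accounting and minibatch sampling a real DP-SGD implementation requires.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                rng=None):
    """One DP-SGD aggregation: clip each gradient, average, add noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return mean_grad + noise

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # norms 5.0 and 0.5
noisy = dp_sgd_step(grads)
print(noisy)
```

Clipping bounds each example's influence (which can distort large, informative gradients) and the added noise further perturbs the update, which is one intuition for why DP training can trade away robustness.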
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU's prediction accuracy is 90.96%, higher than that of advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.